Recently, a paper claiming a break of the SIMON-32/64 cryptosystem appeared on a pre-print archive, but was soon withdrawn. Many commenters have focused on the authorship (the author names are, at best, pseudonyms; and the choice of references to Judaism and Christianity leaves an odd taste in the mouth), but what of the claims? Well, they are trivially dismissed too.
The paper claimed that the break was so "dangerous" that they would not reveal the method itself; instead, they include a table ("Table 6") which they claimed could only be created if they had in fact broken SIMON-32/64 in a way that let them recover keys based on 2 chosen plaintexts with a 2.5% (?) chance of success. They claimed that they did roughly the following:
- Choose four 4-byte blocks from a chosen text (supposedly, the Project Gutenberg version of the King James Bible, AKA pg10.txt)
- Call the first 2 blocks the "plaintext" and the second two blocks the "cyphertext". If the two plaintext blocks are equal, go back to step 1
- Find a SIMON-32/64 key that encrypts the plaintext to the cyphertext, using their secret method
- If such a key is found, output it
Aside from one error in Table 6, where the hexadecimal value of a cyphertext is shown incorrectly, the table does check out (though some of the 4-grams don't appear anywhere in pg10.txt).
So, is it proof of a serious break in SIMON? No. There's another way to generate these values:
- Choose a 64-bit SIMON key arbitrarily
- Choose a first block of "plaintext" P1. Check if E(P1) is also a block. If not, continue with a fresh P1 value.
- Choose a second block of "plaintext" P2 from those not yet checked. Check if E(P2) is also a block. If not, continue with a fresh P2 value.
- If you reach this step, you have created a "Table 6" entry, with two plaintexts, two ciphertexts, and a key. The plaintext and cyphertext all come from your chosen text.
- Repeat from step 1 until you have enough entries that you have proven your point.
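The procedure above is easy to try. Here is a minimal Python sketch: the SIMON-32/64 code follows the published reference specification, and the tiny block set in `find_matching_blocks` is a stand-in for the ~48,000 distinct 4-grams of pg10.txt, which you would load from the file in practice.

```python
# Sketch of the "generate Table 6 entries" search: pick a random SIMON-32/64
# key, then look for 4-byte blocks from the chosen text whose encryptions are
# also blocks from the text.
Z0 = [int(b) for b in
      "11111010001001010110000111001101111101000100101011000011100110"]

def rotl16(x, r):
    return ((x << r) | (x >> (16 - r))) & 0xFFFF

def rotr16(x, r):
    return ((x >> r) | (x << (16 - r))) & 0xFFFF

def expand_key(k3, k2, k1, k0):
    # Derive the 32 round keys from the four 16-bit key words (k3 high).
    k = [k0, k1, k2, k3]
    for i in range(4, 32):
        tmp = rotr16(k[i - 1], 3) ^ k[i - 3]
        tmp ^= rotr16(tmp, 1)
        k.append((~k[i - 4] & 0xFFFF) ^ tmp ^ Z0[(i - 4) % 62] ^ 3)
    return k

def encrypt(block, round_keys):
    # SIMON-32 is a 32-round Feistel network on 16-bit halves.
    x = int.from_bytes(block[:2], "big")
    y = int.from_bytes(block[2:], "big")
    for rk in round_keys:
        x, y = y ^ (rotl16(x, 1) & rotl16(x, 8)) ^ rotl16(x, 2) ^ rk, x
    return x.to_bytes(2, "big") + y.to_bytes(2, "big")

def find_matching_blocks(fourgrams, round_keys):
    # Steps 2-3 above: return the blocks whose encryption is also a block.
    grams = set(fourgrams)
    return [g for g in grams if encrypt(g, round_keys) in grams]
```

With the real 4-gram set from pg10.txt and a loop over random keys, this is the whole generator; about one key in ten yields a usable pair.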
In fact, without any attempt at optimization (aside from trivial parallelization), an i7-4790k can find about a thousand examples in 8 seconds; about 10% of all keys yielded at least one set of matching blocks.
There are around 48,000 distinct 4-grams in "pg10.txt", so for any given key and 4-byte plaintext, there's about a 1-in-90,000 chance for it to encrypt to some other 4-gram. Since the probability is independent for each 4-gram, the odds of getting 1 are 1/2, and the odds of getting 2 are 1/4. This is an extremely rough calculation, but it's not too far off the 1-in-10 actually obtained.
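For what it's worth, modeling the number of matches per key as Poisson-distributed sharpens the estimate, using only the 48,000 figure quoted above:

```python
# Back-of-envelope check: treat "E(block) is also a 4-gram" as independent
# events across the ~48,000 distinct 4-grams of pg10.txt.
from math import exp

N = 48_000            # distinct 4-grams (figure from the text above)
p = N / 2**32         # chance one block encrypts to some 4-gram, ~1 in 90,000
lam = N * p           # expected number of matching blocks per key

p_ge1 = 1 - exp(-lam)              # at least one matching block
p_ge2 = 1 - exp(-lam) * (1 + lam)  # at least two (a usable pair)

print(f"lambda = {lam:.2f}")
print(f"P(>=1 match) = {p_ge1:.2f}, P(>=2 matches) = {p_ge2:.2f}")
```

P(≥2) is the relevant number for the generator, since each "Table 6" entry needs two matching blocks, and it comes out at almost exactly the 1 in 10 observed.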
The attached program, which adapts an implementation of SIMON from GitHub, can be built with g++-6 on Linux. It needs "pg10.txt" in the current directory. For parallelization, pass "-fopenmp". `trolled.txt` is one possible output of the program, and the few entries that I back-checked against an independent (Python) SIMON implementation, also from GitHub, checked out.
I just hope that, whatever the authors actually did to make "table 6", it didn't really take 120 days on two cluster computers.
Update: Several commenters believe the paper takes two 8-byte blocks from the chosen text. If so, then even fewer of the blocks shown actually match "pg10.txt". (I based my "four 4-byte blocks" assumption on the appearance of "LORDhard" as a cyphertext.) Under the 8-byte interpretation, my program would take about 48,000 times longer: when you find two matching texts, the odds that they sit exactly one location apart, out of ~48,000 locations, are about 1 in 48,000. However, since their Table 6 is full of 8-grams (and even 4-grams) that don't come from pg10.txt, I don't feel TOO bad that my program presents examples that aren't either.
I've finally implemented a solution to my woes with dual extruder temperature control in Cura. In any case, I'm now having much more luck with dual extrusion and the temperature graphs that can be seen in octoprint look much more sensible.
The problem, as many have suspected, was Cura and GPX disagreeing on which "T" (hotend "tool") numbers applied to "M104" / "M109" temperature commands.
My best understanding of the problem is this:
Cura thinks that "T#" is modal only when it is the only (first?) thing on a line, while GPX thinks that e.g., "M109 T1 S200" contains a modal "T1".
This causes dual extrusion code to send temperature commands to the wrong nozzle.
This script attempts to work around the problem by tracking which "T#" lines Cura thinks are modal, and then attaching that "T#" again to any M104/M109 lines that would require it.
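In outline (this is a simplified sketch of the idea, not the actual fixtempcommands.py), the transformation looks like this:

```python
# Remember the last standalone "T#" line (which Cura treats as modal) and
# re-attach that tool number to any M104/M109 that doesn't carry its own "T",
# so GPX heats the nozzle Cura intended.
import re

def fix_temp_commands(gcode_lines):
    current_tool = None
    fixed = []
    for line in gcode_lines:
        code = line.split(";", 1)[0].strip()   # ignore gcode comments
        if re.fullmatch(r"T\d+", code):
            current_tool = code                # Cura's modal tool selection
        elif (code.startswith(("M104", "M109"))
                and not re.search(r"T\d", code) and current_tool):
            line = f"{code} {current_tool}"    # make the target explicit
        fixed.append(line)
    return fixed
```

For example, after a bare `T1` line, a later `M104 S200` becomes `M104 S200 T1`, while `M109 T0 S175` is left alone because it already names its tool.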
What seems incomplete about this story is that Cura writes extrusions as "E#", which should also move the currently selected "T#" extruder motor. But GPX doesn't start extruding T0 with the sequence

T1
M104 T0 S175
G1 F1500 E-0.6 ; retract
So something's still missing from the explanation.
The PostProcessingPlugin is released under the terms of the AGPLv3 or higher.
It works with Ultimaker Cura 4.0, but may not work with other software or versions.
On my Linux system it is installed to ~/.local/share/cura/4.0/scripts/fixtempcommands.py but your mileage may vary. The installation of Cura scripts seems a bit underdocumented, or at any rate I didn't find the documentation. The location is under one of the two folders opened when you "Help > Show Configuration Folder".
Having done so, simply enable the script to post-process your gcode in "Extensions > Post Processing > Modify G-Code". Just choose the script from the "Add a script" dropdown.
Life on earth is characterized by exponential growth until the exhaustion of resources. When we imagine what other intelligent life might look like, some are willing to imagine "what if it's not based on DNA" or on carbon, or "doesn't require liquid water". But everyone imagines exponential growth; expanding from the surface of one planet to the whole galaxy seemingly frees you to enjoy about 25 more orders of magnitude of growth vs staying bound to the surface of a single planet.
Nikolai Kardashev created the Kardashev Scale to characterize civilizations by how much power they use—from a "Type I" civilization which captures all the solar energy falling on their home planet, to a "Type III" civilization which captures the whole effective power of a whole galaxy. In the early 21st century, we are far short of even being "Type I" (so we're effectively "Type 0"), but the future story we imagine for ourselves is exponential growth. Population has grown by, say, an order of magnitude in the last 400 years (from 800 million to 8 billion, give or take), so we should only need some 10,000 years to get to "Type III" when we simply assume continued exponential growth.
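The 10,000-year figure is just orders-of-magnitude arithmetic, spelled out below. The power figures are rough textbook values of my choosing, not anything precise: humanity today uses on the order of 2×10¹³ W, while a galaxy radiates on the order of 4×10³⁷ W.

```python
# How long to climb from "Type 0" to "Type III" at one order of magnitude of
# growth per 400 years (the population-growth rate assumed above)?
from math import log10

P_NOW = 2e13        # watts; current human power use (rough assumption)
P_TYPE_III = 4e37   # watts; rough luminosity of a whole galaxy (assumption)

orders = log10(P_TYPE_III / P_NOW)
years = orders * 400   # one order of magnitude per 400 years

print(f"{orders:.0f} orders of magnitude -> ~{years:,.0f} years")
```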
Barring the discovery of some future-physics warp drive, it is optimistic to think we could cross the galaxy once in a million years (at .2c, say), so somewhere in the next 10,000 years our exponential growth has to stop, constrained by the cubic way that light cones work.
A variant of this "we must expand" directive is assumed by the Drake Equation: a long-lived civilization is, by definition, so big it can't help but "leave its mark" on its containing galaxy in the form of radio signals, if nothing else. Superficially, it seems there should be many such civilizations, and they should be easy to detect if they are hard-wired for exponential growth like us, since they and their artifacts should be literally everywhere.
The Fermi Paradox, then, invites us to explain why we have no evidence of these other civilizations.
I think the answer is simple: Exponential growth, like a viral infection, is unstable. Whether it's on a scale of 10 or 400 generations, there is a final wall (a "great filter", to borrow Robin Hanson's term). Only growth that is polynomial or less is sustainable on million-year timescales, particularly if (in a galaxy full of life) you actually bump fairly quickly into another civilization with a moral and/or de facto claim on the resources in some other region of space.
So we end up with a galaxy that looks like Liu Cixin's The Dark Forest: quiet civilizations growing in volume and power consumption not at all, or only at logarithmic rates. Any hint of exponential growth, or possibly even polynomial growth, would require a response. For all we know, such a response was set in motion circa 1800 (when the atmospheric changes of the industrial revolution could have been detected) in the form of a diverted near-earth asteroid, scheduled to hit around 2100.
There must be a story in this somewhere, here's one sketch: Circa 2100, the earth's surface was rendered uninhabitable by asteroid impact. However, the nascent Mars colony and some orbital habitations survived, and by 2240 are on somewhat stable footing and ready to restart exponential growth through the solar system and into the Oort cloud. Society looks like we think it should: multicultural, accepting of all genders and sexual identities, egalitarian, access to health care, etc. Our point of view character will be a young person just coming of age in the largest city on Mars, presently on a solo tour of the solar system to rival Golden Age science fiction.
While flying by some geologically interesting moon, our narrator's ship is struck by some matter ejected from the surface. This matter forms itself into a duplicate of our narrator, and at length they learn to communicate. Let's call the alien Alice and the narrator Bob, just to make everything simpler.
Just like Bob is coming of age in the human society of Mars, Alice is (was) coming of age in their own society. Alice's society are also a bunch of exponentials who evolved in the upper atmosphere of Jupiter. But their philosopher-scientists saw the trap of exponential growth and found a solution: They adapted themselves to live at ever-slowing rates, most recently building organic computers to survive deep in the atmosphere of Jupiter, simulating a society of a trillion Jovians at a rate of about 1 day per 1,000 real-time years.
Alice, having accelerated themselves to realtime (and beyond, when they were learning Bob's language), is considered a criminal in their society and can never return without facing the punishment of being slowed all the way to zero until they have paid back all the time they "stole".
At risk of becoming too didactic, Alice tells Bob everything that is known about different societies: The exponentials, who mostly flash and fade; the polynomials, young races who may yet adapt and last; the logarithmic, long-lived civilizations who form the backbone of galactic governance; and the rumored constants, who long ago disappeared, perhaps into quantum computing devices made of dark matter.
In any case, Alice tells Bob that if they continue to display exponential growth as a species capable of space travel, there will be terrible and large-scale retaliation from galactic culture, such as the catalyzed supernova of Sol itself; as Alice's species would end up as collateral damage, the Jovians would be in the unfortunate position of having to commit genocide against the humans first, just for self-preservation. Presumably some mid-level Jovian military types are also (lawfully) accelerated to real-time to monitor the situation.
A narrative discontinuity, and Bob is recovered from their wrecked space ship, with no sign of Alice (or is it the other way around?). Here ends the novella.
If the story should continue, we might see Bob trying to effect political change among the humans, or hear the problems faced by Alice's species' scientists, who have the eternal problem of growing their computing capacity with ever-decreasing subjective time creating impossible deadlines. (Or are the scientists lawfully accelerated, like the military?) How about drawn out parliamentary scenes where the logarithmic species debate how to deal with the infestation in Sol System? Or perhaps we go on a wild goose chase for the vanished constants, believing they have the secret of zero-point energy or the like.
In the last two years, a new internet provider has entered the market in my hometown. Allo Communications has been laying fiber optic cables all over town for what seems like forever, and finally notified me last month that my neighborhood was ready for installation. This week, I got the service installed. Overall, I'm happy, though I'll be paying just a few bucks more a month than I'd anticipated.
The good:
- 20, 300, and 1000Mbit/s rates available ($45 - $100/month base rate)
- Router/WiFi AP included in base rate
- Static IP is just $5/month
- symmetric connection speeds
- helpful staff
- No bandwidth caps, but "if you’re breaking the law, be assured you’ll be hearing from us."
- TV and telephone, if you're into that sort of thing
The bad:
- Unless you pay for a static IP, you're behind "Carrier Grade NAT", so no "I'll cope with dynamic IP but still be able to SSH in"
- Your existing equipment might not be able to keep up
- Given Google's chat product also called "allo", these guys can be hard to search for
- No bandwidth caps, but "if you’re breaking the law, be assured you’ll be hearing from us."
- Small service area (9 Nebraska towns + Fort Morgan Colorado)
- No IPv6(!?), possibly deploying v6 in 2020.
- My static IP is listed in spamhaus "PBL" (but there is a self-service removal process, so it's fine)
The main headache I had was that first "the bad" bullet point: Despite having a phone app(!!) that acts like it can set up port forwarding, nothing I did could open up an incoming port. Staff were interested in helping me (including calling me back later to try one last idea), but ultimately the solution was just to add the Static IP option to my service.
The lesser headache, and the one which was totally my problem to solve, is that my firewall and NAT were being handled by an older Buffalo wifi access point, a WZR-HP-G300NH2 with an ancient version of DD-WRT. It simply couldn't get beyond about 160Mbit/s when doing NAT/forwarding. So I rejiggered my wires a little bit so that my i7 Linux desktop would take over those tasks. Additionally, all the "modern wifi" devices were already connecting to a newer Netgear R6220 in Access Point mode (routing functions disabled).
I had a second headache, which is apparently a decade-old bug in Linux's Intel e1000e driver. I was getting really poor rates on my internal network, and the kernel was logging "eth2: Detected Hardware Unit Hang". There are two main fixes that the internet suggests: modifying something about power saving modes, or disabling several advanced features of the NIC. In my case, I determined that using "ethtool --offload eth2 tso off" to disable tcp-segmentation-offload resolved the problem I was seeing. What's weird is that this NIC, eth2, is the one that I had been using all along; I had lots of network traffic on it for months. But the message never appeared in my local logs before I started also using "eth1" and doing NAT/packet forwarding yesterday.
Now from my desktop I get 960Mbit/s down, 720Mbit/s up (according to speedtest-cli), and 6ms pings to my office. My fastest wireless device gets somewhat over 200Mbit/s in each direction. Connecting to a VNC session at my office feels just as good as being there, which is primarily due to the extremely short packet turnaround; the bandwidth is a nice bonus though.
Right now it all feels pretty magical, and I'm looking forward to calling the cable company (Spectrum) on Monday to cancel the service. I'm paying more (not quite 2x as much) but getting MUCH more service.
I'm contemplating buying one of these little embedded PCs with 2 NICs, they cost around $200 with RAM and a small SSD and it is claimed that they can forward at gigabit rates. They're literally just PCs inside (x86 CPU and BIOS booting), so all the headaches that attend little embedded ARM systems are nonexistent. But is an Intel Celeron "J1800" CPU actually up to pushing (including NAT) a quarter million packets a second?
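For reference, the packet-rate arithmetic behind that "quarter million packets a second": on gigabit Ethernet each frame also spends 8 bytes of preamble and a 12-byte inter-frame gap on the wire, so the packet rate depends heavily on frame size.

```python
# Packets per second on a gigabit link for a given Ethernet frame size.
def pps(frame_bytes, link_bps=1_000_000_000):
    wire_bits = (frame_bytes + 8 + 12) * 8   # frame + preamble + inter-frame gap
    return link_bps / wire_bits

for size in (64, 512, 1518):
    print(f"{size:5d}-byte frames: {pps(size):>12,.0f} pps")
```

A quarter million pps corresponds to roughly 500-byte frames; the worst case (minimum-size 64-byte frames) is about 1.49M pps, which is a much taller order for a little Celeron.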
I have a bittorrent client running with a bunch of Linux ISOs being seeded. I saw peak download rates of up to 92MB/s and typically 30-60MB/s, which is great. Right at the moment it's only clocking about 2MB/s of data "up"—the torrents seem to be pretty adequately seeded. I'm doing this primarily to see whether Allo treats "any traffic identifiable as bittorrent" as something that they'll tell you off about, or whether they are trying to distinguish "licit" from "illicit" when it comes to bittorrent traffic. I'm not sure which idea I like less.
I've made some upgrades on my Qidi printer:
- Added a borosilicate glass bed (random amazon seller)
- Upgraded to Micro Swiss MK10 All Metal Hotend
In the process of installing the new hotend, I had to align my extruders, something I had been dreading. Well, I should have done this a long time ago, because now all my problems with the left nozzle scraping over the part printed by the right nozzle are (knock on wood!) cured.
I haven't noticed a huge difference with the all-metal hotend, but hopefully the new nozzles will also have a long lifetime. The advantages are supposed to be reduced stringing (which I have noticed) and improved extrusion rates (I haven't touched my current extrusion / feed rates, which were working with the original PTFE-lined hotends).
The glass bed is pretty good. I have a shim installed so that I didn't have to use the bed leveling adjustment to take up the ~3mm difference in Z. This means I gave up 3mm of Z travel instead, but that's not a big deal.
Glass is a lot less forgiving of bad bed leveling, so it took a while to get that dialed in. (It doesn't help that the adjustment screws defy my intuition every time about which direction of rotation moves the bed up!) With PLA, I am using a "60°C" bed temperature and with ABS I'm using "110°C". It's worth noting that the top surface of the glass is much slower to actually reach its terminal temperature than the sensor, so pre-heating is recommended. I believe the terminal temperature of the top glass surface is also lower than the top of the aluminum, so you may need to tweak things a bit.
For adhesive, I'm using gluestick on glass for most things, and ABS juice on glass for ABS. Prints tend to be tough to remove, so maybe I need to keep experimenting.
As far as dual extrusion goes, I continue to be vexed by an apparent Cura (but maybe GPX) bug with temperature setting: It appears that Cura omits T-numbers for some temperature commands, and something in the Cura -> GPX -> Qidi toolchain applies a temperature command to the wrong hotend. Since Cura works hard to set nozzles back to a lower temperature when not in use, this can instead cause the right extruder (typically) to get set to a low temperature and then used for extruding, which can cause jams, failed prints, and delamination.
I have one machine which repeatedly wakes its display at night. My assumption is that this is due to spurious movement from the mouse.
There is not an explicit way to configure Linux (X11) so that it doesn't exit DPMS sleeps on mouse movement, but I found a tip on the internet to disable the mouse device at the XInput level when activating the screensaver, and reactivating it when the screensaver exits.
I don't run a "screensaver application" so wiring into things like dbus for notification doesn't work. Instead, I wrote an X program which polls the requested DPMS state and enables or disables the mouse device accordingly.
It remains to be seen whether this solves the problem which causes the display to turn on multiple times per night, but it might just fix it.
You'll need to customize the program by changing the "device_name" string in the source to match your own input device. If you have multiple input devices, then more extensive work will be required.
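For the curious, the core loop amounts to something like this Python sketch. The real program talks to X directly rather than shelling out, and the device name here is a made-up example; substitute whatever `xinput list` reports on your system.

```python
# Poll the DPMS state via "xset q" and toggle the pointer with "xinput":
# disable the mouse while the monitor is asleep so spurious movement can't
# wake it, and re-enable it when the display comes back on.
import subprocess
import time

DEVICE_NAME = "pointer:Logitech USB Mouse"   # hypothetical; see `xinput list`

def monitor_is_off(xset_q_output):
    # The DPMS section of `xset q` reports "Monitor is On/Off/in Standby/...".
    return any(s in xset_q_output
               for s in ("Monitor is Off", "Monitor is in Standby",
                         "Monitor is in Suspend"))

def poll_loop(interval=2.0):
    mouse_disabled = False
    while True:
        out = subprocess.run(["xset", "q"],
                             capture_output=True, text=True).stdout
        if monitor_is_off(out) and not mouse_disabled:
            subprocess.run(["xinput", "disable", DEVICE_NAME])
            mouse_disabled = True
        elif not monitor_is_off(out) and mouse_disabled:
            subprocess.run(["xinput", "enable", DEVICE_NAME])
            mouse_disabled = False
        time.sleep(interval)
```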
License: GPL v3+
The Drake-Howard equation is:
N = R* × f# × ne × fq × dl × fc × L

where N = the number of elder gods in the multiverse which might wish to consume our souls and
- R* = The number of regular languages
- f# = The fraction of regular languages that match at least one string
- ne = The fraction of self-Gödelizing strings per regular language that matches at least one string
- fq = The fraction of self-Gödelizing strings that are also functional quines
- dl = The sum to infinity of the average measure of quine islands divided by Levenshtein distance
- fc = The fraction of all functional quines which can be realized by Standard Model matter
- L = The time between Second Order Grand Conjunctions
As R* is infinite (actually, א0) and all other values are nonzero, it follows that the number of soul-devouring elder gods is infinite. The Drake-Howard Equation is also sometimes called Rule 110.
Distribution: Primary and residual human resources
To whom it may concern:
Please stop stuffing the ideas box with proposed solutions to anthropogenic climate change. As you know, the worst case consequences of ACC are predicted to be:
- Hundreds of thousands of premature deaths per year
- Millions of QALYs lost per year
- Hundreds of millions of environmental migrants
- Regional and possibly large-scale war over access to resources, including potable water
- Trillions of Euros of property damage and loss
- Loss of low-lying areas to sea-level change
The ideas box is for discussion of SERIOUS threats to humanity, such as
- Netflix cancellation of all its superhero shows
- Possible interference by Blue Hades in the Oscars awards
- Failed firmware updates on conspicuous-consumption "fitness" devices
- Those demonic wasp-things that are gestating in the abdomens of several world leaders
- Irritating advertisements interrupting our free music streaming
Thank you for your attention in this matter.
— The Management
All older entries
Website Copyright © 2004-2018 Jeff Epler