Our total distance traveled was lower again this year, as was the fuel economy. Once again, I'm disappointed with the overall fuel economy: it's the lowest number of miles, but only the third-lowest number of gallons consumed.
Distribution: General / non-confidential
It pleases Us to announce that the following experimental research programmes will each receive £10,000,000, guaranteed and inflation-adjusted annually, for three years:
Two-diode imaging (Dean, Salem, Chen)
Single-pixel cameras have been an area of research for two decades, and it has been understood for even longer how to use a reverse-biased LED to measure ambient light levels. Combining the two technologies with SCORPION STARE could potentially allow 'STARE devices to be miniaturized into millimeter-scale passive devices.
Deep Stare (Wu, Angelov, Soleymani)
Technologies like Deep Convolutional Networks and AGAINs have made it possible to create plausible images from archetypes. Deep Stare, if its promise can be realized, will allow remote / non-line-of-sight deployment of STARE technology against enemies of the state. All that is required is an accurate image-generating model, coupled with petascale iteration, to neutralize a threat based on a description. Imagine that you can simply input "a 25-year-old caucasian male insurgent stands in front of a High street shop, waiting for the right moment to unleash the demonic threads even now coiling in his belly", and before he can act, he is turned into radioactive slag.
Stereo acoustic imaging (Toth, Ben-Amram, Toth)
It will happen to one woman out of four in her lifetime: that hoped-for pregnancy converted to tragedy when her gynecologist explains that the embryo is (the wrong kind of) nonhuman. Stereo acoustic imaging promises to allow easy, outpatient treatment for our most precious resource, fertile females.
Civilian Applications
We also wish to announce funding of the following programmes to create civilian applications for SCORPION STARE technology. Each programme will receive £100,000 in the first year, with possible increase to £500,000 per year based on progress. In exchange, the Crown is awarded 27% of eventual revenues, in perpetuity.
- Chain-free "chainsaw"
- Single-use heating pads
- Assisted suicide VR Goggles
- Insect Control Cameras
- Artificial Tanning Beds
His Majesty would like to congratulate all surviving entrants for their inventive grant proposals.
There are lots of instructions for setting up a Stratum 1 NTP server on your Raspberry Pi. A lot. After much research, I found a simple configuration that uses a single ntp reference clock and does not involve gpsd, but uses both NMEA and PPS for the most accurate timekeeping possible. NMEA+PPS is a mode of NTP's "Generic NMEA GPS Receiver".
Here's what worked for me:
Start with a Pi 3 and Raspbian Stretch. Add a GPS with TTL-level PPS and 9600 baud NMEA outputs.
Hook up your GPS: GND and +3.3V or +5V (according to the specifics of your GPS) to any matching pin, the GPS's TX output to the Pi's RX input on pin 10 of the header, and the GPS's PPS output to the Pi's pin 12. (You can hook up your GPS's RX input to the Pi's TX output on pin 8 if you like, but I don't think this is necessary.)
Using raspi-config, disable the serial console and enable the serial port hardware.
Manually edit /boot/config.txt to add: dtoverlay=pps-gpio — If you have to (or prefer to) use a different pin for pps, you can apparently specify it as dtoverlay=pps-gpio,gpiopin=##, where ## is the internal numbering of the GPIO pins, not the number on the 40-pin header.
Manually edit /etc/modules and add a line that reads pps-gpio — According to some sources, this step is not necessary.
Install ntpd with apt-get.
Delete all the content in /etc/dhcp/dhclient-exit-hooks.d/ntp, leaving an empty file.
Remove the file /run/ntp.conf.dhcp if it exists.
Edit /etc/udev/rules.d/99-com.rules. To each of the two lines that ends with , SYMLINK+="serial%c", append , SYMLINK+="gps%c"
Create /etc/udev/rules.d/99-ppsd.rules with the content SUBSYSTEM=="pps", GROUP="dialout", MODE="0660", SYMLINK+="gpspps%n"
In /etc/ntp.conf, add a stanza to access the GPS:
    # gps clock via serial /dev/gps0 and /dev/gpspps0
    server 127.127.20.0 minpoll 3 maxpoll 3 mode 16 burst iburst prefer
    fudge 127.127.20.0 refid GPS time2 +.250 flag1 1

(Leave the "pool" line(s) or other ntp server lines; if your pi doesn't have a battery-backed RTC, you need a way to get the correct time initially, before a GPS fix may be available!)
"time2" doesn't seem critical with this setup, because the PPS time is preferred over the serial reception time. "minpoll", "maxpoll", "burst" and "iburst" may be superstitious and unnecessary.
Reboot now and have a look at `ntpq -c peers`. You should see something like this:
         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
    oGPS_NMEA(0)     .GPS.            0 l    1    8  377    0.000   -0.001   0.003
    +ntp.u           .GPS.            1 u    2    8  377    1.104    0.013   0.064
"GPS_NMEA" is selected as peer and is using PPS (this is what "o" in the first column means). delay, offset, and jitter should all be extremely small (Here, .001 is 1 microsecond). "ntp.u" is my other local stratum-1 NTP server. The "+" indicates it's in good agreement with the local GPS. If you use pool, you will see multiple lines here; some may have "+" and some may have "-".
If something's not working, you will get " " (blank), "-", or "x" next to GPS_NMEA. If it's got "*", then NMEA is working but PPS isn't. Now you get to do things like debug whether the PPS signal is working properly according to ppstest, whether NMEA messages are actually coming in at 9600 baud, etc. Or you can follow one of those other guides. :wink:
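If you want to watch this from a script (say, to alert when the GPS peer loses its "o" tally), the peers output is easy enough to parse. A minimal sketch against the column layout shown above; the function name is mine, not anything from ntpq:

```python
def parse_peers(text):
    """Parse `ntpq -c peers` output into {remote: {tally, offset_ms}}.

    Assumes the standard layout: a one-character tally code glued to
    the remote name, then refid, st, t, when, poll, reach, delay,
    offset, jitter columns (delay/offset/jitter in milliseconds).
    """
    peers = {}
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith(("remote", "=")):
            continue  # skip the header and separator lines
        tally, fields = line[0], line[1:].split()
        peers[fields[0]] = {"tally": tally, "offset_ms": float(fields[8])}
    return peers
```

A tally of "o" on the GPS peer means PPS is in use; anything else is worth a closer look.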
Here's how the local time to GPS offset has looked over the last 10 hours or so — I find it awesome that my computer appears to be synchronized to GPS to within ±5 microseconds almost all of the time:
I was inspired by this watch face design (I think that's the original version) and by the arrival of an "OCXO" (oven-controlled crystal oscillator), a very stable timekeeping circuit, to finally make an electronic clock.
Accuracy: Between hardware and software tuning, the OCXO keeps time with an accuracy of possibly better than 100 microseconds per day, losing or gaining well under half a second per year. (Yes, I'm deliberately ignoring a lot about crystal aging here!)
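For scale, here's the arithmetic on what 100 microseconds per day adds up to (the 100 µs/day figure is my assumed bound, not a measured number):

```python
# If the clock gains (or loses) at most 100 microseconds each day,
# the worst-case accumulated error over a year is:
drift_per_day = 100e-6             # seconds of error per day (assumed bound)
drift_per_year = drift_per_day * 365
print(drift_per_year)              # ~0.0365 seconds: well under half a second
```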
Precision: The time displayed is to the nearest minute, and the touchscreen setting mechanism is (deliberately?) poor, making it hard to set the time closer than ±2 minutes or so. Oh, and it takes a good fraction of a second to update the screen anytime it changes. (The best way to set it seems to be to wait until a few seconds before 6AM/6PM and plug it in, since it boots with that time showing.)
The dial as an animation (1 revolution = 12 hours)
Recently, a paper claiming a break of the SIMON-32/64 cryptosystem appeared at a pre-print archive, but was soon withdrawn. Many commenters have focused on the authorship (the author names are, at best, pseudonyms; and the choice of references to Judaism and Christianity leaves an odd taste in the mouth), but what of the claims? Well, they are trivially dismissed too.
The paper claimed that the break was so "dangerous" that they would not reveal the method itself; instead, they include a table ("Table 6") which they claimed could only be created if they had in fact broken SIMON-32/64 in a way that let them recover keys based on 2 chosen plaintexts with a 2.5% (?) chance of success. They claimed that they did roughly the following:
- Choose four 4-byte blocks from a chosen text (supposedly, the Project Gutenberg version of the King James Bible, AKA pg10.txt)
- Call the first 2 blocks the "plaintext" and the second two blocks the "cyphertext". If the two plaintext blocks are equal, go back to step 1
- Find a SIMON-32/64 key that encrypts the plaintext to the cyphertext, using their secret method
- If such a key is found, output it
Aside from one error in Table 6, where the hexadecimal value of a cyphertext is shown incorrectly, the table does check out (though some of the 4-grams don't appear anywhere in pg10.txt).
So, is it proof of a serious break in SIMON? No. There's another way to generate these values:
- Choose a 64-bit SIMON key arbitrarily
- Choose a first block of "plaintext" P1. Check if E(P1) is also a block. If not, continue with a fresh P1 value.
- Choose a second block of "plaintext" P2 from those not yet checked. Check if E(P2) is also a block. If not, continue with a fresh P2 value.
- If you reach this step, you have created a "Table 6" entry, with two plaintexts, two ciphertexts, and a key. The plaintext and cyphertext all come from your chosen text.
- Repeat from step 1 until you have enough entries that you have proven your point.
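The loop above is easy to sketch. Since I don't want to pass off a from-memory SIMON as the real thing, the cipher here is an explicit stand-in (any keyed 4-byte-to-4-byte function demonstrates the counting argument equally well); `toy_encrypt`, `find_entry`, and `search` are my names, not the attached program's:

```python
import hashlib
import random

def toy_encrypt(key, block):
    """Stand-in for SIMON-32/64: a keyed 4-byte -> 4-byte mapping.
    (This is NOT SIMON; it just behaves like a random keyed function.)"""
    return hashlib.sha256(key + block).digest()[:4]

def find_entry(blocks, encrypt, key):
    """Steps 2-4: scan plaintext blocks from the text, looking for two
    whose encryptions under `key` are also blocks from the text."""
    blockset = set(blocks)
    hits = []
    for p in blocks:
        c = encrypt(key, p)
        if c in blockset:
            hits.append((p, c))
            if len(hits) == 2:   # two matches make a "Table 6" entry
                return hits[0] + hits[1] + (key,)
    return None

def search(blocks, encrypt, n_keys=1000, seed=0):
    """Steps 1 and 5: try arbitrary keys, keep every entry found."""
    rng = random.Random(seed)
    entries = []
    for _ in range(n_keys):
        key = rng.randbytes(8)   # 64-bit key, as in SIMON-32/64
        entry = find_entry(blocks, encrypt, key)
        if entry is not None:
            entries.append(entry)
    return entries
```

With the real cipher swapped in for `toy_encrypt`, this is the whole "attack": no cryptanalysis, just counting coincidences.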
In fact, without any attempt at optimization (aside from trivial parallelization), an i7-4790k can find about a thousand examples in 8 seconds; about 10% of all keys yielded at least one set of matching blocks.
There are around 48,000 distinct 4-grams in "pg10.txt", so for any given key and 4-byte plaintext, there's about a 1-in-90,000 chance (48,000/2^32) that it encrypts to some other 4-gram. With roughly 48,000 plaintexts to try per key, that gives odds of about 1/2 of getting one match and about 1/4 of getting two. This is an extremely rough calculation, but it's not too far off the roughly 1/10 actually obtained.
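A slightly less rough version of the same estimate treats the number of matches per key as Poisson with mean 48,000²/2³², and it lands almost exactly on the observed rate (the 48,000 figure is the count quoted above; I haven't re-derived it from pg10.txt):

```python
import math

n_grams = 48_000                   # distinct 4-grams in pg10.txt (as quoted)
lam = n_grams ** 2 / 2 ** 32       # expected matches per key: ~0.54
# P(at least two matches) under a Poisson model:
p_at_least_2 = 1 - math.exp(-lam) * (1 + lam)
print(round(p_at_least_2, 3))      # ~0.10: about 1 key in 10 yields an entry
```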
The attached program, which adapts an implementation of SIMON from github, can be built with g++-6 on Linux. It needs "pg10.txt" in the current directory. For parallelization, pass "-fopenmp". `trolled.txt` is one possible output of the program; the few entries that I back-checked against an independent (Python) SIMON implementation, also from github, checked out.
I just hope that, whatever the authors actually did to make "table 6", it didn't really take 120 days on two cluster computers.
Update: Several commenters believe the paper takes two 8-byte blocks from the chosen text. If this is true, then even fewer of the blocks shown actually match "pg10.txt". (I based my "four 4-byte blocks" assumption on the appearance of "LORDhard" as a cyphertext.) It would also make my program take about 48,000 times longer: when you find two matching texts, the odds that they sit just one location apart (out of ~48,000 locations) are about 1 in 48,000. However, since their Table 6 is full of 8-grams (and even 4-grams) that don't come from pg10.txt, I don't feel TOO bad that my program presents examples that aren't from it either.
Files currently attached to this page:
I've finally implemented a solution to my woes with dual extruder temperature control in Cura. I'm now having much more luck with dual extrusion, and the temperature graphs in OctoPrint look much more sensible.
The problem, as many have suspected, was Cura and GPX disagreeing on which "T" (hotend "tool") numbers applied to "M104" / "M109" temperature commands.
My best understanding of the problem is this:
Cura thinks that "T#" is modal only when it is the only (first?) thing on a line, while GPX thinks that e.g., "M109 T1 S200" contains a modal "T1".
This causes dual extrusion code to send temperature commands to the wrong nozzle.
This script attempts to work around the problem by tracking which "T#" lines Cura thinks are modal, and then attaching that "T#" again to any M104/M109 lines that would require it.
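The core of the workaround fits in a few lines. This is a simplified sketch of the idea, not the attached script itself (the function name and the exact parsing are mine):

```python
def fix_temp_commands(lines):
    """Append the active tool number to any M104/M109 that lacks one.

    Cura treats a bare "T#" line as (modally) selecting the active tool,
    while GPX also honors a "T#" argument on M104/M109; making the tool
    explicit keeps the two interpretations in agreement.
    """
    active_tool = None
    fixed = []
    for line in lines:
        code = line.split(";", 1)[0].strip()   # drop any gcode comment
        words = code.split()
        if len(words) == 1 and words[0][:1] == "T" and words[0][1:].isdigit():
            active_tool = words[0]             # a bare T# sets the modal tool
        elif words and words[0] in ("M104", "M109"):
            if active_tool and not any(w.startswith("T") for w in words[1:]):
                line = line.rstrip() + " " + active_tool
        fixed.append(line)
    return fixed
```

For example, after a bare `T1`, a later `M104 S200` becomes `M104 S200 T1`, while `M109 T0 S175` is left alone because it already names a tool.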
What seems incomplete about this story is that Cura writes extrusions as "E#", which should also move the "T#" extruder motor. But GPX doesn't start extruding T0 with the sequence
    T1
    M104 T0 S175
    G1 F1500 E-0.6 ; retract
So something's still missing from the explanation.
The PostProcessingPlugin is released under the terms of the AGPLv3 or higher.
It works with Ultimaker Cura 4.0, but may not work with other software or versions.
On my Linux system it is installed to ~/.local/share/cura/4.0/scripts/fixtempcommands.py but your mileage may vary. The installation of cura scripts seems a bit underdocumented, or at any rate I didn't find the documentation. The location is under one of the two folders opened when you "Help > Show Configuration Folder"
Having done so, simply enable the script to post-process your gcode in "Extensions > Post Processing > Modify G-Code". Just choose the script from the "Add a script" dropdown.
Files currently attached to this page:
Life on earth is characterized by exponential growth until the exhaustion of resources. When we imagine what other intelligent life might look like, some are willing to imagine "what if it's not based on DNA" or on carbon, or "doesn't require liquid water". But everyone imagines exponential growth; expanding from the surface of one planet to the whole galaxy seemingly frees you to enjoy about 25 more orders of magnitude of growth vs staying bound to the surface of a single planet.
Nikolai Kardashev created the Kardashev Scale to characterize civilizations by how much power they use—from a "Type I" civilization which captures all the solar energy falling on their home planet, to a "Type III" civilization which captures the whole effective power of a whole galaxy. In the early 21st century, we are far short of even being "Type I" (so we're effectively "Type 0"), but the future story we imagine for ourselves is exponential growth. Population has grown by, say, an order of magnitude in the last 400 years (from 800 million to 8 billion, give or take), so we should only need some 10,000 years to get to "Type III" when we simply assume continued exponential growth.
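That 10,000-year figure is just the two numbers above multiplied together: one order of magnitude of growth per ~400 years, and ~25 orders of magnitude between here and a whole galaxy:

```python
# Back-of-the-envelope: years to reach "Type III" at the recent growth rate.
years_per_order_of_magnitude = 400   # 800 million -> 8 billion people
orders_to_type_iii = 25              # planet-bound vs. galaxy-scale power use
print(years_per_order_of_magnitude * orders_to_type_iii)  # 10000 years
```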
Even with some future-physics warp drive, it is optimistic to think we could cross the galaxy once in a million years (at .2c, say), so somewhere in the next 10,000 years our exponential growth has to stop, constrained by the cubic way that light cones work.
A variant of this "we must expand" directive is assumed by the Drake Equation: a long-lived civilization is, by definition, so big it can't help but "leave its mark" on its containing galaxy in the form of radio signals, if nothing else. Superficially, it seems there should be many such civilizations, and they should be easy to detect if they are hard-wired for exponential growth like us, since they and their artifacts should be literally everywhere.
The Fermi Paradox, then, invites us to explain why we have no evidence of these other civilizations.
I think the answer is simple: exponential growth, like a viral infection, is unstable. Whether it's on a scale of 10 or 400 generations, there is a final wall (a "Great Filter", in Robin Hanson's terms). Only growth that is polynomial or less is sustainable on million-year timescales, particularly if (in a galaxy full of life) you actually bump fairly quickly into another civilization with a moral and/or de facto claim on the resources in some other region of space.
So we end up with a galaxy that looks like Liu's The Dark Forest: quiet civilizations growing in volume and power consumption not at all, or only at logarithmic rates. Any hint of exponential growth, or possibly even polynomial growth, would require a response. For all we know, such a response was set in motion circa 1800 (when the atmospheric changes of the industrial revolution could have been detected) in the form of a diverted near-earth asteroid, scheduled to hit around 2100.
There must be a story in this somewhere, here's one sketch: Circa 2100, the earth's surface was rendered uninhabitable by asteroid impact. However, the nascent Mars colony and some orbital habitations survived, and by 2240 are on somewhat stable footing and ready to restart exponential growth through the solar system and into the Oort cloud. Society looks like we think it should: multicultural, accepting of all genders and sexual identities, egalitarian, access to health care, etc. Our point of view character will be a young person just coming of age in the largest city on Mars, presently on a solo tour of the solar system to rival Golden Age science fiction.
While flying by some geologically interesting moon, our narrator's ship is struck by some matter ejected from the surface. This matter forms itself into a duplicate of our narrator, and at length they learn to communicate. Let's call the alien Alice and the narrator Bob, just to make everything simpler.
Just like Bob is coming of age in the human society of Mars, Alice is (was) coming of age in their own society. Alice's society are also a bunch of exponentials who evolved in the upper atmosphere of Jupiter. But their philosopher-scientists saw the trap of exponential growth and found a solution: They adapted themselves to live at ever-slowing rates, most recently building organic computers to survive deep in the atmosphere of Jupiter, simulating a society of a trillion Jovians at a rate of about 1 day per 1,000 real-time years.
Alice, having accelerated themselves to realtime (and beyond, when they were learning Bob's language), is considered a criminal in their society and can never return without facing the punishment of being slowed all the way to zero until they have paid back all the time they "stole".
At risk of becoming too didactic, Alice tells Bob everything that is known about different societies: The exponentials, who mostly flash and fade; the polynomials, young races who may yet adapt and last; the logarithmic, long-lived civilizations who form the backbone of galactic governance; and the rumored constants, who long ago disappeared, perhaps into quantum computing devices made of dark-matter.
In any case, Alice tells Bob that if they continue to display exponential growth as a species capable of space travel, there will be terrible and large-scale retaliation from galactic culture, such as the catalyzed supernova of Sol itself; as Alice's species would end up as collateral damage, the Jovians would be in the unfortunate position of having to commit genocide against the humans first, just for self-preservation. Presumably some mid-level Jovian military types are also (lawfully) accelerated to real-time to monitor the situation.
A narrative discontinuity, and Bob is recovered from their wrecked space ship, with no sign of Alice (or is it the other way around?). Here ends the novella.
If the story should continue, we might see Bob trying to effect political change among the humans, or hear the problems faced by Alice's species' scientists, who have the eternal problem of growing their computing capacity with ever-decreasing subjective time creating impossible deadlines. (Or are the scientists lawfully accelerated, like the military?) How about drawn out parliamentary scenes where the logarithmic species debate how to deal with the infestation in Sol System? Or perhaps we go on a wild goose chase for the vanished constants, believing they have the secret of zero-point energy or the like.
In the last two years, a new internet provider has entered the market in my hometown. Allo Communications has been laying fiber optic cables all over town for what seems like forever, and finally notified me last month that my neighborhood was ready for installation. This week, I got the service installed. Overall, I'm happy, though I'll be paying just a few bucks more a month than I'd anticipated.
- 20, 300, and 1000Mbit/s rates available ($45 - $100/month base rate)
- Router/WiFi AP included in base rate
- Static IP is just $5/month
- symmetric connection speeds
- helpful staff
- No bandwidth caps, but "if you’re breaking the law, be assured you’ll be hearing from us."
- TV and telephone, if you're into that sort of thing
- Except for static IP, you're behind "Carrier Grade NAT", so no "I'll cope with dynamic IP but still be able to SSH in"
- Your existing equipment might not be able to keep up
- Given google's chat product also called "allo", these guys can be hard to search for
- No bandwidth caps, but "if you’re breaking the law, be assured you’ll be hearing from us."
- Small service area (9 Nebraska towns + Fort Morgan Colorado)
- No IPv6(!?), possibly deploying v6 in 2020.
- My static IP is listed in spamhaus "PBL" (but there is a self-service removal process, so it's fine)
The main headache I had was the first bullet point under "the bad": despite having a phone app(!!) that acts like it can set up port forwarding, nothing I did could open up an incoming port. Staff were interested in helping me (including calling me back later to try one last idea), but ultimately the solution was just to add the Static IP option to my service.
The lesser headache, and the one which was totally my problem to solve, was that my firewall and NAT were being handled by an older Buffalo wifi access point, a WZR-HP-G300NH2 with an ancient version of DD-WRT. It simply couldn't get beyond about 160Mbit/s when doing NAT/forwarding. So I rejiggered my wires a little bit so that my i7 Linux desktop would take over those tasks. Additionally, all the "modern wifi" devices were already connecting to a newer Netgear R6220 in Access Point mode (routing functions disabled).
I had a second headache, which is apparently a decade-old bug in Linux's Intel e1000e driver. I was getting really poor rates on my internal network, and the kernel was logging "eth2: Detected Hardware Unit Hang". There are two main fixes that the internet suggests: modifying something about power saving modes, or disabling several advanced features of the NIC. In my case, I determined that using "ethtool --offload eth2 tso off" to disable tcp-segmentation-offload resolved the problem I was seeing. What's weird is that this NIC, eth2, is the one that I had been using all along; I had lots of network traffic on it for months. But the message never appeared in my local logs before I started also using "eth1" and doing NAT/packet forwarding yesterday.
Now from my desktop I get 960Mbit/s down, 720Mbit/s up (according to speedtest-cli), and 6ms pings to my office. My fastest wireless device gets somewhat over 200Mbit/s in each direction. Connecting to a VNC session at my office feels just as good as being there, which is primarily due to the extremely short packet turnaround; the bandwidth is a nice bonus though.
Right now it all feels pretty magical, and I'm looking forward to calling the cable company (spectrum) on Monday to cancel the service. I'm paying more (not quite 2x as much) but getting MUCH more service.
I'm contemplating buying one of these little embedded PCs with 2 NICs, they cost around $200 with RAM and a small SSD and it is claimed that they can forward at gigabit rates. They're literally just PCs inside (x86 CPU and BIOS booting), so all the headaches that attend little embedded ARM systems are nonexistent. But is an Intel Celeron "J1800" CPU actually up to pushing (including NAT) a quarter million packets a second?
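The quarter-million figure depends on packet size; here's the arithmetic, assuming an average packet around 500 bytes (full 1500-byte frames would only be ~83k packets/s):

```python
# Packets per second at full gigabit, for an assumed average packet size.
line_rate = 1_000_000_000        # bits per second at gigabit
avg_packet_bytes = 500           # assumed average; real traffic mixes vary
pps = line_rate // (avg_packet_bytes * 8)
print(pps)                       # 250000 packets per second
```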
I have a bittorrent client running with a bunch of Linux ISOs being seeded. I saw peak download rates of up to 92MB/s and typically 30-60MB/s, which is great. Right at the moment it's only clocking about 2MB/s of data "up"—the torrents seem to be pretty adequately seeded. I'm doing this primarily to see whether Allo treats "any traffic identifiable as bittorrent" as something that they'll tell you off about, or whether they are trying to distinguish "licit" from "illicit" when it comes to bittorrent traffic. I'm not sure which idea I like less.
All older entries
Website Copyright © 2004-2018 Jeff Epler