Life on earth is characterized by exponential growth until the exhaustion of resources. When we imagine what other intelligent life might look like, some are willing to imagine "what if it's not based on DNA" or on carbon, or "doesn't require liquid water". But everyone imagines exponential growth; expanding from the surface of one planet to the whole galaxy seemingly frees you to enjoy about 25 more orders of magnitude of growth vs staying bound to the surface of a single planet.
Nikolai Kardashev created the Kardashev Scale to characterize civilizations by how much power they use—from a "Type I" civilization which captures all the solar energy falling on their home planet, to a "Type III" civilization which captures the whole effective power of a whole galaxy. In the early 21st century, we are far short of even being "Type I" (so we're effectively "Type 0"), but the future story we imagine for ourselves is exponential growth. Population has grown by, say, an order of magnitude in the last 400 years (from 800 million to 8 billion, give or take), so we should only need some 10,000 years to get to "Type III" when we simply assume continued exponential growth.
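The arithmetic behind that 10,000-year figure is simple shell material: roughly one order of magnitude of growth per 400 years, times the ~25 orders of magnitude separating a planet-bound civilization from a galaxy-spanning one.

```shell
# Back-of-envelope: ~25 orders of magnitude of growth,
# at the recent human rate of ~1 order of magnitude per 400 years.
echo $(( 25 * 400 ))   # 10000 years
```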
Barring some future-physics warp drive, it is optimistic to think we could cross the galaxy even once in a million years (at 0.2c, say), so somewhere in the next 10,000 years our exponential growth has to stop, constrained by the cubic way that light cones grow.
A variant of this "we must expand" directive is assumed by the Drake Equation: a long-lived civilization is, by definition, so big it can't help but "leave its mark" on its containing galaxy in the form of radio signals, if nothing else. Superficially, it seems there should be many such civilizations, and they should be easy to detect if they are hard-wired for exponential growth like us, since they and their artifacts should be literally everywhere.
The Fermi Paradox, then, invites us to explain why we have no evidence of these other civilizations.
I think the answer is simple: Exponential growth, like a viral infection, is unstable. Whether it's on a scale of 10 or 400 generations, there is a final wall (a "great filter", in Robin Hanson's terms). Only growth that is polynomial or less is sustainable on million-year timescales, particularly if (in a galaxy full of life) you actually bump fairly quickly into another civilization with a moral and/or de facto claim on the resources in some other region of space.
So we end up with a galaxy that looks like Liu Cixin's The Dark Forest: quiet civilizations growing in volume and power consumption not at all, or only at logarithmic rates. Any hint of exponential growth, or possibly even polynomial growth, would require a response. For all we know, such a response was set in motion circa 1800 (when the atmospheric changes of the industrial revolution could have been detected) in the form of a diverted near-earth asteroid, scheduled to hit around 2100.
There must be a story in this somewhere, here's one sketch: Circa 2100, the earth's surface was rendered uninhabitable by asteroid impact. However, the nascent Mars colony and some orbital habitations survived, and by 2240 are on somewhat stable footing and ready to restart exponential growth through the solar system and into the Oort cloud. Society looks like we think it should: multicultural, accepting of all genders and sexual identities, egalitarian, access to health care, etc. Our point of view character will be a young person just coming of age in the largest city on Mars, presently on a solo tour of the solar system to rival Golden Age science fiction.
While flying by some geologically interesting moon, our narrator's ship is struck by some matter ejected from the surface. This matter forms itself into a duplicate of our narrator, and at length they learn to communicate. Let's call the alien Alice and the narrator Bob, just to make everything simpler.
Just like Bob is coming of age in the human society of Mars, Alice is (was) coming of age in their own society. Alice's society is also a bunch of exponentials, who evolved in the upper atmosphere of Jupiter. But their philosopher-scientists saw the trap of exponential growth and found a solution: They adapted themselves to live at ever-slowing rates, most recently building organic computers to survive deep in the atmosphere of Jupiter, simulating a society of a trillion Jovians at a rate of about 1 day per 1,000 real-time years.
Alice, having accelerated themselves to realtime (and beyond, when they were learning Bob's language), is considered a criminal in their society and can never return without facing the punishment of being slowed all the way to zero until they have paid back all the time they "stole".
At risk of becoming too didactic, Alice tells Bob everything that is known about different societies: The exponentials, who mostly flash and fade; the polynomials, young races who may yet adapt and last; the logarithmic, long-lived civilizations who form the backbone of galactic governance; and the rumored constants, who long ago disappeared, perhaps into quantum computing devices made of dark-matter.
In any case, Alice tells Bob that if they continue to display exponential growth as a species capable of space travel, there will be terrible and large-scale retaliation from galactic culture, such as the catalyzed supernova of Sol itself; as Alice's species would end up as collateral damage, the Jovians would be in the unfortunate position of having to commit genocide against the humans first, just for self-preservation. Presumably some mid-level Jovian military types are also (lawfully) accelerated to real-time to monitor the situation.
A narrative discontinuity, and Bob is recovered from their wrecked space ship, with no sign of Alice (or is it the other way around?). Here ends the novella.
If the story should continue, we might see Bob trying to effect political change among the humans, or hear about the problems faced by Alice's species' scientists, who have the eternal problem of growing their computing capacity while ever-decreasing subjective time creates impossible deadlines. (Or are the scientists lawfully accelerated, like the military?) How about drawn-out parliamentary scenes where the logarithmic species debate how to deal with the infestation in Sol System? Or perhaps we go on a wild goose chase for the vanished constants, believing they have the secret of zero-point energy or the like.
In the last two years, a new internet provider has entered the market in my hometown. Allo Communications has been laying fiber optic cables all over town for what seems like forever, and finally notified me last month that my neighborhood was ready for installation. This week, I got the service installed. Overall, I'm happy, though I'll be paying just a few bucks more a month than I'd anticipated.
The good:
- 20, 300, and 1000Mbit/s rates available ($45 - $100/month base rate)
- Router/WiFi AP included in base rate
- Static IP is just $5/month
- Symmetric connection speeds
- Helpful staff
- No bandwidth caps, but "if you’re breaking the law, be assured you’ll be hearing from us."
- TV and telephone, if you're into that sort of thing
The bad:
- Except for static IP, you're behind "Carrier Grade NAT", so no "I'll cope with dynamic IP but still be able to SSH in"
- Your existing equipment might not be able to keep up
- Given Google's chat product also called "Allo", these guys can be hard to search for
- No bandwidth caps, but "if you’re breaking the law, be assured you’ll be hearing from us."
- Small service area (9 Nebraska towns + Fort Morgan, Colorado)
- No IPv6(!?), possibly deploying v6 in 2020
- My static IP is listed in the Spamhaus PBL (but there is a self-service removal process, so it's fine)
The main headache I had was that first "the bad" bullet point: Despite having a phone app(!!) that acts like it can set up port forwarding, nothing I did could open up an incoming port. Staff were interested in helping me (including calling me back later to try one last idea), but ultimately the solution was just to add the Static IP option to my service.
The lesser headache, and the one which was totally my problem to solve, was that my firewall and NAT were being handled by an older Buffalo wifi access point, a WZR-HP-G300NH2 running an ancient version of DD-WRT. It simply couldn't get beyond about 160Mbit/s when doing NAT/forwarding. So I rejiggered my wiring a little so that my i7 Linux desktop would take over those tasks. Additionally, all the "modern wifi" devices were already connecting to a newer Netgear R6220 in Access Point mode (routing functions disabled).
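For reference, the core of a desktop-as-router setup only takes a few lines. This is a minimal sketch, not my exact ruleset, assuming eth1 faces the fiber connection and eth2 faces the LAN (swap to match your own interfaces):

```
# Enable packet forwarding and basic NAT/masquerading.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth2 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth2 -o eth1 -j ACCEPT
```

A real configuration would also set a default-deny FORWARD policy and persist the rules across reboots.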
I had a second headache, which is apparently a decade-old bug in Linux's Intel e1000e driver. I was getting really poor rates on my internal network, and the kernel was logging "eth2: Detected Hardware Unit Hang". There are two main fixes that the internet suggests: modifying something about power saving modes, or disabling several advanced features of the NIC. In my case, I determined that using "ethtool --offload eth2 tso off" to disable tcp-segmentation-offload resolved the problem I was seeing. What's weird is that this NIC, eth2, is the one that I had been using all along; I had lots of network traffic on it for months. But the message never appeared in my local logs before I started also using "eth1" and doing NAT/packet forwarding yesterday.
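One caveat: ethtool offload settings don't survive a reboot, so the workaround has to be reapplied at boot. One way, assuming a Debian-style ifupdown setup (adjust the stanza's address method to match your own), is a post-up hook:

```
# Fragment for /etc/network/interfaces:
# re-apply the TSO workaround whenever eth2 is brought up.
iface eth2 inet manual
    post-up ethtool --offload eth2 tso off
```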
Now from my desktop I get 960Mbit/s down, 720Mbit/s up (according to speedtest-cli), and 6ms pings to my office. My fastest wireless device gets somewhat over 200Mbit/s in each direction. Connecting to a VNC session at my office feels just as good as being there, which is primarily due to the extremely short packet turnaround; the bandwidth is a nice bonus though.
Right now it all feels pretty magical, and I'm looking forward to calling the cable company (Spectrum) on Monday to cancel that service. I'm paying more (not quite 2x as much) but getting MUCH more service.
I'm contemplating buying one of these little embedded PCs with 2 NICs, they cost around $200 with RAM and a small SSD and it is claimed that they can forward at gigabit rates. They're literally just PCs inside (x86 CPU and BIOS booting), so all the headaches that attend little embedded ARM systems are nonexistent. But is an Intel Celeron "J1800" CPU actually up to pushing (including NAT) a quarter million packets a second?
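The quarter-million figure comes from a quick sanity check: a saturated gigabit link at a roughly 500-byte average packet size (the 500-byte average is my assumption for a mixed traffic profile):

```shell
# Packets per second on a saturated 1 Gbit/s link,
# assuming a ~500-byte average packet.
bits_per_sec=1000000000
avg_packet_bytes=500
echo $(( bits_per_sec / (avg_packet_bytes * 8) ))   # 250000
```

At full 1500-byte frames the rate drops to about 83,000 pps; at 64-byte minimum frames it rises to nearly 1.5 million pps, so the real answer depends heavily on the traffic mix.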
I have a bittorrent client running with a bunch of Linux ISOs being seeded. I saw peak download rates of up to 92MB/s and typically 30-60MB/s, which is great. Right at the moment it's only clocking about 2MB/s of data "up"—the torrents seem to be pretty adequately seeded. I'm doing this primarily to see whether Allo treats "any traffic identifiable as bittorrent" as something that they'll tell you off about, or whether they are trying to distinguish "licit" from "illicit" when it comes to bittorrent traffic. I'm not sure which idea I like less.
I've made some upgrades on my Qidi printer:
- Added a borosilicate glass bed (random amazon seller)
- Upgraded to Micro Swiss MK10 All Metal Hotend
In the process of installing the new hotend, I had to align my extruders, something I had been dreading. Well, I should have done this a long time ago, because now all my problems with the left nozzle scraping over the part printed by the right nozzle are (knock on wood!) cured.
I haven't noticed a huge difference with the all-metal hotend, but hopefully the new nozzles will also have a long lifetime. The advantages are supposed to be reduced stringing (which I have noticed) and improved extrusion rates (I haven't touched my current extrusion/feed rates, which were working with the original PTFE-lined hotends).
The glass bed is pretty good. I have a shim installed so that I didn't have to use the bed leveling adjustment to take up the ~3mm difference in Z. This means I gave up 3mm of Z travel instead, but that's not a big deal.
Glass is a lot less forgiving of bad bed leveling, so it took a while to get that dialed in. (It doesn't help that the adjustment screws defy my intuition every time about which direction of rotation moves the bed up!) With PLA, I am using a "60°C" bed temperature and with ABS I'm using "110°C". It's worth noting that the top surface of the glass is much slower to actually reach its terminal temperature than the sensor, so pre-heating is recommended. I believe the terminal temperature of the top glass surface is also lower than that of the aluminum, so you may need to tweak things a bit.
For adhesive, I'm using gluestick on glass for most things, and ABS juice on glass for ABS. Prints tend to be tough to remove, so maybe I need to keep experimenting.
As far as dual extrusion goes, I continue to be vexed by an apparent Cura (but maybe GPX) bug with temperature setting: It appears that Cura omits T-numbers for some temperature commands, and something in the Cura -> GPX -> Qidi toolchain applies a temperature command to the wrong hotend. Since Cura works hard to set nozzles back to a lower temperature when not in use, this can instead cause the right extruder (typically) to get set to a low temperature and then used for extruding, which can cause jams, failed prints, and delamination.
I have one machine which repeatedly wakes its display at night. My assumption is that this is due to spurious movement from the mouse.
There is not an explicit way to configure Linux (X11) so that it doesn't exit DPMS sleeps on mouse movement, but I found a tip on the internet to disable the mouse device at the XInput level when activating the screensaver, and reactivating it when the screensaver exits.
I don't run a "screensaver application" so wiring into things like dbus for notification doesn't work. Instead, I wrote an X program which polls the requested DPMS state and enables or disables the mouse device accordingly.
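My program is a small X client, but the same polling idea can be roughed out in shell with xset(1) and xinput(1). This is only a sketch, and the device name below is a placeholder you would replace with your own (see `xinput list`):

```shell
#!/bin/sh
# Disable the mouse while DPMS reports the display off,
# re-enable it otherwise.
# DEVICE is a placeholder -- find yours with `xinput list`.
DEVICE="Logitech USB Optical Mouse"

sync_mouse() {
    if xset q | grep -q "Monitor is Off"; then
        xinput disable "$DEVICE"
    else
        xinput enable "$DEVICE"
    fi
}

# Run from your X session, e.g.:
# while sleep 5; do sync_mouse; done
```

A real X client gets this state change without polling every few seconds, which is part of why I wrote one instead.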
It remains to be seen whether this solves the problem which causes the display to turn on multiple times per night, but it might just fix it.
You'll need to customize the program by changing the "device_name" string in the source to match your own input device. If you have multiple input devices, then more extensive work will be required.
License: GPL v3+
The Drake-Howard equation is:
N = R* × f# × ne × fq × dl × fc × L

where N = the number of elder gods in the multiverse which might wish to consume our souls, and
- R* = The number of regular languages
- f# = The fraction of regular languages that match at least one string
- ne = The fraction of self-Gödelizing strings per regular language that matches at least one string
- fq = The fraction of self-Gödelizing strings that are also functional quines
- dl = The sum to infinity of the average measure of quine islands divided by Levenshtein distance
- fc = The fraction of all functional quines which can be realized by Standard Model matter
- L = The time between Second Order Grand Conjunctions
As R* is infinite (actually, ℵ₁) and all other values are nonzero, it follows that the number of soul-devouring elder gods is infinite. The Drake-Howard Equation is also sometimes called Rule 110.
Distribution: Primary and residual human resources
To whom it may concern:
Please stop stuffing the ideas box with proposed solutions to anthropogenic climate change. As you know, the worst case consequences of ACC are predicted to be:
- Hundreds of thousands of premature deaths per year
- Millions of QALYs lost per year
- Hundreds of millions of environmental migrants
- Regional, and possibly large-scale, war over access to resources, including potable water
- Trillions of Euros of property damage and loss
- Loss of low-lying areas to sea-level change
The ideas box is for discussion of SERIOUS threats to humanity, such as
- Netflix cancellation of all its superhero shows
- Possible interference by Blue Hades in the Oscars awards
- Failed firmware updates on conspicuous-consumption "fitness" devices
- Those demonic wasp-things that are gestating in the abdomens of several world leaders
- Irritating advertisements interrupting our free music streaming
Thank you for your attention in this matter.
— The Management
I bought this 3D printer in the fall of 2017, though they were already calling it the "2018" model. It's a fully enclosed, dual hotend printer which is a knockoff of the Flashforge, which in turn is a knockoff of the Makerbot.
Initially, I didn't blog about this printer because I was kinda disappointed with it. It takes up a lot more desk space, spools are a pain to mount and unmount, and I initially didn't have much luck with larger prints, ABS adhesion, or dual material printing, which were three major things that I had hoped to do with the printer.
More recently I started pretty obsessively reading about the Genuine Prusa i3 MK3 with MMU2, and trying to justify the purchase. In the process, I had to admit to myself that maybe I needed to give those three major bullet points another try, either to satisfy myself that it couldn't be done, or to finally meet my goals.
Happily, I think I met my goals, and I feel much happier about this printer than I ever did in the first year of owning it. And as a bonus, I am also successfully printing flex filament on it!
First, I had found that the Z height of the two nozzles never quite matched. I was very reluctant to mess with their alignment, because it is supposed to be set at the factory. However, I eventually discovered that it was my own problem all along: I had failed to fully tighten one of the screws which attaches the hotend assembly to the X carriage, which accounted for at least 90% of the Z height mismatch of the two nozzles. I still have minor problems in vase mode if the unused nozzle brushes the print in progress, so things aren't perfect, but they're really quite a lot better.
Second, I went through the bed leveling process by using the "look at filament after it's been laid down" method, instead of the "try putting a sheet of paper under the nozzle" method. I had the nozzles way too close to the bed; with that fixed, my first-layer problems were resolved. At this point, PLA and PETG both printed marvelously.
I took a detour from Cura to Slic3r PE (part of the obsessive googling about the Prusa i3) and learned a few tricks, such as making the first layer line width substantially wider for better adhesion; I also had some modest success with dual extrusion printing. When I returned to Cura, I found I could apply these ideas.

With respect to dual extrusion printing, I found that turning OFF a bunch of Cura features gave me better results: there's no need for a priming tower when you have two hotends (vs the Prusa style with multiple extruders and a single hotend), and eliminating or reducing the temperature changes for the "not currently used" nozzle actually reduced stray wisps of filament. HOWEVER, when you do this it's important that the layer time not be too long, and that both filaments are used throughout the print, so that heat creep doesn't have a chance to clog a nozzle. I did in fact get a clog this way, but fixed it easily by replacing the PTFE lining. (And, contrary to my fears, this seems to have left the Z alignment of the two nozzles just as good as before.) So now I have some Micro Swiss replacement nozzles on order, because I think the originals are pretty worn by now.
I have also had good luck simply using different material types as support materials (e.g., PLA for PETG and ABS, and PETG for PLA), for manual (solvent-free) removal.
Finally, I printed a couple of parts for the extruders that claimed to improve printing of flexible filament. While I hadn't tried any before, I've had some luck printing with a roll of what I think is TPU (and also one non-serious jam). This includes printing a combined PLA-TPU object.
I've just started dabbling with ABS Juice as a bed adhesive, but after a couple of prints I realize I need to switch to Kapton or glass to use it, if I don't want painter's tape permanently embedded in my prints. It has given a couple of ABS prints with perfect first layer adhesion that I would not have had without it, though.
Is this a great printer? Arguably, no, you shouldn't have to do a year of fiddling with a printer before you can get great prints and use all its capabilities. Am I feeling better about this printer than ever before? Yes, I feel like I've leveled up in terms of successful printing with it.
Our total distance travelled was lower this year, mostly because our longest driving trip was taken in a rental car. The fuel economy was quite a bit lower, probably because the longer trip we did take was in the mountains of Colorado, and a long climbing drive can be pretty awful for efficiency. The car also had its tires replaced this year, and the new tires may not reach the same level of low rolling resistance. Finally, it might be the case that an aging traction battery is affecting performance. Frankly, I'm disappointed with the numbers for the year.
All older entries
Website Copyright © 2004-2018 Jeff Epler