Code Editor Nova

I hate having to come up with titles for things. I’m bad at it. With blog posts it’s even worse, since the thing I am thinking about when I start writing is often not even the thing that the post turns out to be about, and I suck at titles anyway, and then looking back, it’s all nonsensical. Case in point: I want to write about a “code editor” that I’ve now looked at a couple of times (Nova, by Panic), but the way I got here was by looking through my email inbox (which currently has 110 “unread” emails), and so I thought maybe I should write about email and why I’m not unsubscribing from the marketing list that got me to take a look at the editor. So the initial title I filled in at the top was “Inbox Zero Is Bunk,” but then as I started typing I thought about looking through all my old posts and their titles (wondering if I’ve already written about this editor) and how my clever titles mean I have no fucking clue what most of these things are about.

Continue reading “Code Editor Nova”

Doing DB Stuff in Vapor

Having imported a bunch of tables from an Access database into a Postgres database, I want to transform them so that the information can be used by a Vapor application. Mostly, that app is just going to do exporting again, but you never know; someday someone might actually want to do something more intricate.
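To give a flavor of what the transformation looks like, here’s a minimal sketch using Fluent’s SQLKit escape hatch. The table and column names are invented for illustration; the real schema is whatever the Access import produced.

    import Fluent
    import SQLKit
    import Vapor

    // Hypothetical names: "raw_people" is a table as imported from Access,
    // "people" is the cleaned-up table the Vapor app will actually query.
    func transformImportedTables(on db: Database) async throws {
        // Fluent's Postgres databases also speak SQLKit, which is handy
        // for bulk transforms that would be clumsy as Fluent models.
        guard let sql = db as? SQLDatabase else {
            throw Abort(.internalServerError, reason: "SQL database required")
        }
        // Copy and normalize: trim stray whitespace the Access export left
        // behind, and cast text columns to proper Postgres types.
        try await sql.raw("""
            INSERT INTO people (id, name, joined_on)
            SELECT id::uuid, trim(name), joined::date
            FROM raw_people
            """).run()
    }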

Continue reading “Doing DB Stuff in Vapor”

Be Careful What You Wish For

So, a lawyer decided to use ChatGPT to “help” write some papers he filed in court, but got in trouble when the included citations were bogus.

The naive interpretation of this is that the computer program is broken, and it’s a danger to society and ought to be really regulated, and actually, I don’t disagree. However, there’s even more going on here, and it shows up below the fold in the story.

The lawyer, Steven Schwartz, claims that he was unaware that ChatGPT just makes stuff up. Never mind that this has been talked about openly in non-technical forums.

So here’s a flaw in the program:

• ChatGPT is utterly ignorant of and indifferent to the notions of “true” and “false.”

And here’s a flaw in the lawyer:

• Mr. Schwartz is fully cognizant of the notion of “true” and “false” and doesn’t care about them in and of themselves, but only insofar as getting caught telling falsehoods actually gets him in trouble.

And here’s a flaw in society:

• When a student submits a paper with false citations, that student suffers (gets a bad grade). When a person tells a lie in court, that person suffers (charged with perjury). When ChatGPT makes some shit up, ChatGPT does not, in fact, suffer. There is no negative consequence, so we don’t actually expect that its behavior is going to change.

“But what about fiction? It’s not always wrong to make stuff up!” I hear you say, since putting my own objection in your mouths makes it sound like a chorus. And that’s true. But the thing that makes fiction okay is that we have figured out larger contextual cues that let us signal when it is and is not okay to tell lies.

But that’s a really hard problem to solve! It’s way easier just to program the thing to “tell the truth only” or “don’t care about the truth” and not have to worry about switching back and forth between those two. But guess which of those two strategies is harder to implement? Right, it’s definitely harder to care than to be indifferent. So here’s another flaw:

• The programmers chose to implement a chatbot that emulates human writing and which is utterly indifferent to the truth.

…which is not actually a problem all by itself but then:

• The company took this program that writes prose like a human and is utterly immune to the consequences of its speech and turned it loose on the world.

The way I see it, there has to be feedback. Punish the system, and that system definitely includes the humans who are responsible for it. If you want to participate in society, then you have to really participate, and that looks like learning, and trying not to do harm.

The Mysterious Fix

And now that we’re home and I’ve got a decently fast and wide connection (compared to the ship), I’ve spent a bunch of time rebuilding and testing my various layers of Docker images. And guess what? Ubuntu “jammy” with OpenSSL 3 compiles my stuff just fine. *sigh*.

Again, the lesson here is that people who write tools for development seem to assume that everyone who will use those tools is going to be online, all the time, with a decently fast and wide connection. And that, my friends, is Just Not True.

There’s still a huge section of the population of the Earth who are excluded, by implicit design, from participating in the creation of useful software.

Khaaaaaaaaan!

Okay, so I caught COVID a week ago, and I’m confined to quarters until I stop testing positive. This is fine. Except that the stateroom is a pretty effective Faraday cage and I can’t get online much. Tired of watching BBC adverts for BBC shows, and tired of playing video games, I started poking deeper and deeper into this problem with OpenSSL I’ve been having. And it turns out I found the answer! Which I can’t apply until I’m online again!

First: Kitura/SMTP is what my app uses to send transactional emails.

Second: Under the hood, SMTP uses Kitura/BlueSocket for network communications, and adds in Kitura/BlueSSLService to supply SSL support.

Third: On Linux, BlueSSLService uses Kitura/OpenSSL to bridge over to whatever version of OpenSSL is installed on the system.

And finally, Kitura/OpenSSL only supports the OpenSSL 1.x line (there is no 2.x; OpenSSL jumped straight from 1.1 to 3.0).
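For the record, here’s roughly what that chain looks like from my end. A minimal Package.swift sketch; the package URL, product name, and version are from memory and may well be off:

    // swift-tools-version:5.8
    import PackageDescription

    let package = Package(
        name: "MyServer",
        dependencies: [
            // My code only declares the SMTP package. BlueSocket,
            // BlueSSLService, and Kitura/OpenSSL all arrive transitively,
            // and that last one is what caps the system OpenSSL below 3.x.
            .package(url: "https://github.com/Kitura/Swift-SMTP.git",
                     from: "6.0.0"),  // version is a guess
        ],
        targets: [
            .executableTarget(
                name: "MyServer",
                dependencies: [
                    .product(name: "SwiftSMTP", package: "Swift-SMTP"),
                ]
            ),
        ]
    )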

Meanwhile, Ubuntu “jammy” has started shipping with OpenSSL 3.x. So, in order to compile all my dependencies I need to downgrade the SSL library and headers to a 1.1.x version.

Now, I suppose I could revert to Ubuntu “focal”, except that this app gets deployed to Heroku and Heroku wants everyone up on “jammy”.

Or, I guess I could hang from a rope until Kitura/OpenSSL gets updated to support 3.x. There’s no way that could go wrong.

Pigs in Space

The other day I saw a lecture on space travel technology. The speaker was talking about how fast various kinds of propulsion systems (existing and potentially possible within the next 30-50 years) could get a vessel going, and how long that vessel would take to get to the nearby stars.

That got me thinking about the problems of sending humans any significant distance through space (well, that and playing some Kerbal Space Program). Basically, interstellar travel is going to be the domain of robots and seeds, since sending a self-supporting (and repairing!) biome for multiple generations of humans requires a fantastic amount of mass.

And then, what would the people sending the robots want them to do? As others have pointed out, so far we’ve essentially been sending out dick pics and plaques saying, “Come over to my place!”

And now I’m concerned that some bright spark is going to think, “Oh, I know, the best way to introduce humanity to the universe is to embed ChatGPT in our outbound robots.” Because why not unleash SimFratBoy — what could possibly go wrong?

Small Chunks

We’re on our way home, and there are mostly sea days from now on. So, I thought I’d go ahead and do a bit of work on my server code. First up, I can’t compile my servers any more because BlueSSL (which is required by the SMTP package, which my servers use to send transactional emails) won’t compile. My current working assumption is that the Ubuntu Linux image updated a header or a file location for OpenSSL and now BlueSSL can’t find it (it’s bitching about not being able to resolve an SSL function which OpenSSL says is right there). This is clearly outside my application’s domain and not under my control, and it’s reminding me why, exactly, open source sucks ass.

So anyway, I thought: it’s been a couple of weeks; maybe some maintainer somewhere has noticed this problem and fixed it. So I tried rebuilding my Swift development docker image.

 => [linux/arm64 mirror-build 1/8] FROM docker.io/library/swift:5.8-jammy@sha256:eedcd40b29e61d3fa0fa333d  2080.2s
 => => resolve docker.io/library/swift:5.8-jammy@sha256:eedcd40b29e61d3fa0fa333d91bbf7a73da2b77150ef53c3a261  0.0s
 => => sha256:ca3138c028ed18fe151acf0260f2938290c30652ed60496b7b398626d354edce 33.82MB / 550.34MB          2080.2s
 => => sha256:a4377f1ec599c588f38e584d8c534227fec227db5a2480904d807697bf53e8a8 49.28MB / 170.66MB          2080.2s

Apparently, docker gives up if it can’t download a given image within 2080 seconds (34 minutes, 40 seconds).

You know what would be cool? Maybe an option to tell docker, “Look. I’m on a shitty small pipe that actually disconnects every two hours, so maybe you should just download little tiny chunks and cache them locally until you’ve got them all. And keep retrying, in case that two hour window just came along and I had to reauthenticate with the local network portal. Because, and I know this is surprising, not everyone in the world has access to a bitchin’ fast connection to the rest of the world, and it might take a little time.”

You know what? I might even pay money for that.
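For what it’s worth, the guts of that wish aren’t exotic, at least for a single blob fetched over plain HTTPS from a server that honors Range requests (whether a registry’s layer endpoints qualify is a separate question). Here’s a sketch, with exactly the stubborn retry policy the rant asks for; it also assumes an async URLSession, which may not be available everywhere:

    import Foundation
    #if canImport(FoundationNetworking)
    import FoundationNetworking  // URLSession on Linux
    #endif

    // Download url into destination in small Range-request chunks, appending
    // each chunk to disk so a dropped connection (or a captive-portal
    // re-authentication) only costs us the chunk in flight.
    func downloadInChunks(from url: URL, to destination: URL,
                          chunkSize: Int = 1 << 20) async throws {
        let fm = FileManager.default
        if !fm.fileExists(atPath: destination.path) {
            fm.createFile(atPath: destination.path, contents: nil)
        }
        let handle = try FileHandle(forWritingTo: destination)
        defer { try? handle.close() }
        // Resume from however many bytes we already have on disk.
        var offset = try handle.seekToEnd()
        while true {
            var request = URLRequest(url: url)
            request.setValue("bytes=\(offset)-\(offset + UInt64(chunkSize) - 1)",
                             forHTTPHeaderField: "Range")
            let data: Data
            let response: URLResponse
            do {
                (data, response) = try await URLSession.shared.data(for: request)
            } catch {
                // The link dropped or the portal wants us to log in again:
                // wait a bit and retry the same chunk instead of giving up.
                try await Task.sleep(nanoseconds: 5_000_000_000)
                continue
            }
            guard let http = response as? HTTPURLResponse else {
                throw URLError(.badServerResponse)
            }
            switch http.statusCode {
            case 206:  // partial content: append and keep going
                try handle.write(contentsOf: data)
                offset += UInt64(data.count)
                if data.count < chunkSize { return }  // short chunk: end of file
            case 200:  // server ignored Range and sent everything; take it
                try handle.truncate(atOffset: 0)
                try handle.write(contentsOf: data)
                return
            case 416:  // we asked past the end, so we already have it all
                return
            default:
                throw URLError(.badServerResponse)
            }
        }
    }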

Hell Is Certain Other People

So, there’s this basic tactic of the pitchman: promise to tell the customer something interesting, but then go off and talk about other stuff. Periodically, the other stuff gets interrupted with a reminder that if the customer keeps listening, they’re about to get the really great thing, but first, there’s this other stuff to be talked about…

Continue reading “Hell Is Certain Other People”

More Disconnection Thinking

Here on the ship, we have satellite Internet. This means that, at a good moment, I’ve got round-trip ping times to google.com of about two-thirds of a second (~660 ms) and download speeds that seem to average around a megabyte per minute. Furthermore, access to the satellite Internet is on a paid basis, so you have to authenticate with the ship’s portal before you can get to the rest of the world. Oh, and the portal requires you to re-authenticate every two hours.

Now, when we’re ashore, we can get data on our phones. We have a swell (expensive, but swell) international roaming plan, which means that when we get into port we get this swell text:

Welcome to Indonesia. You have TravelPass, which lets you use your domestic plan for $10 a day. Pay only on the days you use your mobile service. Use talk, text, or data to start your session. You’ll get 2GB of high speed data. Then enjoy unlimited 3G data for the remainder of the session.

Which is great-ish, when the cell network isn’t overloaded by, say, a couple thousand other people trying to download stuff.

Meanwhile, my phone operating system is a couple of point releases down-rev from the latest. At least one of these releases is a security update, and I’d really like to get that applied since I’m far from my home network. Oh, and how big is that software update I want to download? And it’s competing with what other baloney the apps on my phone are trying to download at the same time? There’s the “solve this particular problem” approach, which is to force my phone to download the update as soon as we hit port, and leave it plugged into a backup battery so that it thinks it’s docked and doesn’t abort the process. But that’s really ignoring the fundamental problem: sometimes, you really do need to receive a large file sent over a slow and/or unreliable link.

Back in the days of dial-up, this was handled by the transport layer and we were all a lot closer to that transport layer. An xmodem file transfer would fail at 98% of the way through if there was suddenly a bunch of line noise (like your roommate picking up the extension) and you’d have to start all over again. So, we used zmodem and were able to resume a download. You can still see this kind of feature in a browser, when you’re downloading some godawful huge file and then, in the middle of the download, you unplug your network. You can often resume the download.

This “resume download” feature isn’t part of the operating system, though (not on Linux, iOS, Android, Windows, or macOS, anyway), but rather a feature of the program initiating the download — in the above example, the web browser. So, what if the thing that’s trying to do this massive download (or upload) isn’t a web browser, but a standalone application? Unless the developers of both the application you’re interacting with and the remote application it’s talking to have already thought about this problem, the answer is probably, “You’re stuck.”

Now, it’s easy to start decrying the popular frameworks (Electron, Flutter, et al.) that hide even the possibility of handling this well deep inside some barely, if at all, accessible network library, but that’s neither productive nor even fair. It’s still a hell of a lot of work even when you have access to the file system and the low-level networking calls to send/receive a small buffer of bytes. The real situation is, most developers don’t even consider what this somewhat-connected experience is like.

When HTTP was the protocol for data transfer, you could run a caching proxy and then do silly tricks like fire off requests through the proxy for the big things and then unplug your computer. Weaknesses of this hack aside, that’s not how we do things anymore. Nowadays, we assume that there are bad guys warping all the data we send and receive, so we encrypt everything. That means that sticking a proxy in the middle just won’t work — to the apps on both ends, the proxy looks the same as a Bad Guy.

So. Developers. When you’re coming up with a server API that allows for transfer of large blobs of data, please consider a RESUME endpoint.

And, developers, when you’re writing some client for an API that provides a RESUME endpoint, make use of it.
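In plain HTTP terms, a “RESUME endpoint” mostly means advertising and honoring the Range header. A minimal Vapor-flavored sketch; the route, the storage path, and the whole-file-in-memory shortcut are all placeholders, and it only parses the simple “bytes=N-” form:

    import Vapor

    func routes(_ app: Application) throws {
        app.get("blobs", ":name") { req -> Response in
            let name = try req.parameters.require("name")
            // Hypothetical storage location; a real server would stream
            // from disk instead of loading the whole blob into memory.
            guard let data = FileManager.default
                .contents(atPath: "/var/blobs/\(name)") else {
                throw Abort(.notFound)
            }
            var headers = HTTPHeaders()
            // Tell interrupted clients they can come back for the rest.
            headers.add(name: "Accept-Ranges", value: "bytes")
            // "Range: bytes=12345-" is what a resuming client sends.
            if let range = req.headers.first(name: "Range"),
               range.hasPrefix("bytes="),
               let start = Int(range.dropFirst(6).prefix(while: { $0 != "-" })),
               start < data.count {
                headers.add(name: "Content-Range",
                            value: "bytes \(start)-\(data.count - 1)/\(data.count)")
                return Response(status: .partialContent, headers: headers,
                                body: .init(data: data.subdata(in: start..<data.count)))
            }
            return Response(status: .ok, headers: headers, body: .init(data: data))
        }
    }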

Hobby Utility

So, we’re on a cruise ship and I got to thinking about the population and the environment and I developed a morbid curiosity. More about that below the fold. The high point, though, is that I’ve written another application — this one for the iPhone — that fills a neat, narrow niche: it’s a combination tally counter / lap timer. I call it MTBP, for Mean Time Between Phlegm, and that right there is a big clue about the rest of the post.
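The core of a gadget like that is tiny, which is what makes it a good hobby project. A sketch of the model, with invented names (the real app’s internals may differ):

    import Foundation

    // Record a timestamp per tap; the count is the tally, and the mean
    // gap between taps is the "MTBP".
    struct TallyTimer {
        private(set) var taps: [Date] = []

        mutating func tap(at date: Date = Date()) { taps.append(date) }

        var count: Int { taps.count }

        /// Mean time between consecutive taps, in seconds.
        var meanInterval: TimeInterval? {
            guard taps.count > 1, let first = taps.first, let last = taps.last
            else { return nil }
            return last.timeIntervalSince(first) / Double(taps.count - 1)
        }
    }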

Continue reading “Hobby Utility”