Web dev in DC http://ross.karchner.com

Man Missing: Scarlet Crow's Fateful Visit to Washington, D.C.


On the night of February 24, 1867, in the nation’s capital, Scarlet Crow, a visiting Sioux chief, mysteriously disappeared. No one knows for sure what happened. Sisseton-Wahpeton Oyate oral history holds that he was kidnapped, while the Evening Star newspaper suggested that he had simply wandered off and gotten lost. What is indisputable, however, is that after that night, Scarlet Crow was never seen alive again.


Quadlet, an easier way to run system containers


Kubernetes and its like are an excellent way to run containers in the cloud. And for development and testing, manually running podman is very useful (although do check out toolbox). But sometimes you really want to run a system service using a container. This could be on your laptop, a NUC, or maybe some kind of edge or embedded device. The container should automatically start at boot, restart on errors, etc.

The recommended way to do this is to run podman from a systemd service. A lot of work has gone into podman to make this work well (and it constantly improves), and there is plenty of documentation around the internet on how to do this. Additionally, podman itself has some tooling to help you get started (see podman generate systemd). But the end result of all of these is a complex, hard-to-understand systemd unit file with a very long “podman run” command that you have to maintain.
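For a sense of the pre-quadlet workflow this replaces, the usual sequence looks roughly like this (a sketch, not taken from the original post; the container name redis is just an example):

# Run the container once by hand, then ask podman to write a unit file for it
podman run -d --name redis -p 6379:6379 docker.io/redis
podman generate systemd --new --files --name redis

# This typically drops a container-redis.service file in the current directory,
# which you would then copy under /etc/systemd/system and maintain by hand:
# the long, generated unit file the author is talking about.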

There has to be a simpler way!
Enter quadlet.

Quadlet is a systemd generator that takes a container description and automatically generates a systemd service file from it. The container description is in the systemd unit file format and describes how you want to run the container (e.g. which image to use, which ports to expose, etc.), as well as standard systemd options, like dependencies. However, it doesn’t need to bother with the technical details of how a container gets created or how it integrates with systemd, which makes the file much easier to understand and maintain.

This is most easily demonstrated with an example:

[Unit]
Description=Redis container

[Container]
Image=docker.io/redis
PublishPort=6379:6379
User=999

[Service]
Restart=always

[Install]
WantedBy=local.target

If you install the above in a file called /etc/containers/systemd/redis.container (or /usr/share/containers/systemd/redis.container), then during boot (and at systemctl daemon-reload time) this is used to generate the file /run/systemd/generator/redis.service, which is then made available as a regular service.
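To try this out end to end, the steps are just the standard systemd ones (a minimal sketch, assuming the redis.container file above is already installed):

# Regenerate units so the new .container file is picked up
systemctl daemon-reload

# Start and inspect it like any other service
systemctl start redis.service
systemctl status redis.service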

To get a feel for this, the above container file generates the following service file:

# Automatically generated by quadlet-generator
[Unit]
Description=Redis container
RequiresMountsFor=%t/containers
SourcePath=/etc/containers/systemd/redis.container

[X-Container]
Image=docker.io/redis
PublishPort=6379:6379
User=999

[Service]
Restart=always
Environment=PODMAN_SYSTEMD_UNIT=%n
KillMode=mixed
ExecStartPre=-rm -f %t/%N.cid
ExecStopPost=-/usr/bin/podman rm -f -i --cidfile=%t/%N.cid
ExecStopPost=-rm -f %t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run --name=systemd-%N --cidfile=%t/%N.cid --replace --rm -d --log-driver journald --pull=never --runtime /usr/bin/crun --cgroups=split --tz=local --init --sdnotify=conmon --security-opt=no-new-privileges --cap-drop=all --mount type=tmpfs,tmpfs-size=512M,destination=/tmp --user 999 --uidmap 999:999:1 --uidmap 0:0:1 --uidmap 1:362144:998 --uidmap 1000:363142:64538 --gidmap 0:0:1 --gidmap 1:362144:65536 -p=6379:6379 docker.io/redis

[Install]
WantedBy=local.target

Once started, it looks like a regular service:

● redis.service - Redis container
     Loaded: loaded (/etc/containers/systemd/redis.container; generated)
     Active: active (running) since Tue 2021-10-12 12:34:14; 1s ago
   Main PID: 1559371 (conmon)
      Tasks: 8 (limit: 38373)
     Memory: 32.0M
        CPU: 387ms
     CGroup: /system.slice/redis.service
             ├─container
             │ ├─1559375 /dev/init -- docker-entrypoint.sh redis-server
             │ └─1559489 "redis-server *:6379"
             └─supervisor
               └─1559371 /usr/bin/conmon --api-version 1 -c 24184463a9>

In practice you don’t need to care about the generated file; all you need to maintain is the container file. In fact, over time, as the podman/systemd integration improves, quadlet may generate slightly different files to take advantage of new features.

In addition to being easier to understand, quadlet comes with a set of defaults for how the container is run that better fit the use case of running system services. For example, it defaults to running without any capabilities, it runs a basic init process in the container, it uses the journald log driver, and it sets up the cgroups in a mode that best matches what systemd needs.
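Two practical consequences of those defaults, shown as a sketch (the unit and container names follow the generated file above):

# Because of the journald log driver, container output lands in the journal
# next to every other system service
journalctl -u redis.service

# The container is still a normal podman container, named systemd-<unit name>
# as set by the --name=systemd-%N option in the generated ExecStart line
podman ps --filter name=systemd-redis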

Right now this is a separate project, but I’ve been in touch with the podman developers, and there are some discussions about making this feature part of podman instead. But until then, you can use it from github.com/containers/quadlet, and I have made a COPR build available for experimenting.

For more information see the docs linked from the README.


The Web vs Unix


I would like to share my perspective on what’s wrong with Unix, this time by comparing and contrasting it with the Web.

User Agent

With some amount of stretch, one could say that the Unix equivalent of the browser (the user agent) is the shell together with a terminal. You interact with it and it does things for you, like the browser.

User Interface

The web browser can render both textual and graphical content. The Unix shell relies on the terminal (usually an emulator mimicking decades-old hardware) for its user interface, and in most cases it is limited to fixed-width text.

The main difference is how you can interact with the content. In the shell – you can’t. The shell is out of the game. The content that you see on your screen goes from a program straight into the terminal, bypassing the shell. Compared to a browser, that sounds insane: you are simply unable to interact with the content that you see. Ironically enough, this is happening in what’s called an “interactive shell”. Some terminals match the text against a predefined set of patterns and allow some minimal interaction, such as clicking a link to open it in a browser.

The browser is a strict superset when we look at interaction capabilities: you can type, and you can interact with the objects on the screen by clicking them. What an amazing concept! Maybe some day shells will be able to do that too! Meanwhile, in the shell, you type your commands and get a dump of text back, with rare exceptions.

I would summarize that the shell is a shitty user agent. 💩

I can already hear the “the shell is not supposed to do that” argument coming. My opinion: the shell is supposed to do whatever is needed for me to be productive. If it’s your “Unix Philosophy” versus my productivity, then you can keep using Notepad (or ed, the standard text editor, for that matter) and I will be using an IDE, OK?

Layout Engine

It is also called a browser engine – that’s because on the web it lives in the browser. But where is it on Unix? Everywhere. Yes, “make each program do one thing well” went out the window long ago. Each program does (hopefully) one thing, and then it also does the layout of its output.

Each program has the following main options for handling input and output:

  1. Primitive output – the program dumps some text to standard output. Let’s include colored text here; it’s just a few additional color codes. This is equivalent to not having a layout engine. Sample program in this category: grep.
  2. Interactive UI – the program uses ncurses or a similar library. This is a relatively small number of programs.
  3. Layout engine – the program contains some form of layout engine. This is pretty common, and you can see it in action in the sketch after this list. Sample programs in this category: ls, ps, top, diff (column output), wc, …
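A quick way to convince yourself that the layout lives inside each program rather than in the shell or terminal (a minimal sketch; the directory is just an example):

# On a terminal, ls runs its own column layout
ls /usr/share/doc

# When piped, the very same program switches to one name per line,
# because its built-in "layout engine" detects that stdout is not a tty
ls /usr/share/doc | head -3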

Common issues with these “layout engines” that cause an unpleasant, broken view in Unix include:

  1. Improper handling of data that is wider than some hard-coded fixed value
  2. Improper handling of Unicode
  3. Failure to accommodate “unexpected” terminal escape codes in the input (which, after processing, find their way into the output of utilities like sed)

TCP/IP

Pipe

Let’s talk about pipes – before everybody gets offended and says that the pipe is the sacred cow, the best feature of Unix. Yes, it probably is.

Pipes would roughly correspond to the TCP/IP protocols. Pipes deliver data. For now, let’s leave aside the fact that they are unidirectional, as opposed to TCP, which is bidirectional.

Since the web is a stack of protocols, the obvious question is how the other parts of the stack correspond. Read on.

HTTP

HTTP would correspond to text. Well, mostly text. Sometimes null-character-separated records. Sometimes something else. That’s the standard “format” for communicating between Unix applications.

“Write programs to handle text streams, because that is a universal interface.” – Basics of the Unix Philosophy.

The original claim is that text is best for interoperability: a large number of utilities take text as input and also output text, manipulating it in many different ways in between. Sounds like a dream. Except in reality, this dream turned out to be a nightmare.

Incompatible, ad-hoc formats

Text on Unix is not a single format. It’s a bunch of ad-hoc formats, typically incompatible between different programs. That’s why we have a variety of tools such as sed, cut, awk, and the like. Here is my hot take: these tools are not solutions, they are workarounds. When you don’t have a protocol for communicating between applications, you need a bunch of adapters. Like Sisyphus, one needs to write these adapters. All the time. Forever. Text parsing and manipulation feel like a core part of Unix. From my perspective, they are accidental complexity.

On a philosophical note: the “universal interface” should have been a stream of bytes, or maybe even bits. I guess that was not found to be very useful. Apparently, if you add a line-separator character, it is good enough to become a recommendation. But why stop there? Maybe add more structure? Maybe accommodate the fact that most of the data is either records with named fields or tables with named columns? Are you sure you counted the columns right for your awk '{print $8}'?
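To make the column-counting fragility concrete, here is a small sketch (the field positions are typical of GNU ls and ps output, not guaranteed by any specification):

# Field 9 of "ls -l" is usually the file name...
ls -l | awk '{print $9}'

# ...until a name contains a space; then the "record" silently splits
# and $9 holds only the first half of the file name
touch 'release notes.txt'
ls -l | awk '{print $9}'

# Same story with ps: the command is usually column 11 in "ps aux" output,
# but anything after a space in the command line spills into $12, $13, ...
ps aux | awk '{print $11}'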

This is in contrast to HTTP, which is spoken by everyone on the Web.

Some Hope

Newer CLIs usually do have an option to output JSON or (less prevalently) YAML. They are forming a new ecosystem with a different set of tools. From my perspective, this proves the point that the “universal interface” might not be that universal, nor as productive as envisioned (dare I say “unacceptable”?).

Should it be the half-way structured, a.k.a. semi-structured, JSON? Is that the sweet spot? I mean, why stop there? Maybe we need something with a schema? Let me know what you think.

One of the notable projects, jc, is an adapter between the “universal interface” and something that you would actually like to work with.
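As a rough illustration of what working through such an adapter looks like, here is a sketch using jc together with jq (the field names follow jc’s ls parser; the size threshold is arbitrary):

# Classic approach: count whitespace-separated columns and hope for the best
ls -l /usr/bin | awk '$5 > 100000 {print $9}'

# With jc, the same data becomes JSON records with named fields,
# queried by name instead of by position
ls -l /usr/bin | jc --ls | jq -r '.[] | select(.size > 100000) | .filename'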

Shameless Plug

If only we could have a shell that could play a role in this new ecosystem… or maybe even push it a bit in the direction of having semantically meaningful objects on the screen so that interaction would be possible…

Yes, I am aware of other projects solving the same issue. While we mostly agree on the problem, I haven’t yet seen a project which sees the solution the same way as I do.


Happy DevOps-ing!




Privacy-focused Linux Distributions to Secure Your Online Presence in 2021


Linux distros are usually more secure than their Windows and Mac counterparts. Because Linux operating systems are open source, there is very little scope for unauthorized access to their core. However, with the advancement of technology, incidents of attack are not rare.

Are you alarmed by recent reports of malware attacks targeting Linux systems? Worried about your online presence? Then maybe it’s time to go for a secure, privacy-focused Linux distro. This article presents a guide to 3 privacy-oriented Linux distributions that respect your privacy online.

Why You Need a Privacy-focused Linux Distro

But before jumping into that, let’s take a brief look at why a secure Linux operating system matters. You may know that the operating system is the core software of your computer. It maintains communication across the system’s hardware, software, memory, and processor, and it also manages the hardware itself.

If your computer isn’t secure enough, hackers can gain easy access to the OS and exploit it to view your files and track your presence on the internet. Privacy-focused Linux distributions offer plenty of good choices, packed with reliable features, to select from.

5 Privacy-focused Linux Distributions

Now let’s take a look at the most privacy-focused Linux distros that help you stay secure.

Septor Linux

Septor Linux is an OS created by the Serbian Linux project, which also produces a Serbian-language general-purpose Linux distribution. Septor uses the KDE Plasma desktop environment and is a newcomer among the other distros.

The Septor operating system offers a stable and reliable user experience. It’s suitable for a wide range of computers because it is built on Debian GNU/Linux, so a solid level of privacy is what you can expect. The distro earns its privacy credentials by routing all internet traffic through the Tor network. It used to rely on a launcher script to pick up the latest Tor; now Tor comes bundled with it by default.


visurf, a web browser based on NetSurf


I’ve started a new side project that I would like to share with you: visurf. visurf, or nsvi, is a NetSurf frontend which provides vi-inspired key bindings and a lightweight Wayland UI with few dependencies. It’s still a work-in-progress, and is not ready for general use yet. I’m letting you know about it today in case you find it interesting and want to help.

NetSurf is a project which has been on my radar for some time. It is a small web browser engine, developed in C independently of the lineage of WebKit and Gecko which defines the modern web today. It mostly supports HTML4 and CSS2, plus only a small amount of HTML5 and CSS3. Its JavaScript support, while present, is very limited. Given the epidemic of complexity in the modern web, I am pleased by the idea of a small browser, more limited in scope, which perhaps requires the cooperation of like-minded websites to support a pleasant experience.

I was a qutebrowser user for a long time, and I think it’s a great project given the constraints that it’s working in — namely, the modern web. But I reject the modern web, and qute is just as much a behemoth of complexity as the rest of its lot. Due to stability issues, I finally ended up abandoning it for Firefox several months ago.

The UI paradigm of qutebrowser’s modal interface, inspired by vi, is quite nice. I tried to use Tridactyl, but it’s a fundamentally crippled experience due to the limitations of Web Extensions on Firefox. Firefox has more problems besides — it may be somewhat more stable, but it’s ultimately still an obscenely complex, monstrous codebase, owned by an organization which cares less and less about my needs with each passing day. A new solution is called for.

Here’s where visurf comes in. There’s a video of it in action in the original post.

I hope that this project will achieve these goals:

  1. Create a nice new web browser
  2. Drive interest in the development of NetSurf
  3. Encourage more websites to build with scope-constrained browsers in mind

The first goal will involve fleshing out this web browser, and I could use your help. Please join #netsurf on irc.libera.chat, browse the issue tracker, and send patches if you are able. Some features I have in mind for the future are things like interactive link selection, a built-in readability mode to simplify the HTML of articles around the web, and automatic redirects to take advantage of tools like Nitter. However, there are also more fundamental features to do, like clipboard support, command completion, even key repeat. There is much to do.

I also want to get people interested in improving NetSurf. I don’t want to see it become a “modern” web browser, and frankly I think that’s not even possible, but I would be pleased to see more people helping to improve its existing features, and expand them to include a reasonable subset of the modern web. I would also like to add Gemini support. I don’t know if visurf will ever be taken upstream, but I have been keeping in touch with the NetSurf team while working on it and if they’re interested it would be easy to see that through. Regardless, any improvements to visurf or to NetSurf will also improve the other.

To support the third goal, I plan on overhauling sourcehut’s frontend[1], and in the course of that work we will be building a new HTML+CSS framework (think Bootstrap) which treats smaller browsers like NetSurf as a first-class target. The goal for this effort will be to provide a framework that allows for conservative use of newer browser features, with suitable fallbacks, and with enough room for each website to express its own personality in a manner which is beautiful and useful on all manner of web browsers.


  [1] Same interface, better code.


Ship / Show / Ask: A modern branching strategy


I've written a fair bit about how using pull requests can encourage a low integration frequency, increasing cycle time and discouraging refactoring. Rouan Wilsenach has had success using an approach that categorizes changes as Ship/Show/Ask - using this classification to decide whether and how to use pull requests.

