Firstly…
Merry Christmas!!! ^-^
I hope you all have a wonderful Christmas, and sorry for not posting in a while. ;-;
So to initiate this post, let's have a thought experiment. Imagine yourself a part of a club or project of any variety your heart desires, in which you and the member participants are all salient contributors to and users of the particular hobby or subject. All people generally follow the informal guidelines, as most people are familiar with one another on a personal basis. Now, say this club/project garners enough attention that people who are not directly friends with its founders join. These newcomers will generally accept the state of the project for what it is and engage with it in the same way as those there before them. Where this takes a turn is when people join on a false basis, where a false basis is simply joining not for the associated project itself; say, a friend or significant other who is bored and follows an existing new member to the original congregation.
Aside from the fact that someone has made the cognitive blunder of taking part in something they don't value and/or like, such an individual may then take issue with the organisation or methodology of the "club", due to their negative predisposition. Now, due to the cultural zeitgeist of inclusivity, these individuals have the means to berate these methods as being discriminatory. Say I am a member of a cryptography/combinatorics club where members enjoy sharing and solving puzzles of that variety, and as an introduction into the group one would be presented with such a puzzle. This is a litmus test to see what techniques someone might employ, which also allows for the segue into a general conversation about the club's premise. Cryptographic puzzles!
Now this new person, who has no interest, might take issue with this, as they still have the underlying wish to converse with their preexisting acquaintances about whatever existing topic; a usual common denominator is something like pop culture or interpersonal relationships. But they don't wish to do the pesky test which stops them from doing so, and as such declare it gatekeeping and invoke something like a public vote against such a practice. Due to the small number of participants, direct democracy is quite reasonable, and due to the aforementioned zeitgeist it will likely yield to the benefit of the "new person".
The core tenets of the original club are thus negated and replaced with the discussion of whatever may be the lowest common denominator of any individual. Bear in mind that these people may have their own clubs in which they discuss, perhaps, music or model train sets, but having no interest in cryptography they would instead rely on talking about said common topics.
The main questions of this post are then raised: is the Pareto effect (here being used to measure the interest in and contribution to the original subject of the club) definitionally thwarted by democratic processes? And is the gatekeeping of communities a viable method to prevent the dissolution of a community?
This is basically a rehashing of the tragedy of the commons, a list of resources for which is in the footer. However, its case is often overlooked, especially on the internet with its ephemeral nature. Subjects of this issue are practically everywhere, the following being the ones it pertains to most painfully (for me):
These may seem broad, especially the last two; however, the fact that they are discussed in other places makes the case that it is probable.
As a rant about one in particular: there has been the forever-looming promise of 'The year of the Linux desktop' that has spurred the hands of many into action towards the creation of software for the platform, in the hope that there may exist a world in which it is the predominant operating system of home computers. However, this aim insidiously necessitates the simplification of the interface at the expense of complexity in the creation of the software, putting those familiar with the system at odds with using it as productively as before, while the new users will never be able to use it in such a manner; thereby neutering what makes the system useful in any measure. This already happened with Windows and the various Darwin iterations, and anyone who is mildly literate in the maintenance of an operating system can say how awful they are for anything more than the simplest of tasks.
Tocqueville, A. (1835). “Democracy in America.”
Ostrom, E. (1990). “Governing the Commons.”
Freeman, J. (1972). “The Tyranny of Structurelessness.”
Opinions on phones these days feel like dream recountings. I will simply never care why someone prefers Android vs iOS. There is nothing anyone will ever say that would surprise me. Oh, you like FaceTiming your dog? Wow… Oh, you want to be able to make your homescreen look like the result of my 7th grade html class? Very cool!
I used to know a whole lot about phones. Every release from every major manufacturer, I would watch the MKBHD video, the LTT video if they did one, and I would brag to myself (I think this is called pride) that I could recognize any phone by its rear face. It wasn't that hard a few years ago because there were fewer smartphones in use then. I think because post-2017 phones are actually good, with decent cpus, there are more phones in circulation now.
This is to say that I really enjoyed knowing about smartphones. I don't enjoy consumer electronics content as much anymore. Knowing things is not a hobby, much less if the subject knowledge only exists to answer the question "What X should I buy?". Headphone nerds, you're on watch too, by the way. Y'all only get away with it because to know about headphones you have to know about audio, and to know about audio you need to know some kind of cool physics, electronics, and (probably minor) biology.
Since 2018, I’ve been whole hog into iOS. I don’t care about the ecosystem. My iCloud has been broken since 2021 because it says “not enough storage to backup” regardless of how little I ask it to back up. In fact I don’t think the ecosystem exists. I have a couple categories of apple ecosystem features:
I also have the watch, which to be fair has some kind of argument for being a good ecosystem moment. When I set an alarm or get a notification, my watch vibrates in tandem. This is a big deal to me because I snooze a lot and I don't like torturing my beautiful wife out of her much needed rest. This is the only ecosystem feature that gives me pause for ditching my iPhone. Are there good watches that pair to androids? I remember LTT's Linus would wear an apple watch without an iPhone in protest because, although he hates iPhones, he hated more the fact that there wasn't an equivalent watch for android.
The reason I’m taking pause and looking left and right for a reason to switch to android is because of my newfound resentment for non-free software. It’s really, really bad actually.
I'm an AV integration software developer by trade. This means I'm working with devices like microphones, cameras, displays, video matrix switchers, a bunch of random stuff from random manufacturers, some you have heard of and some you haven't. Each of these devices typically has an ascii protocol to talk to it, and that's fine. Every manufacturer will make a different protocol for the various commands you can give it, and that's fine too. Standardizing the commands you can give to devices would be like standardizing the food given to animals at the zoo. And some of them are tricky: sometimes there are variables you have to embed within a command, typically just an identifier like 0x31 (ascii for "1"), but other times it's a bit worse, like having a BCC where you XOR every byte in the message bytes, but not the header bytes because those are always the same, and not the footer bytes because that's where the BCC goes! Like okay, sure, but with one device there wasn't even an ID, which means that every possible command and its corresponding BCC is known ahead of time, so why not just include it in your docs?
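To make the BCC idea concrete, here is a minimal sketch in Go of the scheme described above. The header bytes and command are made up for illustration, not any specific manufacturer's protocol:

package main

import "fmt"

// bcc XORs the message bytes together; per the scheme above, the
// fixed header is excluded and the result becomes the footer byte.
func bcc(payload []byte) byte {
	var c byte
	for _, b := range payload {
		c ^= b
	}
	return c
}

func main() {
	header := []byte{0xAA, 0xBB}  // fixed preamble, never checksummed
	payload := []byte("PWR ON 1") // command with the embedded ascii ID "1" (0x31)
	msg := append(append(header, payload...), bcc(payload))
	fmt.Printf("% X\n", msg) // full frame: header + payload + BCC
}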
Anyways, people have noticed this problem and make libraries for talking to devices. In this instance, there is a proprietary programming ecosystem (Crestron's Simpl Windows/+/#). The language itself is proprietary and was written 25 years ago, but that's why they have Simpl#, which is "just" a C# library. And what you might do is use that C# library and compile it for use in Simpl Windows for the worse, more traditional AV programmers. The problem is, when people ship this they only distribute the compiled C#, no source. This means that after a certain point, it's locked and you can't look into the library you're using. Reminder: the only thing this library is doing is building a string.
This was a nexus event because I am sitting there, on site, with the customer hovering over my shoulder as I'm trying to get the processor to send a device the correct string command, and it just won't send. I go into the module, goto definition a couple times, pull out a handy ctrl+f, and bam! Compiled C# library, do not pass go, do not collect $200. I'm sitting there dumbfounded why the function call isn't doing what I'm expecting, and I have to sit there and say "I don't know and they won't tell me". It was kind of humiliating, like I'm the horrible programmer that introduced a mismatched state condition that makes the function not work. Maybe they pessimistically made it not send unless some other condition is met? (I love OOP, by the way)
I resolved it by grabbing the API docs and sending the string myself. This raises the question of why I was using a library if the solution was so simple, but this is a post about phones!
I want to goto definition until I am at the top of the tree. This obviously doesn’t apply to operating systems as much because you have to be using the post-compiled version, but it still stands. I will never understand the inner workings of the linux kernel or much of the coreutils to have this aspiration pay off. But by golly it is worth pursuing!
Android is a soup of poor technical decisions (JVM) made 15 years ago and a dozen manufacturers gluing their proprietary nonsense on top. It is not an open source platform and it infuriates me when people think it is. If you’re using any major vendor, it is just as closed as iOS. And good luck disabling google play services. It’s just not tenable. If Google had GPL’d Android way back when we would be living in the space age by now, but in our reality it is just horrible.
Spotify and YouTube’s APKs are closed, I get that and I’m not against it. It’s a natural tension of commercial interests vs free software. But the OS being as technically and spiritually corrupted as it is, it’s just not an option for the serious person using their phone for serious work.
I want to be convinced that I’m wrong about android. I want to get a 10 year phone that runs a properly open source operating system, even if its UX is a bit worse than iOS. And to be completely real with you, I started this questioning phase when iOS 18 rolled out and completely nuked the Control Center [1]. Everyone complains about iOS 18’s Photos app, but the real victim is Control Center. A bastion of system controls that should be replicated everywhere has now been neutered by apple’s desire to appeal to the worst and most annoying criticism Android users (not iOS users!!) have been making: the lack of customization.
I liked how iOS tells you to go fuck yourself when you want to make gaps in your home page. I liked how custom app icons are up to the app developer to give out. I liked how control center was there to say "these are your system controls, changing audio source and wifi and such, and if you want some extra buttons like booting up Shazam or a timer shortcut, you can put it there and nowhere else". This consistency is something to admire. It's a restriction that allows fewer neurons to fire when you need a simple task done. Anyone who uses vim bindings in their editor will understand what this means in practice. The buttons are there! Use them! If you want the button some place else, why? What value does this provide? It provides none, and the detriment of adding buttons for the sake of it is tangible.
In my line of work, I set up a tablet for a user of the AV equipment to control the video/USB routing, camera controls, things like that. Imagine you're a professor in a large auditorium and there are 6 displays. What goes on them? What "mode" is the room in? What microphones are live? Some things we can assume with the 90% use-case of the room, and some things we need input from the user to dictate what to do.
One of the most common requests frontend programmers will get is to add a button for X use case. Sure, you can do some action with five buttons, but could you collapse that into one button? Please, I promise I just need a shortcut for the common use-case and there aren't others that I haven't considered. And it's a matter of judgement whether this is actually better. Much of my time in design is spent making the system intuitive. Many times these systems will be used by someone who is using them for the first time, and over the course of a decade, statistically, someone will use the system without being trained and without someone to help. This means that buttons can't be scary. A button should not look as if it could be a destructive action without proper counterweights to prevent perceived destruction. Is there a modal that will pop up to prevent me from breaking everything? And if so, do I know that beforehand? This is mostly a solved problem in the web world, and I'm excited to bring these insights when I'm building custom software for a client, because they aren't used to it.
But sometimes, a frontend programmer has to say no, you're wrong, and the fact you have to do it my way is important. The only one of these new iOS features I think is worthwhile is that you can set a photo as your lock screen and blur the background to a custom color. I use this feature to have a picture of my beautiful wife holding a jar of pickles as my lock screen while omitting the fact it was taken at Walmart.
This opinion of mine extends to desktop, by the way. For consistency's sake I will specify that I don't like how configurable linux is. I use dwm for my desktop environment (which means I don't really have a DE, just a window manager), but only with much begrudging and using a slightly modified config.h by Natalie. The customization on linux for me is really just adding features that should exist, like having dwm show the time with dwmblocks. If Windows had good UX like iOS does, I would be in this exact same dilemma, but thankfully it's shit all around so I don't have to write a blog post about it.
I'm asking for FOSS android options because I'm interested, though it's a little bad faith to say that I want to switch. There is a list of features that I would require before switching that is probably not feasible.
First, apple doesn't track you like google play services does. One company is in the phone business and the other is in the advertising business. Natalie and I go back and forth on this, and I will absolutely admit that there is telemetry in iOS and perhaps some of it is personally identifiable. But with anything that google touches, that's literally all they do. All of its data that is useful for its advertising business will always be personally identifiable. How could it not be? It is always cringe to compare giga tech giant vs other giga tech giant like "my monopolistic company is better than yours!", but on the issue of privacy there just IS a winner. And you can't downgrade to a clamshell because the Chinese government is in the telecom networks. There has to be some degree of trust in the current market. There is the option of only giving your data to networks of open source or p2p softwares, but that's like the transition from twitter to Bluesky. One is objectively better for the planet, but it only works if the people you want to be on there, are.
Second, there are just so many UX wins on iOS that have nothing to do with ecosystem. I don't ever have to worry about battery drain. I don't have to worry about the 24gb of ram in my phone existing. I don't have to worry about apps running in the background. In fact, I don't ever have to close apps from the switcher thing. I'm sure this is a solved problem on android, but if so, why do they require 6000mAh batteries to get 1.5 days of life? Why do they require 16gb of ram? Why do qualcomm cpus lag behind apple's by 2-4 years? Why, in 2017 with the release of the iPhone X, did android copy iOS's home bar in the worst way possible? Why does nothing feel consistent? Does android have good password management yet? If so, that's actually a huge win, but I switched to an S21 when it released and my lastpass had to use a popup to insert passwords. As in, a modal, like Facebook Messenger (another horrible UX disaster that people love for some reason). On iOS, it's a feature of the keyboard.
Oh, that reminds me, a few years ago apple conceded and allowed custom keyboards on the platform. Have you ever seen someone use a non-default keyboard on iOS? No, because they are all just worse. There isn't any magic to it, there's no API that apple has that the others don't. They are just worse, hands down, and yet another example of customization being a pursuit of openness with zero benefit. I get apple's attitude change, and I get why they want to give these little breadcrumbs so people tell their android friends "actually we can customize stuff". The problem is that it will always fall short, because until it's something significant, like what android calls the launcher, or the notification screen, core parts of the frontend, it will always be less than on android and therefore fall flat to the imaginary person who has a problem.
I haven’t mentioned encrypted texting because of course there is RCS. But… oh, what’s this? End to end encryption isn’t a part of the RCS standard? The thing that every other messaging client, even Facebook Messenger, has figured out? I hate how the details don’t matter anymore. The RCS discourse has concluded now that Apple supports it (something they did begrudgingly, potentially because of the Chinese government, and way too late), but while that was going on it was extremely annoying that the assumption was that it was as secure as iMessage/WhatsApp/Signal. It does have TLS as a part of the spec, which is a start to prevent the Chinese from reading those texts. Why can’t the opposition get themselves together? Why can’t OSS and standards based solutions be as good as the closed ones? Oh no don’t tell me that the root cause is actually competition under capitalism…
Thanks for reading, this post was brought to you by a five hour flight in the middle seat on the worst airline I have ever flown. Natalie, I love you and I'll be home in an hour or two (though I will have to post this after I get home… wait a second, how do I get this long of a note off of my phone and onto my computer… why doesn't iOS have a decent filesystem…)
[1] After writing this, I went to edit control center to get rid of the extra two pages they shipped by default. One was for your connections and the other was for media. These provide zero value because there is no difference between long pressing the wifi symbol to show that page, and the same for media. And it is worse because with multiple pages, swiping down means you go to the next page instead of closing the control center. Removing these erroneous pages brought back the close-by-swiping-up functionality that I have 6 years of muscle memory for. With the pages, you have to tap the outer bounds of the control center to close it. Just horrible. If I had thought to remove them before making this post I might not have made it, so thanks apple for having a major UX flub that inspired this post.
Image credit: https://amansinghblog.wordpress.com/2015/02/19/bad-user-experience-week-5/
This has been a big year for me. I got a new job, new place, and my beautiful wife is here in the US.
A part of this new job is making UIs in a WYSIWYG editor that was made 20 years ago with icon packs from 5 years ago. I think it compiles down to some kind of HTML, but it's impossible to tell because it's all proprietary garbage. It's really difficult to make functional UIs in this because there's no JS to control the behavior of the components - actually, there are no components at all. I might do an entire post about surviving Crestron's VTPro, but that will be in my future A/V programming series.
I want to ramble and ramble about Crestron stuff but either a) my thoughts aren’t coherent enough to be in a well written blog post (I hold myself to a high standard!) or b) my thoughts are too well put together such that it’s a Code Tutorial and holy moly I don’t want to write another one of those it was really hard and I didn’t like it at all. Code stuff belongs in videos or books.
This post is just here to say hi. I haven’t felt like working on the site (go rewrite when??), and I haven’t had the time to do a full blog post. I’ve had a few ideas though, and I’ll try to push forward to actually write it. All of my ideas have been lost to me not writing them down…
I’d also like to have more specific posts. I kind of drift about sometimes or split the post in two. Seems like bad form.
Again, just saying hi (oh! and that I love my wife)
Have fun,
nathan
:)
If you wish to just have it without any knowledge (you will probably need to configure this), the full command I personally use is here. This is also only applicable to users of linux and alsa, though hopefully the demonstration of some of the concepts here will make it a resource of greater applicability.
arecord -D default -f S24_3LE -c 2 -r 48000 - | \
ffmpeg -c:a pcm_s24le -i - -af anlmdn=s=4 -c:a pcm_s32le -f wav - | \
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -f x11grab -s 1920x1080 -r 60 -i :0.0 \
-f v4l2 -input_format mjpeg -s 640x480 -c:v mjpeg_cuvid -i /dev/video0 \
-i - -f alsa -i looprec \
-filter_complex "[0:v][1:v]overlay=main_w-overlay_w-10:main_h-overlay_h-10[v];[2:a][3:a]amerge[a]" -map "[v]" -map "[a]" \
-c:v h264_nvenc -profile:v high -tune ll -preset p7 -b:v 6M -bufsize 3M -g 240 -c:a aac -b:a 128k -ar 44100 \
-f flv "rtmp://live.twitch.tv/app/live_xxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxx"
To make some sense of this I will start with the inputs. There are four principal ones which you will need to adjust as needed.
-f x11grab -s 1920x1080 -r 60 -i :0.0
If you don't use an X11 implementation like xorg, you can try using kmsgrab.
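A rough sketch of what that input could look like, following the pattern in the ffmpeg documentation (note kmsgrab usually needs root or CAP_SYS_ADMIN, and the captured DRM frames have to be mapped to a hardware device, here vaapi, before they can be filtered or encoded):

ffmpeg -f kmsgrab -i - \
  -vf 'hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12' \
  -c:v h264_vaapi output.mp4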
-f v4l2 -input_format mjpeg -s 640x480 -c:v mjpeg_cuvid -i /dev/video0
To check what formats and resolutions your camera is capable of outputting on your computer use
v4l2-ctl --list-devices
and use the first option it gives you, in my case:
[relue:~]> v4l2-ctl --list-devices
HD Web Camera: HD Web Camera (usb-0000:07:00.4-1.2):
/dev/video4
/dev/video5
/dev/media2
USB2.0 HD UVC WebCam: USB2.0 HD (usb-0000:08:00.0-1):
/dev/video0
/dev/video1
/dev/video2
/dev/video3
/dev/media0
/dev/media1
I would want to use either /dev/video4 or /dev/video0, depending on my choice of camera. To then determine the available resolutions, run
v4l2-ctl --device=/dev/video0 --list-formats-ext
I would recommend using a hardware accelerated decoder for the camera as cameras can dump a lot of raw yuv data which can overwhelm your cpu and reduce your framerates. Run
ffmpeg -codecs | grep $format
where $format is the camera format you chose above (mjpeg in my case). You will be looking for the decoders section, which would look something like this:
DEV.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_v4l2m2m h264_cuvid ) (encoders: libx264 libx264rgb h264_nvenc h264_v4l2m2m h264_vaapi nvenc nvenc_h264 )
If you do not have an nvidia graphics card, use the v4l2m2m decoders if they are packaged by your ffmpeg version. If you are ever missing anything from ffmpeg or your package maintainers are bad, there is the following github which allows you to create a statically linked binary in a chrooted environment with a neat little script.
arecord -D default -f S24_3LE -c 2 -r 48000 - | \
ffmpeg -c:a pcm_s24le -i - -af anlmdn=s=4 -c:a pcm_s32le -f wav - | \
ffmpeg ... -i - ...
As ffmpeg's internal sample formats only support 8/16/32/64 bit depths, I need to use arecord to initially record it. It also allows me to use a simple filter '-af anlmdn=s=4' for noise suppression specifically on the microphone, and then upsample (I'm actually reencoding; feel free to also downsample to 16 bit at your discretion) to a 32 bit depth. 'default' here is my default pcm stream from my alsa config; for a reminder, I have an article explaining that here. But it is an asymmetric dsnoop wrapper on my mic.
-f alsa -i looprec
Again refer to my article above on this as this is quite in depth.
That should be all for your personal configuring; this should now work on most major streaming platforms like twitch and youtube. Youtube has support for additional codecs like hevc and av1, and for personal streaming endpoints that number is practically limitless. For an easy to configure media streaming server I can recommend mediamtx.
Beyond that there are the filters:
-filter_complex "[0:v][1:v]overlay=main_w-overlay_w-10:main_h-overlay_h-10[v];[2:a][3:a]amerge[a]" \
-map "[v]" -map "[a]"
The first part, delimited by a semicolon, is the method I use for overlaying the camera over the screen recording. The 10 here specifies where to place the origin (bottom right corner) of the camera; I use 10 so that there is 10 pixels of padding, but this can be freely ignored. The next is amerge. You could in theory have as many inputs here as you like, simply add the stream identifiers beforehand (note that with more than two inputs you also need to tell the filter how many to expect, via amerge=inputs=N), like so:
[1:a][2:a][3:a][4:a]amerge=inputs=4[a]
The name '[a]' here is completely arbitrary. It is also not strictly required to specify the ':a/v' in the input identifiers if the input contains only a single audio stream; if there were multiple, you could use something like [2:v:1] to specify the second video stream of the third input (both indices are zero-based). When using filter graphs it is important to connect their outputs with the final output, which is the function of the maps.
-map "[v]" -map "[a]"
These specify the names of the new streams we made in the filter graph and map them to the output.
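Put together, a hypothetical three-microphone mix that passes the screen video from input 0 straight through would look like:

-filter_complex "[1:a][2:a][3:a]amerge=inputs=3[a]" -map 0:v -map "[a]"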
The following encoding options should look very familiar to those that use ffmpeg and can be found with little issue on the internet.
-c:v h264_nvenc -profile:v high -tune ll -preset p7 -b:v 6M -bufsize 3M -g 240 -c:a aac -b:a 128k -ar 44100
However, something which can be useful here when tuning for live streaming specifically is to add a buffer using -bufsize; a constant bit rate also allows for more consistent latency. If you aren't sure of the available options for a codec, use 'ffmpeg -h encoder=h264_nvenc' in this case.
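For example, pinning h264_nvenc to a constant bit rate could look like this (a sketch using the same numbers as my command; check 'ffmpeg -h encoder=h264_nvenc' for what your build supports):

-c:v h264_nvenc -rc cbr -b:v 6M -maxrate 6M -bufsize 3M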
The last part is the output:
-f flv "rtmp://live.twitch.tv/app/live_xxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxx"
flv is quite a restrictive container format and only allows h264 with aac, and only a single video and audio stream (with no subtitles either).
This is slightly improved using the -rtmp_live 1 flag, which was introduced in ffmpeg version 6.0 and allows for more codec compatibility, but still only single streams. For better support when using your own server, look into using rtsp or webrtc.
Some more boilerplate for hls, rtsp and srt is below:
-f hls -hls_time 10 -hls_list_size 4 -hls_flags delete_segments -hls_segment_filename "segment_%v_%03d.ts" -hls_base_url "http://tv.reluekiss.com:8888" "https://user:password@tv.reluekiss.com:8888/mystream"
-f rtsp "rtsp://user:password@tv.reluekiss.com:1735/mystream/mystream"
-f mpegts "srt://tv.reluekiss.com:1735?streamid=mystream:mystream:user:password&pkt_size=1316"
That is all x.
Hey sweetie, you’re on the way here and I am very excited.
I hope you had a safe flight. I’m sitting in the international arrivals section of the airport with a ton of lovely people cheering on their loved ones as they finally see them go through the big double doors.
Airports are work; no one would ever want to hang out here, but we show up anyways. We're looking forward to seeing our family and coworkers, not the $6 bag of chips. How could it be anything else? Of course they have to be designed for brutal efficiency (though not land efficiency). They try to make the stay here acceptable, but no matter what, it's going to be work (especially for the traveller).
Airports are diverse. Every group of people uses them, and that's nowhere more apparent than the international arrivals section. Every time I'm at the airport, I am reminded that there is a very large number of people who use them every day, whether I'm thinking about airports or not.
The screen tells me your flight is going through customs, and my eyes are wide open waiting for you to walk through the double doors.
I love you sweetie, I hope your stay here becomes what we've imagined it to be.
This is a really long blog post about me learning web stuff in Go. If you’re not also learning web stuff in Go, this probably isn’t going to be interesting. If you are though, please leave a comment. :)
Once again, I wanted to make a simple document and I turned it into a blog post. This was meant to be a message in the readme of bear-hub, but it got too long and I don’t think a readme is the right place for a story, so I’m going to finish writing it here.
Also, if you’re looking for the website in question, it’s not hosted yet. I’ll put in instructions in the readme to run the server/develop it yourself if you’d like. In the future I’m going to put it on something like bear-hub.reluekiss.com.
Doing this made me think “what was the other post that I rambled so hard I turned it into a blog post?” and it was the post where I wanted to start making videos. Going through my posts, I saw my Abstractions Essay, which is about this exact project! I totally forgot I wrote that, so rereading it was a treat. Let’s compare four months later, six months into the project. I’ve learned a whole lot and have basically been doing this one thing the whole time. I’m okay with putting this much time into one project because I’m learning! And it’s still interesting! And hopefully I can reuse the code for a potential rewrite of this very website! Even if all of the text of the code goes in the garbage, the logic and what’s necessary to do a proper website is something I have absolutely loved exploring.
When I made bear-hub, originally called no-magic-stack, I wrote a list of goals. A roadmap. A list of things to aspire to. Well, after I made it, I basically never looked at it again except to update what the project was doing and how to set it up. I'm going to rewrite the readme to be what it is in its current state and point to this post if anyone wants more insight into the history of the project (this will 100% just be for myself in the future, no-user-gang rise up). I feel like this project stuck to those goals very closely, actually, which is nice. But goals should only be written down if you actually care about revisiting them to make sure you don't go off course.
Auth has been such a pain point. Now I have the JWT claims passing through the middleware's r.Context() correctly, so I'm basically done except for all of the annoying parts. First there's password resets, but then I want to do fancy stuff like TOTP 2fa tokens, and after that oauth/passkeys, and then after that become an oauth provider. I think doing that is the final boss. Maybe API keys after that? But that's relatively harmless. I'm looking for ways to abstract the auth package to the most reasonable degree. I want it to be as easy as the Auth.js/auth0 people, more on that later.
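For a picture of what that pattern looks like, here's a minimal sketch. The names (claimsKey, parseToken) are hypothetical stand-ins, not the actual bear-hub code:

package main

import (
	"context"
	"net/http"
)

type ctxKey string

const claimsKey ctxKey = "claims"

// parseToken is a stand-in for real JWT verification and decoding.
func parseToken(raw string) (map[string]any, error) {
	return map[string]any{"sub": "yogibear61"}, nil
}

// withClaims reads the session cookie, verifies it, and stores the
// claims in r.Context() so every handler downstream can read them.
func withClaims(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		cookie, err := r.Cookie("session")
		if err != nil {
			next.ServeHTTP(w, r) // no cookie: anonymous request
			return
		}
		claims, err := parseToken(cookie.Value)
		if err != nil {
			http.Error(w, "invalid token", http.StatusUnauthorized)
			return
		}
		ctx := context.WithValue(r.Context(), claimsKey, claims)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if claims, ok := r.Context().Value(claimsKey).(map[string]any); ok {
			w.Write([]byte("hello, " + claims["sub"].(string)))
			return
		}
		w.Write([]byte("hello, stranger"))
	})
	http.ListenAndServe(":8080", withClaims(mux))
}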
I absolutely hate how many articles and tutorials there are online for software that does not pass the "am I, the author, willing to put this into a production service" test. With auth, there should not be handwaves in this material. Either it's golden or it's unusable, so tutorials that say "well in the real world you wouldn't actually want to do this" make their content useless.
Once I got up and running with htmx's websocket extension, and some copypasta'd chatbot gorilla/websocket handlers, it was off to the races. Mostly everything since then has been frontend stuff. I originally had an issue with getting the form data to the server, which took a couple days to figure out, but after that it was smooth sailing. Recently, I made the chat interface look pretty, because I thought that it wouldn't be very encouraging to work on something that visually looks like dogwater even if it's technically interesting. Eventually I'm going to have to figure out content moderation, which is more complicated than the current htmx websocket setup. Right now all it does is take the div from the response, and since the response is an hx-swap-oob, it just puts it in the dom. Say a moderator bans a user and deletes all of their chat messages. How does the client know to swap their messages with a "user has been banned" type of message? I think this requires some client-side js that I don't have yet.
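As an illustration, an out-of-band chat message could be rendered with a templ component shaped something like this (hypothetical names; the shape of the idea, not the project's actual markup):

templ ChatMessage(user, text string) {
	<div hx-swap-oob="beforeend:#chat-log">
		<p><b>{ user }</b>: { text }</p>
	</div>
}

When this fragment arrives over the websocket, htmx sees the hx-swap-oob attribute and appends it into the #chat-log element instead of swapping the usual target.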
Postgres was a big pain point. I wanted to make it work so much, and it does work! Use it! Leverage your database to do work that is too annoying to do in the application. Want to auto generate a uuid? Easy! Want to do migrations? Supabase has a really great migration situation that I’m starting to miss already. I moved off of Postgres because I don’t want the database to do work for me. Supabase prides itself on It’s Just Postgres, might as well benefit from them doing all of the hard parts. But that, to me, seems like I’m learning a platform instead of learning a technology. Plus, with sqlite, it’s super duper local. Supabase has local development in the form of servers that live on docker containers that you can host on your machine, but I couldn’t ever get it to work without shutting off my internet. Anyways, all of this to say that I’m happy with sqlite because I want to do the heavy lifting myself, and now I get to learn the fun of hand-rolled migrations (or not, I haven’t gotten that far yet. I hear there are good tools for this).
What you'll hear online is that htmx makes it really easy to send html from the server and render it in the correct place. This is true, but it's not the whole story. When you have state in your application, you have to keep track of it somehow. What htmx really, really wants you to do is to move almost all of the logic to the server instead of the client. Whenever the client wants a page, you make a new state given the cookies, query params, headers, form submission fields, etc, and you render a new html document to send to them. Importantly, all of the things I just listed are browser/http primitives that are here to stay. You leverage what browsers do best so your server can do what it does best: work. Once I unlocked this idea of having a ClientState/AuthState struct, and being able to generate it based on what the client gave me, it really helped. You don't need to negotiate with the client about what it knows. It knows nothing. I saw someone online say (paraphrasing) "A browser doesn't need to know how a calendar works. It only needs to know how to render what the server gives it. The logic of the calendar is on the server; the representation of the calendar - that's what goes to the client". And I agree with this so much. The fact that javascript has gotten so good has been bad, because it means that the browser does more work than it was ever meant to. Stop making calendars on the browser, that's not what they were made for. Keeping business logic away from the browser - that's what htmx is good at.
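Concretely, the pattern could look something like this sketch (hypothetical fields, continuing the claimsKey idea from the middleware sketch above):

// ClientState is rebuilt on every request from browser primitives:
// cookies, headers, query params, plus whatever auth middleware
// already put in the context.
type ClientState struct {
	LoggedIn bool
	Username string
	Theme    string
}

func newClientState(r *http.Request) ClientState {
	s := ClientState{Theme: "light"}
	if c, err := r.Cookie("theme"); err == nil {
		s.Theme = c.Value // the browser carried this state for us
	}
	if claims, ok := r.Context().Value(claimsKey).(map[string]any); ok {
		s.LoggedIn = true
		s.Username, _ = claims["sub"].(string)
	}
	return s
}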
Contrary to what I just said, I've thought about doing some SPA-level stuff with htmx. The idea is, instead of doing traditional routing where on a GET /profile/yogibear61 request the Go server responds with a fully complete html document, what really happens is I render, say, just the <body> and htmx hoists the response into the body. This way there's not a full page reload with a potential flash of unstyled content. The header, footer, etc, continue to exist, they aren't destroyed, it's just that the <body> tag gets replaced. This I think would help make the page feel faster and whatnot, but I'm not sure how legal that is in my mind yet.
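In htmx terms this could be as simple as a link targeting the body (a sketch, not something the project does yet):

<a href="/profile/yogibear61" hx-get="/profile/yogibear61" hx-target="body" hx-push-url="true">yogibear61</a>

The hx-target swaps the response into <body>, and hx-push-url keeps the address bar and back button honest.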
Quick note on something I said in my Abstractions Essay. Htmx does have some security implications, but only if you have massive skill issues. You do have to think about sanitizing user input for html, because if you're telling the browser "render this div", and that div contains a chat message that says <script>alert("pwned")</script>, and it actually runs, yeah, you have some security implications. Just sanitize your inputs; on our blog site we do this with the comments and it's literally one function.
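In Go, the one function can literally be html.EscapeString from the standard library (what our comments do is conceptually this, give or take):

package main

import (
	"fmt"
	"html"
)

func main() {
	msg := `<script>alert("pwned")</script>`
	// Escaped, the message renders as inert text instead of executing.
	fmt.Println(html.EscapeString(msg))
	// Output: &lt;script&gt;alert(&#34;pwned&#34;)&lt;/script&gt;
}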
Sqlc is really, really good. Upon the transition from postgres to sqlite3, I tried to do the classic abstraction thing where the application calls my db package, and that db package wraps both the sqlite3 and postgres calls. The application doesn't know whether it's using postgres or sqlite3, it just gives the data to db, and db figures out what to do with it. The problem otherwise is that postgres and sqlite3 have different types. For example, postgres has a proper uuid type. This means that sqlc gives that field a uuid.UUID type. But sqlite3 does not have such a type, it only has text. So, my db package would wrap both of these calls, the application would give whatever is most convenient (in this case, a uuid.New()), and db would disperse this information to what's correct for the database. The problem with this is the synchronization and return types. Basically, the only way to make it all work is if I made everything in postgres whatever sqlite3 can handle, and then from which database should I return? What if I was writing to them both at the same time? This issue is solvable, but as I was doing it I realized it required an amount of effort that wasn't worth it. If you're curious, you can go here to figure out how I was trying to crack this egg. It is not pretty, but you're going to be looking in src/db.
Templ is such an insane bit. The tooling and compilation have only gotten better since I started using it. I have one merged documentation PR and another PR which I won't link that is a rewrite of a static site generator someone wrote for templ.guide. I absolutely love the project. It brings React's functional component model to Go. That's it, that's all you need to know. If you've ever tried to use Python's Jinja or Go's html/template for html templating, it's an insane breath of fresh air. You literally pass Go structs, slices, whatever you want, and as long as it goes into a string before it gets rendered, you're golden. The PR I linked was documentation on how to get tailwind's LSP, html's LSP, htmx's LSP, and templ's LSP all working within Neovim in .templ files. The fact that that works is insane, and it's getting better and better. Templ doesn't have Hooks, or RSCs, or any other real React primitives, so comparing the two is a little bit disingenuous. It's not a React replacement, it's an html templating replacement. If you're someone who uses React for html templating, you are just simply doing it wrong. There's no need to bring in all of React for this job, and that's why templ and htmx work so well together. You have the templating you need, and the htmx to do the client side shenanigans. State? Like I said before, the client holds no state other than the tools that are built into http. useEffect? Wake up darli… what are you saying? Rendering lifecycles? Hydration errors? Do you know what year this is? You're speaking nonsense.
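If you haven't seen it, a templ component is just a Go-ish function that returns renderable html; a toy example of my own, not from templ's docs:

templ Greeting(name string, langs []string) {
	<div>
		<p>Hello, { name }!</p>
		<ul>
			for _, l := range langs {
				<li>{ l }</li>
			}
		</ul>
	</div>
}

You pass plain Go values in, every expression gets escaped on the way out, and calling it from a handler is just Greeting("nathan", langs).Render(r.Context(), w).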
Like I mentioned before, I want to make a package for auth in Go web apps. But it's not easy. If you look throughout the code that calls on the stuff from src/auth, it's not as simple as taking on auth from on high. For example, I have it so the SignUp/SignIn struct has a method of signature RenderErrs() []string, so when I'm rendering the html, I pass that struct into the templ templating, and within the template, if there is anything within any of the errors, it will show them in little boxes on the page. If SignUp.UsernameErr exists, it will make the border color of the username box red. How could you make that into a package if you didn't own or understand the contents of the struct? The jwt boilerplate I think is production-ready, so that will probably be the first part in my auth series, but that one file alone probably isn't worth installing. Just copy it! If you look at it and you think it's good, copy it now! I like it, and if you don't then please cut an issue because I want to get a second opinion. One cool gimmick I'm going to pull with my auth series is that when I feel comfortable enough to share it, I'm going to pay real cash-money for a web security professional to do a code tour and pentest on my auth system. I want to get real insight on what I personally overlooked, and what the common mistakes are. This step is important because the number one reason I hear online of why someone should never, ever roll their own auth is because there are "so many" footguns you can find yourself in, and you should really leave auth to people who dedicate their entire company towards security. But, to me, if you own your auth, and it Just Works, then you have it forever. You can microservice the heck out of it, and as long as you don't have severe skill issues, it will still Just Work. Auth is a concept, so if you have it done correctly then it will Just Work forever. That is, unless passwords really become out of fashion and you have to move to passkeys, but I'll have that covered too!
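For reference, the shape I'm describing is roughly this (field names are illustrative, not the actual src/auth structs):

type SignUp struct {
	Username    string
	UsernameErr string
	PasswordErr string
}

// RenderErrs collects the non-empty error strings so the template
// can loop over them and draw the little boxes.
func (s SignUp) RenderErrs() []string {
	var errs []string
	for _, e := range []string{s.UsernameErr, s.PasswordErr} {
		if e != "" {
			errs = append(errs, e)
		}
	}
	return errs
}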
I'm more excited for this project than I was working on playlist-powertools, a webapp that at least had the potential of having more than one user, and I was newer at programming then! Learning concepts instead of frameworks has been a lot of fun. I do want to say, however, that the chat service is something I intend on making a real thing that people use. The tagline: "A self-hostable twitch chat clone". I originally thought of making the chat backend an irc server, but translating between irc and http (which I would need to do to avoid using a javascript irc client) seemed unnecessary. I'll see where that part of the project goes.
Thanks for reading. I didn’t want this to be my longest blog post yet, and here we are. I get really excited with this stuff because now I have multiple avenues of motivation, and this website is one of them.
Watch this video of Bill Gates talk with David Letterman in 1995. Letterman wants to be sold on this whole “internet” business and, I believe, is earnestly listening. Without any data to back this up, I think he’s voicing the concerns of the unconvinced majority of Americans who constantly hear about how the internet is going to change the world, but can’t find a way to fit a computer into their daily life. Gates makes a joke about this, that Letterman “has too many assistants” for the internet to be useful. Media diet? Who is going to take someone seriously when they say “imagine all of the new media you’re going to be able to consume with this technology!” Has anyone at any point of time wanted more media to consume? That seems like a weird sell, even for someone in 1995. Productivity could be a proposition, but if you’re not working in something that a computer could obviously fill a void, the demand isn’t there yet.
This is where AI people find themselves now, and where crypto people were a few years ago (before everyone gave up trying to make a use-case that doesn’t exist). You simply cannot sell a dream. Bill Gates didn’t convince Letterman in that interview, and nor should he have. AI people will look at this conversation and think “How silly is David Letterman in this video! How could he not see the future of the internet? It was 1995! It was already there, just waiting for the likes of Amazon and Netflix to change the American economy forever!” In reality, though, that is the only reasonable opinion. If you don’t see a value add for a technology, it’s not on you to “understand” it more, it’s on the technology to make it relevant to you if they want your adoption.
AI people are stuck right now with this reality. And what’s worse is that they have very little to show for their fruits. ChatGPT is a great technology and has been incredibly useful for me personally learning all this programming stuff. If I were to add up the amount of time the chatbot has saved by giving me good analysis on specific concepts and tutorials, and subtract the amount of time I’ve wasted by investigating its hallucinations, it’s undoubtedly a net positive. My favorite thing is to have it explain “like I’m coming from Python”, or something to that effect, so its explanation is tailored to what I already know. This tool is not something that is easily replaced by reading the documentation because of this tailoring. I got so much use out of the chatbot that I paid the $20/month since GPT-4 came out up until a couple months ago when my skill issues subsided enough that I could deal with the free tier.
I bring that up to say that I'm not an AI doubter in the sense that my friend was, who, when she learned that GPT-4 got a passing grade on the bar exam, said "So what, if I had access to the entire internet during a test I could pass it too". This is the threshold I think some of the early AI people were trying to convince us had been crossed. It's not just that the thing can do X provided much of the information from the entire internet. It's that… holy moly! We just found a way to wrangle much of the knowledge of the internet into one program! I see a future where this is much more useful than it is now. Maybe the chatbot can gather insights into the information that we gave it that no human would look for. Maybe the machine learning black-box part actually makes it understand the underlying data instead of being "a very fancy auto-predict" (I hate this response the most). Unlocking this could be huge.
Crypto is the same way. I imagine crypto people 10 years ago were yelling at their Lettermans who would say "What do you mean I have this digital money? I already have real money that people actually care about. Even if places would take this digital money, the fact that it's so volatile doesn't make it a useful currency". After 15 years, that argument basically won. After the NFT boom and bust, crypto is nowhere to be found except for places where you don't want big man watching. Here's the crypto sell I say to the Lettermans in my life: It's not about money, or about randomly generated pictures of monkeys, it's about the fact that we figured out how to have scarcity exist on the internet. Without crypto, any 1's and 0's you send can be infinitely replicated or transformed and there's no sense of ownership except by the legal system. With crypto, there is now a concept of digital ownership. Is that useful? Could the title to your house be put on the blockchain to circumvent the banking system with their fees and monopoly on accreditation? We don't know, we've looked far and wide and it hasn't worked out to be useful enough for adoption in any public setting. But, it's cool.
As I said, no one is trying to sell crypto anymore. The hype is gone, the refs have decided it’s a bust. No hope in the foreseeable future except if bitcoin hits 6 digits and the eighth round of scam artists come out of the woodworks. I don’t think there’s any way that AI will fulfill its lofty promises. OpenAI has one really good product. One product, a new internet does not make. We’ve gotten so used to platforms and opaque software that we seem to have forgotten that the internet was built on so much ridiculously open-source software it would make Sam Altman of “Open”AI’s head spin. The industry has forgotten this fact and it’s very sad. I have no problem with a company making software, not open-sourcing it, and making a butt load of money off of it. Pop off, make your bag, have fun. But if you’re trying to enter the next era of computing off of something as proprietary as Facebook or Fortnite, it’s not going to work.
(speaking of, if anyone has some insights on why open-source never made its way into gaming related software until things like Godot, please leave a comment).
Not only do these LLMs not have standards to be graded against, you kind of can't make any! They aren't deterministic; you aren't guaranteed the same output from the same input. They are rushing to build a new internet and they have no idea what it is. They are trying to sell Letterman, but when Letterman asks "But what is it really?", they give wild speculation that requires technology that doesn't exist yet. "But we've come so far in just a few years, surely we'll catch up to our self-imposed expectations!" Why set them in the first place? Why are you doing this to yourselves? I thought VC money was expensive these days. They're rushing to push LLMs to Lettermans, but this time, there is no value. There is an annoying pop-up on Google that brings down the results of your search with LLM garbage. Every time I've seen an "AI Overview" on Google, it is bare minimum worse than the first link. And how could it not be? Automatically running a GPT-3.5-quality query on some significant number of Google searches would be so much compute. Maybe they have enough hardware, but then it's an energy question. Those LLM queries aren't cheap compared to a typical search. You might notice the lack of graphs and statistics, but I don't see how I'm wrong. All I'll say is that they either 100x the quality of their LLMs, or they will recoil in embarrassment, subject to clowning for the next decade on how awful this era of Google is.
Would Google search be better as a conversation? Again, no statistics, but some huge percentage of Google searches are not complicated enough for an LLM to be useful. If I want to order takeout, go to my movie theater's website, go out and do any kind of planning, I Google the company. Most times, I could have just gone to "texasroadhouse.com", but instead of taking the risk of going to a domain squatter who takes advantage of this naivete, I'll Google it and click on the top link. And sometimes I'll do this even for websites that I've visited a hundred times. A Google search is so cheap in my mind that it's not a problem to do an extra two clicks for a guarantee of correctness. The Gates of the world might say "Well you're used to using Google in that way because of its limited functionality, when it can do more you will do more with it". And I agree! Like I said, I paid OpenAI $20/month for around a year, I'm well aware of the potential value add… for complicated questions that require a chat experience. These AI Overviews shouldn't be in Google by default because most of the time I don't want an overview - I want a resource. And there's no amount of algorithm shenanigans that will be able to infer what I want. I don't go to ChatGPT for the height of the Empire State because it's not the right tool for the job. Google should have a separate area that doesn't distract from links for this task. I imagine Google Bard is a good chatbot, I haven't tried it out because I doubt its free tier is better than GPT-3.5, but that's the appropriate area for a chatbot to exist. Its own space. I'm not sure how Bing settled this. Did they distract from their links with AI stuff? Did it make a chatbot query and interrupt the results with whatever it said? I'm not sure, I never used Bing in a serious enough way, even after the initial hype, to remember.
The first question (put last in this rambly post) is "Do we want this in the most idealized form?" Not to be a Letterman, but I can't understand how talking to a computer in natural language is going to be better than what we've devised in the last 40 years of user interface design. Obviously I won't use a chatbot to interface with my computer; I want as few things as possible between me and what I want my computer to do. But take someone who wants to not manage their windows, not manage settings, their filesystem, they just want their computer to work. Maybe they use their computer in the bare-minimum case: content consumption. Would consuming content be better without using a mouse or keyboard? How would you correct errors from the AI? Talk to it again? What about when the AI understood what you said, but not what you meant? And mind you, these aren't searches for internet content, this is basic manipulation of your computer. These keyboards are faster than speaking because they are buttons! Real deal, tactile, do-something-instantly buttons! Does anyone walk around with laptops where the keyboard is a touch screen like your phone is? No! Because that's awful! Do people who write for a living do so with a voice-to-text prompt? No! Even if they had an editor by their side helping the computerized scribe correct errors, the writer would still be frustrated because they now have something in between what they want to do and what they are doing. It's an unnecessary complication. Unless, of course, it is necessary. Accessibility could be a big win for AI - from transcribing poorly written websites & software into something that you can understand, or by being exactly what Sam Altman wants: everyone talking to their computers in natural language. That would be a great win for these people! But no, we aren't satisfied with an incredibly complicated LLM like GPT-4 that makes a bajillion dollars a year, or with finally making most software accessible to people who have been overlooked by developers, or with giving people links to the resources they were asking for. We have to make a new internet, off of a platform that no one understands, that no outsiders can contribute to, and without a faint definition of what data comprises these LLMs.
This was my AI rant, hopefully I have expressed enough opinions to either eat them in 5 years or be a vindicated gigabrain.
Footnote ramble:
A month and no post. Not the best. I’ve been busy! I have a new job situation that I’ve had to spend a lot more time at than I’ve been used to, I’ve been working on my side projects, and I’m just simply afraid getting out of my step with writing compounds itself. No more! A post, to get back into things.
A question: Why does everyone hate their neighbors?
It's something I've noticed, where, in a large organization, people don't like the other shifts when talking among their shift. And if they are talking to the other shift, well, they don't like the people that work in the other office. And if they're talking to the people in the other office, then they don't like the people in the other building. "Why would someone write this document this way? It's horrendous!" a worker says to their roaring companions. Then, the next day, when the author is in the room to explain themselves, it's all "Oh, right, yeah, I'm sure there was something about that", apologies, and "no problem"s flying around. It's insincere, and it makes me question whether my friends say the same thing about me when I change shifts, or offices, or buildings.
Hey Natalie, this was a lot of fun to write. Please don’t let me wait another month to write again :)
In the eternal quest for knowledge acquisition, people will inevitably be led to programming and computer engineering, due to the large part it plays in so many of our lives. As a person broadly interested in mathematics, I find the concepts in computer science very intriguing for their application of many of the concepts I find familiar. There are many interesting languages out there, and many more functional ones, but for its predominance and its teaching of core principles I landed on learning C. As a quick aside for those of a mathematical inclination, the book Structure and Interpretation of Computer Programs, or colloquially 'The wizard book', gives good reasoning for a lot of common programming practices that often go unsaid. Aside from its ubiquity, I personally run quite a bit of software which is written in C, and for my comprehension it would be very useful to learn more about it.
This is where I am currently posting my attempts at the questions from the C programming book by Kernighan and Ritchie, as well as some leetcode problems. Not all problems will be in a complete state or well commented, though in future I will endeavour to do so, for the benefit of myself and others.
In the last couple of decades there have been a number of languages vying for the spot of 'the main language in systems programming', with varying success. The current largest contenders, and forgive me for any I miss, are go, rust, zig and C++. All of these have varying degrees of implementation but also different paradigms, with go and zig being more similar to C in their simplicity. Though it is rust and C++ with the most credit: the former has had support added to the Linux kernel, which is the premier bed for systems programming, and C++ is the foremost language used in video games. That second point may seem odd in a discussion of systems programming; however, it is an industry with a very large impact on common practices in programming.
However, even with all these new contenders, C still remains the language used by computers as well as the de facto ABI between different programming languages, even with all of its problems.
Regardless of the 'politics' that surround the language, it still teaches many of the core principles within computing, with some parts, such as manual memory management in the form of allocation and pointers, often removed in other languages for ease of use or to minimise the problems that can arise from bad practices. As mentioned before, I use many programs written in C, and as such interfacing with them and my system as a whole comes naturally when writing in C.
In any case, dear readers, I wish you all a very good morning, afternoon or evening. o/
I apologise that there will be no Hopf fibration in this post, but maybe in the future :3
Exercise 0.0.6 (a). Let $X$ be the subspace of $\mathbb{R}^2$ consisting of the horizontal segment $[0,1]\times\{0\}$ together with the vertical segments $\{r\}\times[0,1-r]$ for $r$ a rational number in $[0,1]$. Show that $X$ deformation retracts to any point in the segment $[0,1]\times\{0\}$, but not to any other point.
Solution. Let $X$ be the subspace of $\mathbb{R}^2$ shown on the left of figure 1, consisting of the horizontal segment $[0,1]\times\{0\}$ together with the vertical segments $\{r\}\times[0,1-r]$. First note that the segment $[0,1]\times\{0\}$ deformation retracts onto any point $p = (a,0)$ by way of the straight line homotopy $f_t$, which fixes $p$ and sweeps the intervals $[0,a]$ and $[a,1]$ towards it.
Notice also that $X$ deformation retracts onto the segment $[0,1]\times\{0\}$ via a similar family of maps $g_t$, which slides each vertical segment $\{r\}\times[0,1-r]$ down to the point $(r,0)$, continuously with respect to $t$. Using the maps $f_t$ and $g_t$ we can create a composition $h_t$ which performs the two in succession.
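To make the two stages concrete, here is one possible choice of formulas (a sketch of my own; any parametrisation with the same endpoints works just as well):

$$ g_t(x,y) = (x,\,(1-t)y), \qquad f_t(x,0) = ((1-t)x + ta,\,0), $$

$$ h_t = \begin{cases} g_{2t}, & 0 \le t \le \tfrac{1}{2}, \\ f_{2t-1} \circ g_1, & \tfrac{1}{2} \le t \le 1. \end{cases} $$

Then $h_0 = \mathrm{id}_X$, $h_1(X) = \{p\}$, and $h_t(p) = p$ for all $t$, which is exactly a deformation retraction of $X$ onto $p$; continuity at $t = \tfrac{1}{2}$ holds since $f_0 \circ g_1 = g_1$.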
To show that $X$ fails to deformation retract onto any point except those on the segment $[0,1]\times\{0\}$, suppose that there does exist such a point $p$ for which a deformation retraction exists. It follows that every neighbourhood $U$ of $p$ contains a neighbourhood $V \subset U$ of $p$ for which the inclusion $V \hookrightarrow U$ is null-homotopic.
To show that $X$ is path connected we do a case analysis on two points $x, y \in X$. If both lie on $[0,1]\times\{0\}$ the result is obvious; if $x$ lies on a vertical segment and $y$ on the horizontal one, then travelling down the vertical segment and along the horizontal gives a connecting path; and if $x$ and $y$ both lie on vertical segments, then concatenating two such paths also connects them.
A small neighbourhood $U$ of a point $p$ off the horizontal segment, however, is clearly not path-connected, and neither is any neighbourhood $V \subset U$ of $p$: $U$ can be taken to be a ball in $\mathbb{R}^2$ which is disjoint from $[0,1]\times\{0\}$ and which intersects an infinite number of the vertical line segments, all of which are disjoint from one another. Thus the points of any $V \subset U$ containing $p$ lie in infinitely many different path-components of $U$, whereas the null-homotopy would force $V$ into a single path-component of $U$. Hence no such deformation retraction exists.
(b). Let $Y$ be the subspace of $\mathbb{R}^2$ shown in the rightmost part of figure 1, consisting of a union of an infinite number of copies of $X$ arranged as above. Show that $Y$ is contractible but does not deformation retract onto any point.
Solution.
(c). Let $Z$ be the zigzag subspace of $Y$ homeomorphic to $\mathbb{R}$ indicated by the heavier line. Show that there is a deformation retraction in the weak sense of $Y$ onto $Z$.
Solution. If $Y$ deformation retracted onto $Z$ in the strong sense, then composing this with the strong deformation retraction of $Z \cong \mathbb{R}$ onto a point would deformation retract $Y$ onto a point, contradicting the result from (b) that $Y$ cannot deformation retract to a point. Therefore the contraction of $Y$ to $Z$ is instead a (weak) deformation retraction: we let $f_t$ be a family of maps with $f_0 = \mathrm{id}_Y$ which at time $t = 1$ carries each copy of $X$ into the zigzag $Z$. The maps need only keep $Z$ inside itself ($f_t(Z) \subset Z$) rather than fixing it pointwise, which is why the retraction is only weak.
I may put my attempt at part (b) on here at some point, but I got quite stuck, so for now you can look at some other posts. These Hatcher ones take quite a bit more effort on my part to finish, and even if they aren't particularly popular I hope someone can glean something useful from them. I looked ahead and the following question didn't seem too bad, so I will just put it here as an addendum.
Exercise 0.0.14. Given positive integers $v$, $e$, and $f$ satisfying $v - e + f = 2$, construct a cell structure on $S^2$ having $v$ 0-cells, $e$ 1-cells, and $f$ 2-cells.
Solution. We do induction on $v + e + f$. Notice that $v$ is at least 1, since every cell complex needs a 0-cell, and $f$ is at least 1 because $S^2$ is of dimension 2.
I never mentioned it in my earlier posts, but Hatcher's books are all freely available on his website; this is the link to his algebraic topology book, which I am currently following.
I dislike the internet.
Before I elaborate I must explain that on the internet it's very easy to build a small garden for yourself which you very rarely venture out of. I try my best to go on internet adventures down new avenues (I promise they will be documented in links one day), just in case I may come across a treasure. This thought process, too, has now unfortunately become rare, as many are used to having their consumption of media be regulated by others. TV and radio were the original sin when it came to unabashed corporatism, but now I must say goodbye to any vestige of the internet from when it was run by your local nerd.
With the migration of more and more communities into walled gardens, an inertia is created which is very difficult to repel, leaving those that would rather do away with them stuck regardless of any abuses conducted by the platform, whether that be at work or school, with friends, online or in person. Every tool is an obfuscated and proprietary GUI on top of something that has existed for 20 years or more. For example, a friend of mine who recently began exploring new software came across the age-old question of "how does one share files with one another??" These days I'm a firm believer in syncthing and FTP for such things. But said friend showed me a website which would detect others on your local network who were also on the site and establish an FTP server and client between them. Nothing extraordinary, but it makes you think. The spot we are in now is, of course, a culmination of many factors, and I would like to at least address a few pressing ones.
One is mobile phones - due to their small size, it is prohibitively difficult to do anything serious on them. And hence something as seemingly simple as sending files over an FTP server now requires a website with a not insignificant amount of javascript, because for interactions mediated by touch it is the only logical avenue for design.
Another is that the tools for editing software and interacting with code are sorely missing from some of the biggest operating systems that people use. I won't address phones in this case as that is already a lost cause, but in the desktop and laptop market, Windows and macOS dominate. However, neither has any built-in toolchain to actually interact with code. Both require you to download development environments, which means one must first access the internet before even being able to compile code, which is completely backwards. This abstraction away from the core principles of computers is one of the biggest leading factors in tech illiteracy, in my neither professional nor peer reviewed opinion.
The third is the internet itself, which I have touched on briefly in both previous paragraphs. For a decentralised system it has become very much not so on every level: DNS resolution and certificate authorities, an oligopoly. Messaging (check out the IRC) and file sharing, an oligopoly. And for particular forms of media, monopolies. But even so, this is almost definitely exacerbated by modern website design. Many are of the opinion that one should always account for the lowest common denominator, but if you continue to only ever account for the lowest common denominator, it teaches people that they don't need to learn and be in control of their devices. Which will mean they only ever become less literate and more reliant on the powers that be to save them from themselves - which those powers have no incentive to do. If you were to look merely 20 to 30 years ago, one would need to set up their modem to allow for access to the internet and then dial up their provider, and any technical firmware or driver issue would require a little bit of thought. Even these very small things meant people had some sense of mastery over the devices they use, and probably more common sense, rather than replacing the entire device at the first sign of trouble.
If I could change things at the press of a button and repeal the decades of illiteracy and oligarchic control, I would do so. But until that point, try to engage with the internet with vigour, and embrace anonymity, decentralisation, technical literacy, non-commercialisation and freedom of speech (these are taken from here, who has some thoughts of their own on this very topic).
Learning can be a very hard, but also very rewarding, process, and for a device I know you the reader use every day, you could do with knowing a little more about it.
This post is about my experience with dreams - I’ll try to keep my dream anecdotes to a minimum because let’s face it - other people’s dreams are not interesting. Towards the end I’ll put a website update (a lot of little things!)
Active dreaming was a superpower when I was a child. I could fly, manipulate things, talk to people, work things out, ignore my problems. It was a proper escape that I had a good chance of achieving in any given week or so. I remember a dream where my alarm was going off while I was in my old home in South Carolina. Desperate to not miss school, I figured out how to concentrate just enough to ever so slightly move my real-life shoulder and make it to the bus on time. This skill would prove immeasurably useful for the rest of my life. The ability to exit a dream whenever I figured out that I was in one would save me from countless hours of nightmares and generally boring dreams.
Do you ever have boring dreams? Usually it happens in an instant - so quick that it's very much possible that I was going to wake up anyway and saying "I'm bored" was my brain's convenient excuse for waking up prematurely. I've woken up laughing at what just happened in my dreams. Sometimes my dream makes an argument at me. Like a meditation state where my psyche wanted me to consider something that an awake, sober brain would have never thought of. About twice a month, I get sleep paralysis. Data on this is a bit fudged because I deliberately stopped keeping track of my dreams, good or bad. There was a long time where the bad outweighed the good.
As a kid, I remember Vsauce and the rest of the science youtube people, as well as my friends at school, talking about lucid dreaming. I never really perfected the on-demand lucid dream, but I had plenty of them without any effort - it seemed a waste of time to try. Remembering dreams, though, was something I cared about. There was so much life in them, so I would write them down to remember them. You forget your dreams because your brain doesn't find them worthwhile - well, if you write them down and make an exercise of remembering them, your brain says "Oh, this is important to us, alright, let's not dump it 5 minutes after waking up".
So, that's what I did for a while. I don't remember how I kept track of them, but I remember making an effort to write them down in my phone or a notepad whenever an okay one rolled around.
Then, the nightmares started to creep up more and more. Or, not even nightmares, but a complicated drama that would make me wake up in a tizzy. I didn't like these, so I made an effort to forget the dreams - no more logging them, no more analysis, no more stress. This has worked for the most part and is why I say that dreams are overrated. I've found that I sleep better when I wake up unencumbered by the story that unfolded during the night.
The paralysis part I don't think changed that much, but it certainly helps when the scope is smaller. Most of my paralysis dreams are without any sight, just a voice and some explanation as to why I can't move. I got these when I was a child too. Most traumatically, the narrator in my dream would lock my eyes onto some object - something that is innocent but telepathically tells me how evil it is, how it wants to do this and that. Obviously you don't control yourself in most dreams, but to be told under no uncertain terms that you can't move or look away from something, with an accompanying sense of doom, it's just too much. There is probably a normal distribution of active dreaming, where in the middle are most people, who are fine, and on the left are the people who wonder what all the fuss is about. I'm convinced I'm on the right side of that distribution - more dreams and more dramatic dreams than most people.
I would happily go to the lesser-dream side of the spectrum any day of the week. Sleep is for sleep, and the best part about sleep is feeling well rested. Even my most fun dreams were only fun for less than an hour after I woke up. After that, I go about my day and live my life. Dreams don't help me in any way, except for one.
The dream that had an argument at me was when I was deciding on how much my family meant to me. I hadn't talked to my family in a while, I had ongoing fights with some, contempt for others, and I was considering going cold turkey on them. This dream showed me a nice day with my extended family, and said "This is what you can lose if you don't keep up with your relationships. You can't expect everyone to hang by your side after you allow the relationships to wither away." That's something I'm grateful for, but boy is there so much bad stuff. I was almost late to work just the other day because I had four hours to sleep (skill issue on my part, for sure), and I had a nightmare in the middle of it. After a nightmare, I'll just distract myself with some media until I'm no longer scared of the nightmare continuing, but it's a bit of time and lost precious REM.
The paralysis dreams are, like I said, about twice a month, but they don't bother me too much. Many of the paralysis dreams are fine? They don't bother me, I just relax like a Chinese finger trap and wake up. The more you freak out during one, the more anxious you become, making the whole situation worse.
If you’re someone who has frequent nightmares, consider the fact that alcohol consumption makes nightmares much more common. Alcohol is awful for you in so many ways, and sleep quality is one of them. I don’t drink very often, and drunk dreams are iirc kinda fun for me, but maximizing dreams is the last thing in the world I care about.
When I get into bed, I just want some rest. No games, no flying, I want to wake up refreshed and ready to tackle the day.
The site now has a couple new pages, an updated (read: stolen from Natalie's side) header, and the colors on the home page are cooler now. The best colors for the home page are grey and a different shade of grey, but doing that looks kinda boring so we're not doing that. There is the irc page for showing you how to get to the IRC server, and the js page for telling you where in our site has JS and what disabling JS would mean for you. On the tv page, Natalie figured out how to get the HLS stream's TLS certs to be properly signed, so there's no question about the authenticity of the stream you see. The site is still open source, at the updated /gh/ tag in the header, so give a looksy and tell me what you think. We did a lot of refactoring today to make the layouts easier to work with and more Natalie-proof (love you sweetie). Also the Meta.astro was fixed so that when you link it in discord or twitter or wherever, it shows a nice thumbnail if it's a non-blog post (it looks so cool! I'm really happy that Natalie did that, it's really cool looking and she's really smart and I love her a lot a lot), as well as the title and description of what's going on. I wanted to add a checkbox where if you clicked it, it would hide the content on the page so you could appreciate Natalie's really great choice in backgrounds, but I couldn't get it to work. Anyways, I hope you like our little site :)
With the adoption of AV1 encoding taking place on both YouTube and potentially Twitch, two platforms with vastly different interests and performance metrics, the question arises of which video encoders one should use. To highlight: YouTube has an interest in displaying as high a compression ratio as possible whilst still adhering to good PSNR (30+) and VMAF (90+) fidelity. Contrast this with Twitch, who much prefer the usage of constant bitrates as opposed to variable, wishing to minimise latency and packet drops, which can arise when there is a particularly still frame with little bitrate when set to VBR.
For the meat of it though, we should discuss why, if AV1 is so good, this is the case. The paper by Esakki et al. did an analysis of VVC, H.265, AV1 and VP9, with the main metrics used being the ones I have already mentioned. My one gripe with the paper is the focus on video streaming; however, as this is the most common method of video content delivery, this is not entirely surprising. But it should be noted that some of these encoders, namely VP9 and AV1, perform significantly better than others at very low bitrates, which won't be as representative in this data.
Table 7(a) HEVC 1920x1080p BD-PSNR (bitrate savings relative to)

| | SVT-AV1 | x265 | VP9 |
| :-----: | :-----: | :-----: | :-----: |
| VVC | 49.8% | 67% | 75.8% |
| SVT-AV1 | - | 32.6% | 51% |
| x265 | - | - | 27.6% |

Table 7(b) HEVC 1920x1080p BD-VMAF (bitrate savings relative to)

| | SVT-AV1 | x265 | VP9 |
| :-----: | :-----: | :-----: | :-----: |
| VVC | 54.2% | 59.8% | 67.8% |
| SVT-AV1 | - | 13.73% | 26.77% |
| x265 | - | - | 17.84% |
For those with a more visual inclination I have also included a graph from the paper, which shows basically the same information, in the second image.
I only highlight the 1080p recordings here as it's the most commonly used pixel resolution, as well as enough of a representation of the trend in the data: there is a fairly clear hierarchy, headed by VVC, in quality preservation (fixed VMAF and PSNR) at significantly lower bitrates.
The two main questions around adoption are licensing and encoding complexity. The former mainly affects larger platforms such as YouTube and Twitch, which have stuck with VP9 and H.264 respectively due to the inherent licensing issues presented by HEVC. I won't harp on about it, but for these platforms it is the main barrier to implementation. With torrenting, however, it has far more to do with encoding complexity, which has unfortunately trended upwards with almost every new video encoder. As an extreme example, VVC is about 9x the encoding complexity of VP9, which puts an ever greater burden on decoders as well as end user hardware, and which for community driven sources is less tolerable. With the community still being slow to adopt HEVC and with only fringe usage of AV1, it will be interesting to see how adoption rates change moving forward.
I personally don't have the hardware capabilities to do accelerated AV1 encoding, nor a particularly good CPU to use SVT-AV1 with, so I haven't much experimented with the available options. What I will say however is that currently the ffmpeg implementations (which are not the be all and end all of video encoding, but by far the most ubiquitous tooling) are lacking the options one commonly expects. To give two examples:
libvpx-vp9
ffmpeg -i input.mkv -c:v libvpx-vp9 -pix_fmt yuv420p10le -pass 1 -quality good -threads 4 -profile:v 2 -lag-in-frames 25 -crf 25 -b:v 0 -g 240 -cpu-used 4 -auto-alt-ref 1 -arnr-maxframes 7 -arnr-strength 4 -aq-mode 0 -tile-rows 0 -tile-columns 1 -enable-tpl 1 -row-mt 1 -f null -
ffmpeg -i input.mkv -c:v libvpx-vp9 -pix_fmt yuv420p10le -pass 2 -quality good -threads 4 -profile:v 2 -lag-in-frames 25 -crf 25 -b:v 0 -g 240 -cpu-used 4 -auto-alt-ref 1 -arnr-maxframes 7 -arnr-strength 4 -aq-mode 0 -tile-rows 0 -tile-columns 1 -enable-tpl 1 -row-mt 1 output.mkv
libsvtav1
ffmpeg -i input.mkv -an -c:v libsvtav1 -pix_fmt yuv420p10le -pass 1 -preset 5 -crf 26 -g 240 -svtav1-params tile-columns=1:tile-rows=0:tune=0 -f null -
ffmpeg -i input.mkv -c:a copy -c:v libsvtav1 -pix_fmt yuv420p10le -pass 2 -preset 5 -crf 26 -g 240 -svtav1-params tile-columns=1:tile-rows=0:tune=0 output.mkv
If one is not particularly familiar with the voodoo which is ffmpeg, don't worry about the specifics; the point is just that, when going for the highest quality (without constrained VMAF or PSNR) with the smallest file size possible, alongside reasonable encoding times, I ended up with something like this for both.
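As a sanity check on any encode, ffmpeg can also compute the same metrics the paper leans on, provided your build includes libvmaf (an assumption worth verifying with ffmpeg -filters); the distorted file goes first, the reference second, and the scores print at the end of the log:

# VMAF - distorted input first, reference second
ffmpeg -i output.mkv -i input.mkv -lavfi libvmaf -f null -
# average PSNR, same input order
ffmpeg -i output.mkv -i input.mkv -lavfi psnr -f null -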
I am of two minds: using presets does allow people to focus less on the actual encoder implementations, but this falls back to the ever-present question of how much abstraction one should use. Having all the options available to you that are prescribed in the write-up is an important part of a fully functioning implementation. One thing not included in the above paper is MPEG-5, which is split up into two parts, namely MPEG-5 Part 1 and MPEG-5 Part 2. Part 1 is what one would imagine when thinking of a traditional encoder, whilst Part 2 acts somewhat like an enhancement pass layered on top of an entirely different base codec. The aim is to decrease both file sizes and time to encode, i.e. the compression efficiency and computational complexity of already existing codecs. See the following presentation for a more in depth analysis of Part 2. The tl;dr claim however is that an nth generation encoder in addition to LCEVC (MPEG-5 Part 2) can perform as well as an (n+1)th generation encoder. The other wish for MPEG-5 is to tackle the increasing computational complexity of new encoders; it does this by having slightly worse compression efficiency relative to VVC (~10%) while having 3x less complexity, which many consider the only economical methodology for future encoders.
This post is a collection of thoughts I’ve had about tech youtube for the past couple years. At the bottom there is a website update, if you’re interested.
Every time something novel releases and we turn to the tech influencers for their expertise on consumer electronics, they read off press releases and give benchmarks without a story. In MKBHD's video on OpenAI's Sora, he doesn't actually say anything. No hard opinions or facts; he gives the audience the same clips everyone else is playing (along with some other clips from Sam Altman), and not much for commentary. Yes, it's conflicting that a youtuber is showing off a technology that could make his job more competitive, and he somewhat addresses that, but never in a serious way. There is this pseudo-intellectualism from youtubers sometimes where they try to sound educated and nuanced, but when you think about what they've said, it's just questions without any contribution to the overall narrative. A smaller problem is that every video/article has to establish the facts about the product in case this is the first time someone is hearing about the thing. They have to set it up for people who aren't chronically on twitter, but after all of the boilerplate, surely different media outlets will do something unique, right? They will provide some insight that the others didn't catch, maybe a novel use-case or perhaps some hot takes. This is what's missing, and that's what I mean by mojo. I want youtubers to have stakes again. I want to see a wet Linus walking down a Japanese street again.
Maybe the content hasn’t gotten worse, but I’ve gotten better. Over the past two years, I’ve gone from knowing a lot about consumer electronics to learning about software. At first, it makes sense to draw a line between the products I see and use and the software that powers them, but unfortunately no one makes content on this front. You can’t make content about this because most of the time the software isn’t open source. There’s nothing to talk about except for using the products and the landscape the product falls into. This is where tech youtube is still very valuable to me. When I hear about a new pair of headphones, a laptop, or a VR headset, a well written review will compare the gadget with its competitors and I can get a grasp of the bigger picture - nothing exists in a vacuum. It’s almost like I outgrew the consumer electronics content I used to enjoy. I would watch a review on a keyboard or a graphics card with no intention of ever buying it but I just wanted to know what’s going on.
But more and more lately, I'll watch a review and straight up not learn anything. Sure, the phone exists and it has some specs, some ups, some downs, but the picture is complete. For phones, I give a pass because the market itself is so extremely uninteresting for anything I care about. There is fierce competition in the $600 price range, but above that I'm just not going to care, and that's fine. Companies figured out phones for the most part; the fact that each generation is incremental is… fine. I don't like it, but until companies start making interesting products, I don't expect the content around them to be interesting (again, for phones specifically). However, in my next phone purchase I will be looking to put an open source OS on it, like GrapheneOS, so I'll be looking out for that, but that's not going to be until my iPhone 13 Pro takes a dive. As much as I (now, thanks in part to Natalie) have a disdain for Apple's business practices, they make really good products that I will probably still recommend to people. When M1 came out, that was big and content around it was alive. People were excited again in a way that I wish we could maintain in a slow cycle.
Take 1: Creators are only as good as the products they have in front of them. The media is a reflection of their audience so if we are going to be bored, they are going to be bored. It follows, then, that when there are boring products there will be boring content. If a creator only shows benchmarks and gives a general guide on whether a product is a good/okay or an obviously bad buy, then that’s indicative of a company doing very little to excite their to-be customers.
Take 2: The inability for creators to make a “sum greater than the parts” of the products they have in front of them demonstrates that they are lazy parasites who only do well when thousands of engineers invent a new toy. This demonstrates that the value add of creators is so little that you are better off reading press releases and spec sheets yourself.
After spending years watching consumer electronics content, I now have the muscles where I can see some benchmarks, compare the products myself and come out with an idea of what graphics card my friend should buy, or what laptop I recommend to my family member. Again, I got these muscles from these youtubers and I just hate that I’ve outgrown them. I know they can provide more insight than what I have because that’s their job! They spend the most amount of time on these things, and I don’t see how it’s possible that my intuitions are on the same level as their research… unless, of course, they aren’t doing much research at all.
Sorry I haven’t provided a lot of specifics, of course this is a general, nebulous, vibe of the content. If I were taking this post more seriously I would provide some proper evidence. I will say that I stopped watching LTT and told my youtube algorithm to stop recommending all of the LMG channels because there was a period of a couple months where I didn’t learn a single thing from any of their videos. Months! I would watch all of their videos (except for the quirky cooling videos, those were always boring and for physics nerds), even through the controversies. I wish LTT as a brand didn’t water itself down to the degree that they did. Developer content has been more of what I enjoy. Software is a rabbit hole where to be an expert in even the surface level fields will take multiple lifetimes (Rust joke here). It’s a treasure trove of interesting technologies and history of those technologies (see: X11, LLVM, and yes, web dev) that are currently being iterated on. We haven’t learned what makes the best programming language because there isn’t one. There are players currently rewriting the book on something as simple as “How do you make a mobile app?” with the new React Native, or new programming languages like Rust and more recently Gleam, or new ways to think about your operating system configuration with NixOS (potential Nix arc inbound). Above all, this hobby is more interested in creation.
House M.D., Season 3 Episode 1:
Dr. Wilson: The fifth level of happiness involves creation, changing lives.
Dr. House: The sixth level is heroin, the seventh level is you going away.
If your “hobby” revolves entirely around consumption, then it isn’t really a hobby. It’s a grocery list. Likewise, being an expert on these things is good for being the tech wizard of your friend group or your family, but, to go even further, having opinions is not a real hobby. Especially if those opinions are those that you copied - at that point it’s literally just a list of things you know.
This is why I watch so much of Theo and ThePrimeagen. They made a space on social media for senior developers to talk about senior developer topics. You don’t see them making tutorials on the difference between let and var in Javascript - you see them talk about best practices from their professional experience. Prime does something like tutorials, but when he does it it’s in a classroom with in-person participants, a projector, and a script. But the bulk of his content is talking about the industry, by way of going back and forth with Twitch chat and reaction content of other people talking about the industry. This content is much less about my desire to create and more about my interest in their world. In this space that they’ve made, I get a glimpse of a world of countless software engineers who go to work to create. And these people aren’t selling anything, they talk so much about their jobs because they have passion. I like hearing from passionate people.
I would like to mention homelab youtube. This gigantic corner of youtube also has passionate people who are just nerds. They like setting up their servers with their Home Assistant, their Plex boxes, and whatever else, because they enjoy integrating technology into their everyday lives using open source software. However, every time I look into these people’s content, it’s a) always some kind of tutorial that I’m not interested in doing right now, b) talking about some freemium SaaS that I ask myself “isn’t the point of all of this work to get rid of those guys?”, or c) an exploration into a use-case that I could never see myself doing. No, I don’t want my coffee maker, window curtains, and lighting to automagically do its thing when I wake up. I don’t like tech like that. Keep it separate - my tech lives there and I can walk away from it when I want. Yes, my coffee maker has a semiconductor in it, but it sure as hell isn’t going to have an app.
And again, that’s why I’m disappointed in what MKBHD and LTT have become. It’s like they read so many Verge articles that they forgot that they are not the Verge. They are allowed to have stakes and hot takes. The whole point of creator media is that you can cut through the nonsense and say something authentic.
PS.
The website has comments now. This was the last feature that I cared about adding. If you're interested in how I made comments work on an SSG site in Astro, you can see the source in the /gh/ link at the top of the page. Natalie wants to make it so you can do ">>number" to reference a previous comment, and I'll get around to that at some point. As well as paginating the comments, so if there's more than X, there will be a Page 2 with another X number of comments.
I don't think I'm going to rewrite the site in Go. Astro's templating language is just so, so good. Even templ, as good as it is, in a language much better than Javascript, isn't nearly as good as what Astro has built (ironically, the Astro compiler is built in Go). I started to mess around with porting over the components we made. I'm willing to put in the work to make it happen, but in the best case scenario it's just worse in every way. I'm trying to think of ways to bring the wins from Astro into Go - like literally what the ideal solution would be, all skill issues be damned, and I just can't even think of what would be better, especially with the constraint of no new file types. I have a PR where I rewrote someone else's PR for making templ's documentation website templ.guide using templ (right now it uses docusaurus, an SSG framework made by facebook using javascript). I know how to do it, I just can't justify it for our site. Component composition in Astro is good, the tooling is good, and at the end of the day, this site is still SSG. The only server that exists on the site is the endpoint I had to make for posting/getting comments. This does mean, however, that the site uses htmx for a) getting the comments on pageload and putting them in the DOM and b) sending an error message in case of an internal server error. This does break our promise of "no javascript unless specified otherwise". I don't know how to specify that the site uses this library in a way that makes sense for the site. Maybe we'll have a javascript notice with an explanation at the top. Natalie, this is something you care about more than I do, so tell me what you want that to look like. Or, I'll resolve some skill issues and handroll the javascript necessary to get the comments. There is no way around javascript unless we build the entire page on each page load through server-side rendering. In my opinion, having a Node server building every page on load for every request… for a blog site is a bit more egregious than some client-side JS to request the 1% of the site that is dynamic. Again, Natalie, tell me what you think and I'll do it.
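To make the split concrete: everything else on the site is static files, so the dynamic part really is one request you could make by hand, something along these lines (the path here is a hypothetical stand-in; the real route is in the /gh/ source):

# hypothetical route, for illustration only
curl "https://reluekiss.com/api/comments?post=example-post"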
In any case, it is nice that (besides the earlier feature of mentioning previous comments), this site is Completed. Natalie can do the quirky things that I love her for doing with the site, she knows how to do it, and I can rant about stuff in a way that really brings me joy.
I read (listened to) The Fourth Turning a few years ago and it is such a compelling narrative that I wanted to believe it. It gives this grandiose vision of American history. Basically, it says that society exists in one of four turnings in a cycle: a high, an awakening, an unraveling, and a crisis. We move through each of these approximately every 20 years, and at the time the book was written (1996), we were on the tail end of the unraveling, and so the authors predicted a crisis turning was up next. They went as far as to try to predict some of the things that might happen to usher a crisis.
This isn't a book summary; again, I listened to the audiobook like five years ago and I don't recall the details. I just had a thought earlier that I'm not sure the way we think about generations fits anymore, if it ever did at all. I remember when I was in middle school, Vsauce did a video on generations and mentioned The Fourth Turning book. He clarified that it's not just criticized, it's also unverifiable. I remembered him mentioning the book when I was on a big audiobook spree and actually listened to it. Again, it's mesmerizing. "We've been through worse, we've been through many crises, this one is not so different. Maybe we can also get through climate change." It gives a sense of honor in the struggle. If these things are possible to get through with the gumption of a generation, then, well, it's my turn. Every person goes through the Four Turnings like seasons through the year. The GI Generation had to fight their wars, and that was the previous crisis (about 80 years ago!).
Before I get to my point, I just want to point out a couple big problems that I've had in my head since listening to it. First, it's very America-centric. Maybe that's the point, but I also think if you want to make claims about human nature and society-wide trends, choosing a country which has only lived for three 80-year periods is not a great place to start. Also, is there a particular perspective by which we're making these claims? Is this about "the average person"? Is it fair to choose an oppressed class as the arbiter of what is a crisis? If you choose the narrative that historians use for these things, you're not actually choosing anyone within the historical context. It's like how we write about kings and queens of feudal times, but most people didn't care about any of that stuff.
Five years after the book, 9/11 happened, which by the America-centric view of the book would most certainly count as the crisis catalyst. In their list of potential catalysts for a crisis era, they even mention a terrorist attack on the homeland. Since then, it doesn't seem like the crisis is going to end anytime soon. We ended our wars with Iraq and Afghanistan. America right now is the closest thing to peacetime we can expect. But do you feel like we're in peacetime? Did leaving Afghanistan actually permeate throughout society? Are we just so used to wartime that it stopped mattering entirely, except if you have a personal connection to a wounded veteran? Or was the war effort in the later years so pitiful because we were just trying to prop up a government, instead of actually fighting? All this to say, if the crisis was started with 9/11 and the consequential (illegal and unjustified) wars, why did ending them not result in …anything? No one celebrated, no one cared, the media and the Republican party just got an excuse to get some easy dunks on Biden, everyone moved on. Trump and the current Republican party sprung up during this 20 year period - a new crisis. And unless 2024 is a blowout victory for Dems (it won't be), they will persist in our government causing havoc. And then there's dealing with the effects of climate change.
Someone's internet diet is a much better tell of who they are than where they are from. Internet space replaced physical space in this sense. I used to think that the generational gap exists between people who remember times before the internet existed and those who see the internet as a natural fact of life. I'm very much in the latter. But, as more younger people have different internet habits than anything I'm familiar with (see: using tiktok as a search engine), I'm realizing that someone's internet diet can vary so much among all ages that it stops being a useful fact to look at. Yes, content on the internet skews towards younger people, but any more generalizations beyond that and you get into vibe-based territory, especially for something like YouTube. Say age was a useful metric for determining someone's internet diet - couldn't you just ask them about it? Couldn't you get their interests and habits from other things about their personality? In what circumstance would you use age to assume stuff about someone and actually act on that assumption? You can't even mention technical literacy, because there are people of all ages that aren't comfortable with technology. Maybe they know about their corner of technology - music creation or running a plex server, but not how to forward a port on their router or the minutiae of Windows settings.
Generations can be a useful tool for tracking a cohort with things like homeownership, how often they change jobs, or their average savings accounts. These are economic data points that are real, actual data points. Things we can measure, and generational gaps might exist and we might use the framing to answer questions about the data. But I'm a bit tired of someone asking "Am I a millennial or a Gen-Z?", or even worse, debating what the "cut-off" is of any given generation. The cut-off doesn't exist! What do people think? Is there some authoritative body that makes these kinds of things? Who would make them? Why should we listen to them? I think if someone asks this question, it says a lot about how they view information as a whole. I get not having a good grasp of how something like this might work, but asking this question tells me that your critical thinking about new ideas is really, really shortsighted. What I really answer with is that different people come to different conclusions about the exact years, but let me pull up the wiki for the general sense. I wish I could figure out some way to bait this conversation with an unsuspecting victim and Socratic-method my way into making them realize they not only know nothing about how research on generations works, but that they have no idea how ideas work. And there's not much to it! Tracking people by generation is mostly tabloid nonsense to get older people to freak out about younger people doing things differently, or to judge them for their avocado toast.
I think that's why Millennials aren't engaging as much with the judgemental discourse. They entered the economy just before, during, or soon after the 2008 recession. They get that it's rough out there, and they remember the "Me Me Me" Generation nonsense. I'm not against using these labels to describe cohorts - I'm saying that people 99% of the time use the labels to draw vibe-based conclusions, to the point where the concept shouldn't be taken seriously anymore. So few non-economic things in our society are delimited by age. And for the things that are, you still can't construct a grand narrative. If you do, I'm immediately suspicious.
Here’s an example:
IDEA: Boomers are greedy because they use their outsized influence by way of voting, lobbying, and capital allocation to not build new housing. This causes an excess strain on younger people to pay more and more for rent, in a world where wages are not keeping up with inflation.
I’ve heard this take a thousand times, but if you swap “Boomers” with “Homeowners”, it makes more sense. Homeowners want to protect their investment, and as long as housing is treated as an investment, it will be against their interests to build new housing. Woah! We just swapped out a meaningless label for a useful one and didn’t have to adhom at all. It’s never about age! Sure, older people have higher rates of homeownership, but focusing on the older people is a misdirection. It’s never about age.
Also, I’ve always been interested in the non-American aspect of this. Do all western countries use the same age brackets for these things because of WW2? Did they just copy the Gen-X/Gen-Y/Gen-Z/Gen-A convention because of American researchers? Did they copy it all? Do people in other countries use them just through the cultural osmosis of the internet? Do non-English speakers make these categories?
What if instead of defining generations by when people were born, we talked about them from when they enter adulthood? Someone who is older than me, but lived with their parents longer and was particularly sheltered, probably doesn't have the same cultural understanding as their cohort. Maybe they relate to younger people because their growth in childhood was slowed. Entering the workforce, or otherwise adulthood, is a much more meaningful year for thinking about this than when someone was born.
People talk about pop culture from when I was a kid, and I have no idea what they’re talking about. You can’t expect someone to be familiar with pop culture for every year after they turn a certain age. And if that’s the case, then saying “Gen-Z nostalgia” is a nonsense phrase. You can have nostalgia for cultural moments that have passed! That’s more than okay! It’s just when people use the generational labels, it is an absolute, otherwise you would use some other label. If you admit “yeah of course not every Gen-Z is going to know this, in fact some Gen-Z’s are going to be born after this cultural thing!” Then you’re just using a funny label because you can’t think of a better one (or you want clicks from people debating whether something is or is not Gen-Z).
I do have a suspicion that my hate of using generations as a lens to view society is more towards how people use it than the idea itself. Maybe if I read the original two books on this, then I'll find out that there are more applications than just tracking people of different generations throughout the economy. So, I think I'll read/listen to Generations (1992) and The Fourth Turning (1996), both by Neil Howe and William Strauss, and come back to it. I want to have some evidence that our current situation was destined, and the conclusion is hopeful.
This post started inside of google docs where I was planning a video I wanted to make. Yes, a video! I want to make video content. And to do that, I want to lay some groundwork. I want to do it right. So, I want to make an Introductory Video, much like how I make an introductory blog post.
Before we get to my video content ideas, a quick aside regarding the website.
Natalie and I have really fun ideas for the website. I want to do things like have an email list, have a comments section, make an email server for @reluekiss.com, and other stuff. The problem is Astro (the JS framework that makes this website now) is not Real Programming. I enjoy Real Programming. When I run into a problem and solve it, I can rely on my previous Real Programming skills to help me, and afterwards I add that problem to my stack of Real Programming skills. When I have an issue with Astro, like in the import.meta.glob situation, there is not a Real Programming skill that will tell me what the right answer is. All of this to say, I'm rewriting the website in Go here soon, and you can bet your bottom dollar I'll be making posts about the process. Specifically, hosting SSG vs SSR (the normal way) is something I'm super curious about.
Now let’s talk about this video.
The purpose in making videos in the first place is just so I have more reason to look into new things, learn, and because I think it will be fun. The video gets me to explore a topic and it makes me form my thoughts into something coherent, which also helps with learning. Mayhaps I am only learning a topic for a video, and that’s okay, as long as I find it interesting and it’s not a very low hanging fruit.
I would like to talk to the some-number-of-people whose brains click in the same way mine does. And because rephrasing isn't enough, I could contribute more by making the tutorial code Real Programming code. As in, maybe not production ready, but not awful. I don't like how people in tutorials constantly tell you "this is not how you would really want to do it". It's like, why are we talking about this at all then? I am not looking for the bare minimum, worst-but-also-easiest implementation, I'm looking for something that is useful to me. If that's too much to talk about, then break it up, handwave sections of the example code and tell your audience that it's in the linked git repo, explain it in another video (or blog post!), but at least put in the effort to make it not awful because "it's just an example". I can also accept the "common" way to do something, and then throw in where you would optimize. For example, using gorilla/websocket is a common way to get started, but in this talk, Eran Yanay points out a bunch of different optimizations you can do. I'm not saying every tutorial needs to be this quality. I'm just saying the right answer, or something that helps lead you to the right answer, should be more prevalent than the wrong answer. I want to do a video + blog post about auth because holy moly there are actually zero resources for a production-ready roll-your-own-auth thing (probably because there's so much money in auth).
This blog is one of my favorite things. Video is not a replacement for this beautiful website Natalie and I have been making. It's more that I have an idea of what good content looks like in my head, and sometimes video makes more sense. For example, when I do posts like the import.meta.glob one, I get frustrated at myself for not being able to express what I want. Writing is hard! Like, really hard. And I think that a video where I do a code tour is a much simpler way to teach. Other posts, like my Abstraction Essay, don't click in my head as a video. I think I wrote it really well, and it is very much an essay! Video and writing are not mutually exclusive.
I think it would be cool to make a blog post, and then a video going into things that I couldn’t fit in it. I cut a lot to keep the shape of a post relevant. When I go on a tangent in one of these posts, I feel like I’m losing grip of what I’m trying to express. But I like rambling! There has to be a middle ground. Maybe one of these days I’ll make a blog post where I’m not allowed to delete a sentence if it has already been written. This post has almost none of its original contents from what I had planned.
Feel free to join our IRC channel to talk.
tv.reluekiss.com:6697
With the advent of networking we were suddenly able to send each other messages, telegrams in digitised form, almost instantly, though more likely you were part of a BBS network, as they were called, where you connected to a large mainframe, had an account of your own, and could communicate(?) with others. These rather primitive methods slowly gave way to optical telegrams or 'terminals'. This whole series of events has quite an interesting recounting by one Neal Stephenson in his book 'In the Beginning was the Command Line'. I won't link to it as it's a text file and one can find it quite easily.
But with these BBS networks came the creation of the first IRC servers. Internet Relay Chat, or IRC as I shall now refer to it, is a standardised, federated instant messaging service with only enough bandwidth to support text. I would argue that anything additional could be made up for with FTP and DCC just fine, but alas.
The IRC of today looks a little different than it did 40 years ago, with quite a few niceties that only the most staunch networks don't follow, namely bouncers, SSL/TLS/SASL, and cloaks. The first is a proxy by which one can stay continually connected to a server without missing any of the messages in the channels you wish to read. Though these days it is also possible to use SQL databases to keep track of conversations, that is by no means the norm. The second actually allows for encrypted connections, and my server only allows SSL/TLS connections, for the security of everyone connected, as any one person not using it would be transmitting all messages unencrypted. (You needn't worry though, as it's simply a toggleable option on most modern IRC clients.) Finally, cloaks allow your IP to stay hidden from everyone on the server apart from the hosting server itself.
I could drone on about the multitude of clients forever, but the following guide is very informative on setting up your own clients and even servers if you were so inclined.
Honestly it's a great protocol, and I would recommend everyone try it; the little bit of reading and application does everyone's brains some good. For those interested, our server can be found at tv.reluekiss.com:6697 (we might buy a new subdomain for it, but we really have no need).
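And if you'd like to see just how bare the protocol is before settling on a client, you can even speak it by hand over a TLS socket; openssl here only provides the encrypted connection, while the NICK/USER/JOIN lines are actual IRC commands (the nick and channel are placeholders of your choosing):

openssl s_client -connect tv.reluekiss.com:6697
NICK yournick
USER yournick 0 * :Your Name
JOIN #yourchannel
PRIVMSG #yourchannel :hello o/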
So, alsa. I began using it for quite practical purposes, in that my laptop's internal microphone would choose not to work with PulseAudio, the sound environment that my distribution (Void Linux XFCE glibc) shipped with. Figuring that if I went further down the stack I would have more control over whatever on earth had gone so terribly wrong, and after dozens of hours of turning random dials in the form of altering configuration files to no avail, I set my eyes on the monolithic asoundrc file. We will start where I started my journey: figuring out why on earth I could only hear one source of audio at a time. The answer is dmix, which will be a common theme. I would like to point out this is not a complete guide detailing all of alsa, as there are a lot of use cases and exponentially more configurations; these are just things that I have learned, with some hopefully motivating reasons.
Let's first acquaint ourselves with quite possibly the simplest configuration file we can, which can be found as .asoundrc for a user or /etc/asound.conf system-wide.
#01
pcm.!default {
    pcm "hw:0,0" # card 0, device 0
}
ctl.!default "hw:0"
Now there is still a bit to get through here. The first things to learn about are keys and values, where you can have a variety of delimiters between the two (look at section 3.1.1 of the Arch wiki to see them). You reference subkeys of keys using a "." as you might a method in a lot of programming languages. For example, our ctl.!default "hw:0" is a perfect example of key.subkey value.
A quick aside: "hw:0,0" refers to the particular sound card in your laptop, for which you can get a more detailed output using aplay -l. Also, if you ever wish to change the order of the cards, you can do so by editing the /etc/modprobe.d/alsa-base.conf file; explaining what occurs beyond indexes is outside of the scope of this post. But for an example, mine looks something like:
options snd-hda-intel index=0
options sof-intel-dspcfg dsp_driver=3 index=1
options snd_usb_audio index=2
options snd-aloop index=4 enable=1 pcm_substreams=4 id=Loopback
I mentioned earlier how with just this we wouldn't get very far: what if we wanted to listen to multiple sources of audio (PCM streams) at once, or have a microphone, or even just more speakers? In time; but first we shall talk about plugins, namely dmix, dsnoop and asym.
With a short explanation, dmix is a software mixer that allows you to overlay multiple audio outputs on top of one another, while dsnoop does much the same but for audio inputs. Now, unfortunately, it's useful to know some jargon here. As nice as inputs and outputs are, they can be restrictive when you start talking about loops and multichannels, and about what counts as an output or an input. So, someone decided to name them sources and sinks rather than inputs and outputs. You will see these terms on forums if you ever try to look things up, so look out for them.
So, our new configuration:
#02
pcm.!default {
type asym
playback.pcm "dmixed"
capture.pcm "dsnooped"
}
pcm.dmixed {
type dmix
ipc_key 1024
ipc_key_add_uid 0
slave {
pcm "hw:1,0"
period_time 0
period_size 1024
buffer_size 4096
channels 2
}
bindings {
0 0
1 1
}
}
pcm.dsnooped {
type dsnoop
ipc_key 1025
slave {
pcm "hw:1,7"
period_time 0
period_size 1024
buffer_size 4096
channels 2
}
bindings {
0 0
1 1
}
}
Much here is the same; all we need to remember is that we can feel free to nest as many keys within each other as we wish, such as slave inside of dsnooped. I would like to point out that dsnooped and dmixed are completely arbitrary names, used only to name the PCM streams. The period time, period size and buffer size are actually a rather roundabout method of pinning down the stream's timing; the sample rate we all know can be set directly instead, so do feel free to replace them as:
#03
pcm.dsnooped {
type dsnoop
ipc_key 1025
slave {
pcm "hw:1,7"
rate 44100 #48000
channels 2
}
bindings {
0 0
1 1
}
}
The bindings just make sure that the left channel goes to the left channel and the right channel goes to the right. The ipc key gives each PCM a unique id so that there isn't crosstalk between PCM streams. This is currently not that important, but it will be when we introduce the next part: loopbacks and multichannels. The former sounds quite simple: you have one source or sink and you would like to pass it over to another. For example, to be able to hear the input of my microphone through my speakers.
The reason I came across this was to be able to use a music visualiser driven by ALSA (which is very cool; if you haven't heard about it, check it out here).
First we need to set up the loopback, for which we will need the snd-aloop kernel module from your package manager of choice. As I use Void, it would be:
sudo xbps-install snd-aloop
Then, using /etc/modprobe.d/alsa-base.conf as we mentioned before, we must insert the line "options snd-aloop index=4 enable=1 pcm_substreams=4 id=Loopback", where index is the order it is loaded in, enable enables it, pcm_substreams limits the number of virtual devices to 4 rather than 8 (which feels like too many), and id is the name of the device.
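After a reboot, or loading the module by hand, you can sanity-check that the card actually appeared (the grep is just for convenience):

# load the module now (it comes back on reboot via alsa-base.conf)
sudo modprobe snd-aloop
# the Loopback card should be listed alongside your hardware
cat /proc/asound/cards
aplay -l | grep -i loopback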
Now we can return to the asoundrc:
#04
pcm.dmixerloop {
type dmix
ipc_key 2048
ipc_perm 0666 # allow all users read write permissions
slave.pcm "hw:Loopback,0,0"
slave {
period_time 0
period_size 1024
buffer_size 4096
channels 2 # must match bindings
}
bindings {
0 0
1 1
}
}
pcm.out {
type plug
route_policy "duplicate"
slave.pcm {
type multi
slaves {
a { channels 2 pcm "dmixed" }
b { channels 2 pcm "dmixerloop" }
}
bindings {
0 { slave a channel 0 }
1 { slave a channel 1 }
2 { slave b channel 0 }
3 { slave b channel 1 }
}
}
ttable [
[ 1 0 1 0 ]
[ 0 1 0 1 ]
]
}
pcm.looprec {
type dsnoop
ipc_key 2049
ipc_key_add_uid 0
slave {
pcm "hw:Loopback,1,0"
period_time 0
period_size 1024
buffer_size 4096
channels 2
}
bindings {
0 0
1 1
}
}
I would like to add that #04 is appended onto the end of #02, but for brevity #02 is not included here. The thing that stands out is the middle key block, where we have a plug and a multi plugin. Plug is actually a generic term for any plugin and can be used to simplify nesting multiple key blocks. Multi is where the fun begins. We assign two channels to both a and b, where a is our existing dmixed and b is dmixerloop, a dmix-wrapped loopback - specifically the playback end of the loopback, signified by the first number after "Loopback" being a 0 ("hw:Loopback,0,0"). Loopback endpoints are connected to one another through matching subdevice numbers:
Loopback,0,0 <-> Loopback,1,0
Loopback,0,1 <-> Loopback,1,1
Loopback,0,2 <-> Loopback,1,2
So anything sent to either end will be broadcast to the other. What we do on the multichannel is a one-way broadcast, i.e. dmixed -> dmixerloop -> looprec. But that’s just fine, as looprec is a source and can be plugged into programs like OBS to record our audio, or into a music visualizer.
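If you want to sanity-check the chain at this point, playing a tone into out while recording from looprec should capture it (device names as defined above):
speaker-test -D out -c 2 -t wav
arecord -D looprec -f cd -d 5 /tmp/loopcheck.wav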
There is one more plugin that might be worth visiting if you have custom sound controllers
pcm.softmixer {
type softvol
slave.pcm "out"
control.name "PCM"
control.card 1
}
Where you would change “PCM” to the controller name of your choice, allowing you to change the volume for that particular stream. This is especially handy if you have multiple outputs whose volumes you wish to tune differently.
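One hedged note from my own fiddling: a softvol control typically only appears in the mixer after the pcm has been opened once. After that it behaves like any other control, so assuming card 1 and the control name above:
amixer -c 1 sset PCM 50%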
Now, as the one section with a title, this part is quite different, and very much optional, especially given that BlueZ officially ended its built-in ALSA support (it was removed in BlueZ 5). However, a very dedicated person created a bridge called bluez-alsa, which allows you to still use bluetooth. The installation should be fairly straightforward using your package manager; if the package in your repository is out of date like mine was, you might need to compile it yourself. However, there is a very nice installation guide in the wiki for bluez-alsa.
The basic method they outline is to use a configuration along these lines, where XX:XX:XX:XX:XX:XX is your bluetooth device’s mac address.
device "XX:XX:XX:XX:XX:XX"
profile "a2dp"
service "org.bluealsa"
ctl.device bluealsa
This is fine if you don’t mind having no audio mixing. They also outline a method to use dmix, which makes use of this added piece of configuration:
pcm.blueout {
type plug
slave {
pcm {
type dmix
ipc_key 1026
slave {
pcm {
type hw
card "Loopback"
device 0
subdevice 1
}
}
}
}
}
With the command given here:
alsaloop -C looprec -P bluealsa:DEV=XX:XX:XX:XX:XX:XX,PROFILE=a2dp -c2 -fs16_le -t 20000
(capture from looprec, play to the bluealsa device, two channels, 16-bit little-endian samples, and roughly 20 ms of loop latency).
The system services you will end up needing are bluetoothd, bluealsa and alsaloop. The first two are simple, but the third you will need to create yourself, as each time you reconnect your device the command will end unless a service manager reruns it. I use runit, which has a very simple way of creating service files, where I create a file
$ mkdir -p /etc/sv/alsaloop
$ vim /etc/sv/alsaloop/run
#!/bin/sh
# log everything, then replace this shell with the loop itself
exec >/var/log/alsaloop.log 2>&1
exec alsaloop -C looprec -P bluealsa:DEV=XX:XX:XX:XX:XX:XX,PROFILE=a2dp -c2 -fs16_le -t 20000
$ chmod +x /etc/sv/alsaloop/run
$ ln -s /etc/sv/alsaloop /var/service
With that you should have a functioning bluetooth device. Now, I’m not quite satisfied with that, so I made a couple of scripts which I use in conjunction with dwm to make switching between hardware and bluetooth easier. You can find them in my git repository here.
Recently I have started reading and watching a lot of media from when I was younger, for a variety of reasons, but mainly a combination of nostalgia and wishing to understand what others might refer to as classics. There are a lot of sites, not unlike our own, that discuss the trends of the internet and how they have diverged from what they once were.
However, I’d more like to discuss the efforts people take to try and recreate that in a containerised form, from hosting all your services yourself to recreating media, such as these videos by a Japanese guy who uses Blender to create low-poly CGI animations and then records the output to VHS. This is honestly quite well done as an attempt to recreate old media, especially given that the result was recorded and then digitised from VHS. Now, there are a couple of things that get me: one is the inconsistent ‘low poly count’, as the hands stand out for me relative to everything else in the scenes. You either go low poly for everything equally, without any textures or reflections, or you go high poly but keep the shapes basic without too much detail. The animation is also just, too … smooth? In the 80s, and even part of the 90s, doing 30 FPS or higher in 3D was very expensive. Artists would stick to the “cinematic” 24 FPS if they could afford it; if not they would only go lower.
I can’t put my finger on this one, but almost every Blender render looks modern. I don’t know if it’s shaders or just the renderer itself but it always has that very unique Blender look that feels modern.
Regardless, I might set up an irc server. I’m not sure how many people might use it, but again there’s an article by koshka about the benefits of text-only communication. I’ve always been enamoured by it, but the communities that exist there are very much centered around tech-only topics or are preexisting friend groups. Which are great, but a lot of the other fringe cultures now seem to be in imageboards or directly in corporate walled gardens. No longer will we have shenanigans like this (though channel raids are cringe af):
Russian:
Sashok: Здравствуйте, это канал об аниме?
Да.
Sashok: Как мне пропатчить KDE2 под FreeBSD?
English:
Sashok: Hello, is this the '#anime' channel?
Yes.
Sashok: How does one patch KDE2 under FreeBSD?
Oh well, perhaps I might create something more cohesive in a bit.
Dear Natalie,
Sometimes I’m annoyed at my own music habits. In 2021, I had a bit of a mission to actually get into music. I’d make playlists, curate them to whichever mood I listen to music in, track those habits, and change my playlists accordingly. At the peak, I was managing probably 15 or so playlists. I had around 40, but a lot of them were seeds I was planting that never saw the light of day, or playlists that I had just for statistics. Sorting the songs is something that I prided myself on. When I turn on Spotify, I’m reaching for something. I found all of the things I reached for and made a playlist for each.
But I got a bit carried away. It became a massive spider web to keep up with. Two examples of this would be It’s Called: Freefall and Wishful Thinking. Both of these could belong in CALM (which really just turned into indie rock) or JAM, and at least Wishful Thinking could belong in DREAM. To help with this, I had this thing where the first four songs are immovable: if I had this question, I would look to those four songs. They made up the thumbnail! It’s basically a Constitution.
Curating these playlists followed a simple workflow. First, I would find music by using the radio or through traditional methods. Normally, this was recommendations from friends, tiktok, the Spotify radio, and my personal favorite: shazaming the song that someone is playing in their car while I’m riding with them. It’s great when I’m riding with someone one day and steal a song or two, and then when they’re riding in my car, they hear their song! And I am very forthcoming about how I stole from them. If there’s a universal rule I can state about humans, it’s that they like getting their taste in music validated.
Once a song is selected, next it’s added to Liked Songs. Liked Songs is cheap. Everything goes through Liked Songs. Liked Songs is ephemeral. The goal of Liked Songs is to not have Liked Songs. Liked Songs is the firewall protecting my playlists from having hastily-added songs in them. A playlist should not have a song in it that wasn’t first listened to thoroughly for quality and relevance. The last step of the process is to purge Liked Songs. Either promote promising pieces into perfectly procured playlists, or rescind the refrain’s reservation, recognizing return is rare.
The funny thing is, I did not do the final step in all of 2023. Today I cleared 173 Liked Songs - the earliest dating back to November 2022. This is what I meant by my listening habits sometimes annoying me. I stopped listening to Spotify as much in favor of >1 hour mixes/albums on YouTube, notably post-rock, lofi, synth, and more recently breakcore and jungle. All of these genres have something in common in that it’s hard to distinguish between any particular song unless you have a deep appreciation. I can remember a rapper’s voice, but a guitar solo? It’s hard to pin that on a specific artist. As such, discovery seems more difficult, and why bother when the mixes that the YT algorithm gives me are so good? I can tell your nonsense detectors are going off, because they’re going off for me too. I’m kind of dependent on the algorithm, not just to find new music but to play music that I like! I do have a YT playlist of certain mixes/albums I like, but I’m not meticulous at all at keeping up with it.
On average, my time listening to music on YouTube is worse than on Spotify because of how I use the platforms. I have missed listening to songs like the one we listened to the other night, Posing For Cars by Japanese Breakfast. Focused listening is what I need more of. Post-rock and lofi are great, but they don’t make me sit back in my chair, interrupting the article I was reading or the code I was writing. Sometimes I don’t want to be interrupted, but what I’ve learned is that there is such a thing as too much background music. I should allow myself to be interrupted by fantastic music.
Tonight I finally put in the elbow grease to fix my Spotify. I purged my Liked Songs, even though I probably hastily added some and unliked others. I also combined several playlists together, such as my COPII and DARK playlists; now it’s just ‘synth’ (new name pending). This is because there is a surprising amount of overlap between a typical song in COPII (named after my friend who got me into this music), like Moonlight by XXXTENTACION, and songs like почему,почему? (why,why?) by niteboi that were in DARK (very creative naming, I’m aware). If you don’t think they are similar, that’s fine, I could probably find a better example.
These combinations were hard to do, and I did them to my favorite playlists. I hope this consolidation makes the choice overload issue go away, as well as allowing me to properly sort songs to where they should go. A symptom of too many playlists is that the same song might go in two or three, possibly four! That’s not great! I’m thinking about making a website where it finds all of the duplicate songs and forces me to pick one playlist to put each in. Might be fun.
It’s funny because it’s almost like I brought technical debt into Spotify. Just like my Factorio world, and most certainly my playlist-powertools project, sometimes a refactor is necessary.
I’m very excited to listen to music with you.
Love you sweetie,
Nathan
This site came with some interesting challenges for me. I wanted to make a statically generated site that is accessible to Natalie, that allows for .html or .md/.mdx, and
has good primitives for making sure what we create on here isn’t lost to the sands of time. One of those things we did was make a little object at the start of the
posts for who wrote it, the date, the title; we’ll go more into it in a minute. Tricky thing is, I wanted to customize the AstroJS build step, something that is tricky
using the Astro.glob
function they provide. If you use this, you have to use it within a .astro file, and if you’re within a .astro file, then you can’t really import
a specific JS function. Luckily, Astro.glob is just a wrapper around import.meta.glob. This means we can use our own
glob function, and put it in a .ts file.
This post is going to explain how to get import.meta.glob
to do what you want it to. If you’re underwhelmed by getCollection
and the other default tools that Astro
provides for content management, this is for you. For those uninitiated, say you have .mdx files inside of blogs/, and your webpages inside of pages/. You might use
either of these functions to not just parse these files, but Vite will do some really cool stuff with actually understanding the content of the file. Check the docs for more,
because our usecase is very specific: read .astro/.mdx files, grab the metadata from them, do some extra processing, and make a new route for them on the website. All we have
to do for this is create the file inside of blogs/, and make sure the thing builds. If it builds, it’s good to go!
Here’s what we want to write:
export type BlogDetails = {
title: string;
date: string;
author: string;
overrideHref?: string;
overrideLayout?: boolean;
description?: string;
image?: string | string[];
tags?: string[];
hidden?: boolean;
aria?: { [x: string]: ImageAccessibility };
};
And here’s what we want to work with throughout the code:
export type ImageAccessibility = {
alt: string; // a description of the image
role?: astroHTML.JSX.AriaRole; // list of image roles: https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Roles#roles_defined_on_mdn
ariaDescribedby?: string; // if you describe the image in an HTML element, give that element an id like id="carpark-description". that way the screen reader can say "this div describes the image"
loading?: astroHTML.JSX.ImgHTMLAttributes["loading"]; // set to "eager" if image is essential to the post, "lazy" if it is not. default of this is lazy.
};
export type Image = {
url: string;
size: string;
ext: string;
filename: string;
fullname: string;
accessibility: ImageAccessibility;
};
export type Post = BlogDetails & {
id: number;
globbedImgs: Image[];
relativeUrl: string;
absoluteUrl: string;
dateObj: Date;
Component: AstroComponentFactory;
};
So two goals: make a route to each of the blog posts, and shape them into Post
objects.
Example of BlogDetails inside of a .astro file:
---
export const details: BlogDetails = {
title: "Images-in-the-terminal",
date: "2024-Jan-01",
author: "natalie",
image: ["LainLaugh.gif", "ncmpcpp.png"],
aria: {
"LainLaugh.gif": { alt: "an animated girl laughing" },
"ncmpcpp.png": { alt: "a terminal window with a music playing program open, complete with song picker and audio visualizer", },
},
};
---
In .mdx files, we would make it in yml format, like this:
---
title: Looking Forward to the Future
date: 2024-Jan-20
author: nathan
image: excitementometer.jpg
aria:
excitementometer.jpg:
alt: a gauge of excitement, towards high
---
Astro does provide useful type definitions for this; in types.ts we can do something like this
export type BlogAstro = AstroInstance & {
details: BlogDetails;
};
export type BlogMdx = MDXInstance<BlogDetails>;
And this is where the fun begins. First, we make a function that combines these two into something we can work with better.
export function extractMetadata(i: BlogAstro | BlogMdx): {
details: BlogDetails;
component: AstroComponentFactory;
dateObj: Date;
} {
if ("details" in i) {
return {
details: i.details,
component: i.default,
dateObj: parseDateString(i.details.date).dateObj,
};
}
if ("frontmatter" in i) {
return {
details: i.frontmatter,
component: i.Content,
dateObj: parseDateString(i.frontmatter.date).dateObj,
};
}
// this throw won't fire unless you ignore typescript. I just like errors (golang arc)
throw new Error(`Input: ${i} is not a valid BlogAstro or BlogMdx`);
}
Notice the component key. This is something Vite (the import.meta.glob people) and Astro (which is made using Vite) give us when we glob.
The i.default or i.Content represents the html of the file.
If you’re copying this at home, you can look at the type definitions of MDXInstance and AstroInstance for yourself; maybe there is other information
baked into them that you want. For us, we extract those two things and move on. The parseDateString() is just a function that either parses the date
or throws. We have it in its own function just for the throwing angle (did I mention I like to find bugs at compile time instead of runtime?).
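Here’s a minimal sketch of what that could look like, assuming the “YYYY-Mon-DD” format from the frontmatter examples above (the real one lives in the repo):
const MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"];
export function parseDateString(date: string): { dateObj: Date } {
  const [year, mon, day] = date.split("-");
  const month = MONTHS.indexOf(mon);
  if (!year || !day || month === -1) {
    // fail the build instead of silently rendering an Invalid Date
    throw new Error(`Could not parse date string: ${date}`);
  }
  return { dateObj: new Date(Number(year), month, Number(day)) };
}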
I’m pretty sure it’s already html at this point for astro/mdx, but in any case it’s not relevant. What’s important to us is that the mdx and astro globs come in as two different types, so we should combine them into one type. I considered naming this type but it’s just used once. It’s an intermediate step, and a quick one, so let’s move on.
Here’s the star of the show: globBlogs()
export async function globBlogs(
limit: number | undefined,
author: PossibleAuthors | undefined,
hideHidden: boolean | undefined
): Promise<RGlobBlogs[]> {
let combined: Post[] = [];
const interim: ReturnType<typeof extractMetadata>[] = [];
const blogs = import.meta.glob<BlogAstro>("/src/blog/**/*.astro");
//^? Record<string, () => Promise<BlogAstro>>
for (const post in blogs) {
const f = await blogs[post]();
const g = extractMetadata(f);
interim.push(g);
}
const mdxs = import.meta.glob<BlogMdx>("/src/blog/**/*.mdx");
for (const post in mdxs) { // Note: "in" not "of"
const f = await mdxs[post]();
const g = extractMetadata(f);
interim.push(g);
}
interim.sort((a, b) => {
return a.dateObj.getTime() - b.dateObj.getTime();
});
The generics for .glob are type assertions, so make sure your source of truth is… truthy. You could zod your way out of this,
but to me, if you have good types and throw errors when you need to, zod is irrelevant in this situation.
The string given to .glob needs to be a literal string - I don’t know how they can tell that it’s a variable, but they do. It’s okay though,
because we get individual files by passing the file that we want as the index, and calling it as a function.
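For instance, grabbing one specific file out of the glob record looks like this (hypothetical path):
const one = await blogs["/src/blog/2024/some-post.astro"]();
//    ^ the () call resolves the lazy Promise<BlogAstro> for just that file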
Then we sort! Let’s move on…
let count = 100001;
for (const p of interim) {
const id = count;
count++;
let href;
if (p.details.overrideHref) {
href = p.details.overrideHref;
} else {
href = `p/${id}`;
}
overrideHref exists in case one of us wants to make a specific url while keeping it as easy and managed as a blog post. Not sure where that would come in; probably something SEO heavy, like a tutorial. Not this one though.
let imgs: Image[] = [];
if (typeof p.details.image === "string") {
imgs = await globImages(
[p.details.image],
p.dateObj.getFullYear().toString(),
p.details.aria
);
}
if (Array.isArray(p.details.image)) {
imgs = await globImages(
p.details.image,
p.dateObj.getFullYear().toString(),
p.details.aria
);
}
We’ll get to globImages in a minute (there’s more globbing!), but the deal here is that I feel it’s easier to remember the syntax if we just allow a string or an array. It would be annoying if the build failed because you didn’t put redundant brackets around a singular string.
combined.push({
...p.details,
id: id,
Component: p.component,
dateObj: p.dateObj,
relativeUrl: href,
absoluteUrl: `/${p.details.author}/${href}`,
globbedImgs: imgs,
});
}
And there we have it! One, proper, Post[]. The rest of the function is sorting + filtering, but let’s show it anyways.
combined = combined.sort((a, b) => b.dateObj.getTime() - a.dateObj.getTime());
combined.map((c) => pushBlogToDb(c));
if (author) {
combined = combined.filter((c) => parseAuthorName(c.author) === author);
}
if (limit) {
combined = combined.slice(0, limit);
}
if (hideHidden) {
combined = combined.filter((c) => c.hidden !== true);
}
return combined.map((c) => {
return {
params: { post: c.relativeUrl },
props: { c },
};
});
}
Astro’s getStaticPaths() function expects objects with a params key, where the key inside params is what you put as the file name (for us it’s [...post].astro) and the value is the url relative to that file. As for the props, they mean we can do cool stuff instead of everything being a basic string. Let’s look at globImages()
The Vite function import.meta.glob() is not just for this; it does a whole lot, and I’m really interested in using it to convert our site from Astro to Go
when I feel comfortable doing so (the html templating just isn’t as good yet, and that’s important to me for Natalie).
export async function globImages(
imgs: string[],
year: string,
aria: BlogDetails["aria"]
): Promise<Image[]> {
const globber = import.meta.glob("/public/**/*.{jpg,gif,png,jpeg,bmp,webp}", {
as: "url",
});
let images: Image[] = [];
There is a second argument to it for options, and you can say { as: "url" } since we don’t really care about the content of the file, but the location of it.
This function is only ever accessible through Vite, so it already knows where your root folder is and what the url to get there would be.
Vite tangent: I’m not sure what the role of Vite is, and I don’t think I like that there’s a step in the toolchain for Vite to have a job. In Go, I’ve been using the builtin http router and building stuff with templ, which is just a templating language that compiles to Go code. It’s pretty great, and I get how it works. In this line of thought, we’re kind of at the behest of the limited documentation of this function, github issues, and blogs like these for interesting use cases and how to work around these primitives. I don’t like this. It shouldn’t have been this tricky for me to figure all of this out, but there’s too much magic fairy dust sprinkled around all of this. No, I haven’t looked into how Vite works, or how builders in js work in general, because I don’t want to have to care to learn about this. I like building stuff, and it seems like these layers make it more difficult for me to learn and iterate for myself. Because, just like getCollections(), the use case that the library makers think of is almost never exactly what I want. So, now I have to put in a couple hours of effort to get the last 10% that they didn’t bother to create. I’d rather it just not exist and tell me how to do it, because then that code lives with me. See my abstractions essay.
for (const img of imgs) {
const i = `/public/images/covers/${year}/${img}`;
const url = await globber[i]();
const fsPath = `.${i}`;
const size = fs.statSync(fsPath).size;
const ext = path.extname(fsPath);
const file = path.basename(fsPath, path.extname(fsPath));
const urlNoPublic = url.slice("/public".length);
One of the magic things that I do like, however, is the assumption that / is the root of the project, and not the root directory of the machine. Whenever that “just works”,
I’m really happy, because I can have absolute paths and not be scared. But, understandably, the fs and path packages disagree, so we put a little dot in front because the
astro build (probably npm run build for you) command is always run from the root directory.
Vite yells at me - it says “Don’t make urls in your application point to /public! Any static url is already assumed to be /public; don’t worry, we’ve already figured it out,
so go ahead and remove this irrelevant url”. But whenever I do, it doesn’t work, and I don’t understand why Vite has a problem with pointing to the /public folder. In
astro.config.mjs we already rename the asset folder to /a/, so it can’t be a security thing.
if (!url || !urlNoPublic) {
throw new Error(`ERROR: ${url} undefined from ${imgs}`);
}
if (!aria || !aria[img]) {
console.log(`\n=====\nNo aria for the image ${img}. Consider adding one.\n=====\n`);
}
// else {
// console.log(`aria for ${img}:\n ${JSON.stringify(aria[img])}`);
// }
const defaultAria = { [img]: { alt: "" } };
const accessibility = { ...defaultAria, ...aria }[img];
images.push({
size: formatBytes(size),
ext: ext,
url: url,
filename: file,
fullname: `${file}${ext}`,
accessibility: accessibility,
});
}
return images;
};
We have some classic errors here. I was on an accessibility kick a little while ago, and that console.log used to throw an error; I’m not sure if I should make it throw again. If you want more context from the codebase (such as the formatBytes() function that was 100% written by ChatGPT), you can look at the github repo, and if we’ve already rewritten the site by the time you get there, you can just look at the commit logs for January 2024.
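If you’re wondering about formatBytes(), here’s a hedged sketch of roughly what a function like that does; the actual ChatGPT-written one is in the repo:
export function formatBytes(bytes: number, decimals = 1): string {
  if (bytes <= 0) return "0 B";
  const units = ["B", "KB", "MB", "GB", "TB"];
  // pick the largest unit that keeps the number at or above 1
  const i = Math.min(Math.floor(Math.log(bytes) / Math.log(1024)), units.length - 1);
  return `${(bytes / 1024 ** i).toFixed(decimals)} ${units[i]}`;
}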
I’m not sure how informative this was on how to properly abuse import.meta.glob()
but I hope it helps. Here’s an example [...post].astro
if you’re really stuck.
---
export const getStaticPaths: GetStaticPaths = async () => {
const g = await globBlogs(undefined, "nathan", false);
// console.log(g)
return g;
};
const props = Astro.props as RGlobBlogs["props"];
---
<NathanLayout details={props.c}>
<props.c.Component />
</NathanLayout>
Here, we get that params object, and we filtered for just nathan; there’s no limit on how many routes we want to make, and we’re not hiding hidden posts, because even hidden posts should have a url to them. The <props.c.Component/> ends up getting put in the <slot/> of the layout from @layouts/nathan/Root.astro.
I hope this helps someone. Thanks for reading.
Ok, something a bit more interesting, and a bit more topological in nature.
5. Show that if a space $X$ deformation retracts to a point $x \in X$, then for each neighborhood $U$ of $x$ there exists a neighborhood $V \subset U$ of $x$ such that the inclusion map $V \hookrightarrow U$ is nullhomotopic.
Strap in, cause this is gonna be a long one. We’re gonna start off with an idea from point set topology, namely that if a space $X$ is contractible, then its identity map $\mathbb{1}_X$ is null homotopic.
Let us define a family of maps $f_t : X \to X$ to be a deformation retraction, such that $f_0 = \mathbb{1}$ and $f_1(X) = \{x\}$. We shall take at face value the result of question 4 even if we didn’t prove it, so we can define the associated map $F : X \times I \to X$ such that $F(y,t) = f_t(y)$. Now $F$ satisfies $F(y,0) = y$ and $F(y,1) = x$, and let $U$ be a neighbourhood of $x$. Since $F$ is continuous in $(y,t)$, and $F(\{x\} \times I) = \{x\} \subset U$, compactness of $I$ gives a neighbourhood $V \subset U$ of $x$ with $F(V \times I) \subset U$. Now restrict and construct a normalised coordinate on $V \times I$, defining $g_t = f_t|_V : V \to U$, with $g_0$ the inclusion and $g_1$ the constant map at $x$.
Gosh that was a headache, but rephrasing all that in terms of $g_t$ means that, since $f_t$ is a deformation retraction in the strong sense, $g_t$ is a deformation retraction in the weak sense from $U$ to $\{x\}$. So the inclusion $\{x\} \hookrightarrow U$ is a homotopy equivalence, and the inclusion $V \hookrightarrow U$ is homotopic to a constant map (ie null homotopic). Whooooooo!! There’s a diagram below for easier visualisation.
Now, since $g_1$ is null-homotopic and the inclusion is homotopic to $g_1$ via $g_t$, multiplying the homotopies (multiplication being the group operation of $\pi_1$, the fundamental group, which will be covered in chapter 1; idk how else I should say this, so leave me a reply when that becomes a thing) gives that the inclusion $V \hookrightarrow U$ is null-homotopic.
And that’s all folks. I am now the eeper.
Thursday was my first proper maintenance day for this site. Not much for new stuff,
design, or anything that people might notice. Just things that make it easier for me to
iterate on the site. Importantly, now the site’s urls are consistent and stable! I was
kind of scared of Natalie linking to a specific post for the past couple weeks because of
this. Also, I added alt text to images in a pretty cool way. Check out the repo if you
want, but the overview is that there’s an object in consts.ts
and it maps a filename to
an object that has the accessibility stuff as the value. So, in glob.ts
, when we grab
the images using import.meta.glob
, it tacks on this accessibility object to the image.
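To sketch it (the names here are hypothetical, but ImageAccessibility is the type from the glob post), the object is just a filename-to-accessibility map:
export const imageAccessibility: { [filename: string]: ImageAccessibility } = {
  "LainLaugh.gif": { alt: "an animated girl laughing" },
  "ncmpcpp.png": { alt: "a terminal window with a music playing program open" },
};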
Also, comments coming soon. RSS a little later. Astro is not very interested in letting
md/mdx/astro files all go into an RSS feed at once.
Here’s a question: “How do I store username and password information securely in a database and give the client a JWT or session token validating their session?”
This question seems impossible to answer. Even a high level overview, I can’t find. Every tutorial I see is incomplete, uses the one deprecated JWT library, completely skips over some vital step, or doesn’t set up a web server for the JWT to show it actually being used; I saw one that literally hardcoded the user/pass into the JWT creation part. Why would you not mention how to record that someone has been authed, with a cookie or something? It’s like these people are playing a game of telephone from other tutorials, or they cut out the useful bits because they themselves are not sure if it’s the right thing to do and they don’t want to have a security breach on their conscience.
I think I’ll write some tutorials. I’ve been wanting to do some kind of content in general
(kind of like this blog!). I really want to do a writeup on how I used import.meta.glob
to generate the sites here, because it’s 100% a hack and not the way either Astro or Vite
is documented. I also want to show off my
no-magic-stack at some point. In that
stack, I want to make a few sample web apps just to learn. It’s been so much fun figuring
out different things in Go, and I feel like I’m learning software fundamentals by using
it. After I get auth + todo app working, I’m going to make a twitch chat clone (websockets
seem really cool!), and I have a couple more ideas for after that: something to do with
wasm (I would need ideas on something sufficiently difficult for wasm to shine), and a
service status page. Part of this is to learn, part is to show off a cool stack for others
to replicate, and part is because I really want a twitch chat clone that uses
htmx/go/sqlc/tailwind to exist. I also want to flesh out things like testing, stress
testing (as in requests/sec), and token-based api auth. Maybe you, Natalie, can make some
tutorial or documentation regarding the Latex stuff you had to go through recently (unless
it actually was documented - I just know you were having issues).
Also, Natalie, you asked what I would want to do while you’re over here in the summer. I know around this time I’m going to meet up with our internet friends because they will be just a couple hours away, so if I can make it so you can come to this, that would be cool. I will get one thing out of the way: please don’t expect me to be as good of a tour guide as you were when I visited you. Some things are easier over here, I think, because of the driving angle. But I don’t really have anything in particular I want to achieve, except maybe to…
I hope this answers your question :)
Back at it again with the gooberism, and this time it’s equivalence relations!!! Apparently markdown has native maths support so I’m going to see how that works. Feels like we’re back in algebra, but it’s just making sure you paid attention to the definition of a homotopy. But lest I forget, the actual question:
3. (a) Show that the composition of homotopy equivalences $X \to Y$ and $Y \to Z$ is a homotopy equivalence $X \to Z$. Deduce that homotopy equivalence is an equivalence relation.
A map $f : X \to Y$ is called a homotopy equivalence if there is a map $g : Y \to X$ such that $fg \simeq \mathbb{1}$ and $gf \simeq \mathbb{1}$. And another definition (I said it was just memory recall): two maps $f_0, f_1 : X \to Y$ are homotopic if there exists a homotopy $f_t$ connecting $f_0$ to $f_1$. Here, one writes $f_0 \simeq f_1$.
In this question we are asked to prove that homotopy equivalence is transitive, which then implies it’s an equivalence relation, as $\mathbb{1}$ is a homotopy equivalence (reflexivity) and $X$ is homotopy equivalent to $Y$ iff $Y$ is homotopy equivalent to $X$ (symmetry, straight from the definition).
To that end, suppose that $X, Y, Z$ are topological spaces and that $f : X \to Y$, $g : Y \to Z$ are homotopy equivalences with homotopy inverses $f' : Y \to X$, $g' : Z \to Y$ respectively. By definition, then, $ff' \simeq \mathbb{1}$, $f'f \simeq \mathbb{1}$ and $gg' \simeq \mathbb{1}$, $g'g \simeq \mathbb{1}$. It suffices to show that the map $gf : X \to Z$ is a homotopy equivalence, which is immediate due to the fact that it’s continuous (the composition of continuous maps is continuous) and that $f'g' : Z \to X$ is a map for which $(gf)(f'g') = g(ff')g' \simeq gg' \simeq \mathbb{1}$ and $(f'g')(gf) = f'(g'g)f \simeq f'f \simeq \mathbb{1}$.
Which gives us what we wished for.
(b) Show that the relation of homotopy among maps $X \to Y$ is an equivalence relation.
For it to be a complete equivalence relation we also need to show the reflexive and symmetric properties; however, these are very easy. If $f_0, f_1$ are homotopic maps under a homotopy $f_t$, then reflexivity follows from the constant homotopy (so $f_0 \simeq f_0$) and symmetry from the inverse homotopy $f_{1-t}$ (so $f_1 \simeq f_0$). So we again need to show transitivity.
Suppose that $F : X \times I \to Y$ is a homotopy connecting $f_0$ to $f_1$ and that $G : X \times I \to Y$ is a homotopy connecting $g_0$ to $g_1$, where $f_1 = g_0$. Define, then, a map $H : X \times I \to Y$ which takes on the value $F(x, 2t)$ for $0 \le t \le \frac{1}{2}$ and the value $G(x, 2t-1)$ for $\frac{1}{2} \le t \le 1$. Continuity of $H$ is immediate due to the fact that it is made continuous at $t = \frac{1}{2}$ (the only point of concern) by $F(x,1) = f_1(x) = g_0(x) = G(x,0)$, whereby it follows that $f_0 \simeq g_1$.
I’m not doing c, quote laziness.
I was looking through some stuff and found an interesting resource: there was an article I read which included the following image, showing an interesting breakdown of Linux usage by country. In summary, most were about 10%, with some interesting stand-out countries.
These two seem like an interesting case of Linux adoption in the PC market. On the one hand (in my limited view at least) they both seem stereotypically quite technologically futuristic (Japan mainly, on this one). However, their adoption of Linux seems to be fairly limited. I can posit a couple of reasons for this:
Which brings me onto the next strange country, which seems to have only half the rate of Linux users of other developed countries. As Linux was developed with English in mind, it’s interesting that adoption is not as common as in other places.
Yes, I will work on more Hatcher stuff soon (I have one in the works, but putting it on here is a bit of a pain atm cause I haven’t automated it yet, and I also just haven’t finished it yet).
Also, 4chan is so cucked lmao. I have a bookmark that I usually use that goes straight to the /tg/ catalog, but I wanted to change it to be the homepage, and there was a fricking cloudflare ddos captcha on it, which…
But on a more normal note: hi Nate, in one of your earlier posts you mentioned talking to each other through these posts and I thought that was really cute. I wasn’t sure if I wished to do that through my maths stuff, so this is here for now. Question of the post! What should we do in the US while I’m over there in the summer? It’ll be a while and I know you’ll be doing work, but it would be really nice to meet some friends while over there (on the east coast at least).
Also, gotta get a working rss feed at some point. (For those that don’t use them, they’re really damn useful; I might do a write-up about them one day.)
I have this recurring nightmare where I make a website, ship it, forget about it for a decade, and then I need to do a decade’s worth of catching up to ever work on it again. This is my personal problem with abstractions. They are really, really great for most software developers who work in companies shipping products. You can rely on there being a software team for the next decade to maintain, and if necessary rewrite, the website so work can continue. Developer experience is important for this reason. If you ship products quicker, your company will be more successful than the competitor that ships products slower (even if theirs is marginally better/faster/more sustainable). Just for clarity, when I’m thinking about this example, I’m referring to NextJS/React/RSC/that whole paradigm. And this isn’t much of an original thought, it’s an argument that makes sense from Theo. In an age where compute and data storage are so incredibly cheap, why bother? Especially before you have properly figured out your problem space, you need to pick technologies that let you quickly add features and rewrite ones that you inevitably mess up.
But what if we had both? What if the ramp-up time for something like the no-magic-stack that I’m building were just the same as for something like Remix, NextJS, or SolidJS? What if there were something using go that had a shockingly similar amount of DX? What if you didn’t need 300 node modules to get a todo app running? And, really interesting to me, what if your entire website were one single binary that you could ship anywhere (including static files)? I don’t know if that binary would be any better than traditional deployments, but it sounds fun. Speaking of deployments, what if they were as easy to make cheap as NextJS and AstroJS? With those, you just use the primitives that they give you, and because the hosting platforms know how the frameworks are designed, it’s plug and play.
Yes, using htmx and go is a different mental model than React, but for me at least, it just clicks. I’m sure other people have the same thing with Server Components, or React Router, or whatever else. This is what makes sense to me, personally. So, when I say how great and easy it is, I need to keep in mind my bias. I’m still a beginner to programming, and I very much remember not that long ago watching tutorial after tutorial of someone who thinks they know what to say to an inexperienced person, but is blinded by the fact that they are not themselves inexperienced anymore. The most I can do is contribute to the conversation with what makes sense to me, personally.
So, what makes sense to me? Well, I’ve struck my personal balance of comfort with abstractions. Go is the middle ground; I feel like it being a fantastic http server language is obvious.
Sqlc is barely an abstraction - you port in your database schema in SQL, you write your
queries in SQL, and it parses these two to output typesafe Go code. If you look at the
.go
files it generates, it’s not using its own library code for these queries. It uses
the builtin database/sql
for the connections and requests to the database url. It uses
github.com/lib/pq
for the Postgres part of the database and github.com/google/uuid
for
UUID types. I’m sure there are others, but this is a sense of how extremely typical and
unobtrusive this generated code is.
Tailwind literally is CSS - even if it uses some JS to generate it, and the standalone binary literally uses nodejs, I don’t care because what comes out is a blank, normal CSS file. It’s difficult to call tailwind an abstraction, it’s more like they just did the work of making useful classes for you. All the program does is add in those classes selectively based on what you’re using so that way you don’t ship a 190kB file to every single user.
The most library-code of the stack is between a-h/templ
and htmx. templ is probably the
more difficult one to justify. You can think of templ as React functional components that
aren’t very capable of running Go code within them. It’s really just html templating, but
the primitives of actually using it are so much better than anything else. It’s a function
that returns html, accepts and gives LSP support for structs/maps/slices/strings/whatever.
You can’t put that complicated of logic inside of the templ components, but this is fine.
Some might go as far as to say that logic doesn’t belong at all in components that return
html.
And that brings us to htmx. This is the library I know the least about, is the most “library” kind of piece, and is probably the most powerful of all of these. Sqlc, tailwind, and templ all aren’t that different from what they’re supporting (you write sql for sqlc, you still need to know css for tailwind, and templ is just html+go templating); all require very little new knowledge to learn, and all provide type safety. Htmx does none of these. Although there is an htmx-go library that provides server-side typesafety, it would only be useful if it also provided typesafety and LSP support between the client (.templ files) and the server. I believe this is impossible even in theory unless one side (client/server) changes its syntax to allow for static analysis.
Sure, htmx is easy to take from 20% -> 80%. But first you must understand how a server responds with html, what that looks like, the swapping strategies, and what the hell ajax is and why the documentation mentions it all the time. It sounds so painfully obvious now, but there were a few weeks where I would watch “let’s build a crud app” tutorials and be more confused than before. It wasn’t really until I built the todo app myself that I really understood what htmx did for me and how to use it. Again, I know it very little, because there actually is kind of a lot to learn to get really good with it. But htmx seems really cool because it allows me to write so much less code for client-side interactivity. It allows me to not care about the client much at all, because the server renders the html and htmx sends that html to the dom. Also importantly, it’s one javascript file, which I can host myself and keep forever. There will be this version of htmx available for free on the internet forever. I could see myself giving up templ, because it’s an abstraction that is relatively unstable and I’m sure I could figure out the builtin templating if I cared enough to, but htmx solves so many problems and I assume it’s extremely stable. By allowing me to send html to the client and have the client know what to do with it, htmx is what allows me to build this at all. Because this is the same problem that React tries to solve, and it does it well.
But, I want to care about performance, and I want to not have my recurring nightmare. This stack seems to be comprised of parts that will exist in 10 years. Go will exist in 10 years. SQL will exist in 10 years, and even though the sqlc parser might not keep up with my database of choice (for now, postgres), by using sqlc I am building the SQL muscles I need anyway. CSS will exist in 10 years, and I’m sure the tailwind classes will still work, and I will bet that Tailwind will be updated with whatever new CSS comes out, because tailwind is how most people write CSS and I don’t see that changing unless the browsers implement CSS differently. Will templ exist in 10 years? Maybe not, and that will be a shame. It’s this fact that makes me consider using go’s builtin templating, but the primitives are so good. Also, what I’m writing can so easily be transferred in the future - it’s just html with some for loops! Will htmx exist in 10 years? Almost certainly, and if not, I’m sure this current version of htmx will be just fine for most stuff. If it’s good enough now, it’ll be good enough in the future because, AFAIK, there aren’t really any security considerations with htmx.
In the most abstract way possible, this stack is what I think the web “should be”. Say you have a SPA, using a javascript library on the client to do client-side navigation. That is nowhere near what the web was intended for. Putting aside the performance, SEO, and UX issues with SPAs, they are, in my mind, illegal. You shouldn’t make them, because you are ignoring, putting aside, and almost mocking what thousands of engineers have spent three decades curating: the web. To use javascript to treat the browser like a native application is, again, in my mind illegal. (This concept is kind of difficult to come across, so I’ve made these statements more pointy for conveyance reasons. I don’t actually think that people should be in jail for making SPAs. I do think it was a mistake.) In this stack, we go back to what the web is supposed to be, not because these are arbitrary standards handed down from on high, but because these standards will continue to exist in the next 10 years.
I’m building this stack because I think it will help me rest easy.
Okaaaaaaaaaay, it’s finally time to write some math runes to the web at long last. I’ll be working through Hatcher’s Algebraic Topology, as I ran through it a couple years ago and, as many people do, I went through it too quickly. So that I have an actual foundation in the subject, I’m running through the problems; each section has a multitude of them, and I’ll try to work through every other one just so I’m not still working on part 0 next year.
So the first question:
1. Construct an explicit deformation retraction of the torus with one point deleted onto a graph consisting of two circles intersecting in a point, namely, longitude and meridian circles of the torus.
So, the first question of the intro section. This should be pretty simple, and it is (kind of). As with most first questions, we are going to need a couple definitions, foremost what a deformation retraction is:
A deformation retraction of a space $X$ onto a subspace $A$ is a family of maps $f_t : X \to X$, $t \in I$, such that $f_0 = \mathbb{1}$ (the identity map), $f_1(X) = A$, and $f_t|_A = \mathbb{1}$ for all $t$. The family $f_t$ should be continuous in the sense that the associated map $X \times I \to X$, $(x,t) \mapsto f_t(x)$, is continuous.
The first trick is to use the following construction, where the torus $S^1 \times S^1$ can be obtained by identifying opposite sides of the square, which can be seen here.
Let $I = [-1,1]$ be an interval, so that $I^2$ is a square in $\mathbb{R}^2$. Without loss of generality we choose the origin to be the deleted point. So now we need to construct a map $f$ on $I^2 \setminus \{0\}$, which takes a lot of fiddling, but you can find it to be: $f_t(x,y) = (1-t)(x,y) + t \, \frac{(x,y)}{\max\{|x|,|y|\}}$.
This works from the requirements that $f_0(x,y) = (x,y)$ and $f_1(x,y) = \frac{(x,y)}{\max\{|x|,|y|\}}$, as the latter is an element of $\partial I^2$; the restriction $f_t(x,y)|_{\partial I^2} = (x,y)$ holds because there $\max\{|x|,|y|\} = 1$. I hope continuity should be easy to see. □
Alright one down, ... a ton to go, see you all tomorrow.
Nvm, here’s another one cause it’s basically the same. 2. Construct an explicit deformation retraction of $\mathbb{R}^n \setminus \{0\}$ onto $S^{n-1}$.
If you remember some linear algebra, a vector in $\mathbb{R}^n \setminus \{0\}$, once normalised, lies inside $S^{n-1}$. So we just need the map to be a normalisation process which is continuous; it is much like the previous question but over $n$ variables, ie $f_t(x) = (1-t)x + t \, \frac{x}{\|x\|}$.
Just to check our bases: the function $f_t$ is continuous for each $t \in I$, $f_0(x) = x$, $f_1(x) = x/\|x\|$, and $f_t(x)|_{S^{n-1}} = x$ due to the fact that $\|x\| = 1$ there. □
And if you thought I was doing these in order you’re out of luck bucko, I might make an index page that links to each question at some point, but that doesn’t exist yet so :3.
20. Show that the subspace $X \subset \mathbb{R}^3$ formed by a Klein bottle intersecting itself in a circle, as shown in Figure 1, is homotopy equivalent to $S^1 \vee S^1 \vee S^2$.
Let $X$ be the figure shown, intersecting itself at a circle $C$. The main…
I’ve been programming as a hobby for about two years now. I started with Python, moved on to Typescript through React, and since then I haven’t quite moved on as much as I would have liked to. The unfortunate reality is that the tooling around web-dev server-side javascript is really, really nice. I’m currently fleshing out some examples in my upcoming project no-magic-stack - a stack using Go’s builtin http router, htmx (super cool, but also popular right now, which makes it less cool), tailwind, sqlc (really cool), and templ (really really cool). I have been implementing things that you get for free in javascript land for weeks now and it’s not easy. I think it’s worth it, and once I get up and running on the stack I have some ideas on cool stuff to do, but the tooling just isn’t there. This is important to me because, as much as I hate the frontend, I really like making websites. It’s the thing that you are able to show other people with a link, something everyone can understand; you think that what you’re making (once finished) will be, if not a big hit, something that you can hang your hat on. Of course there is pride to be had in every corner of software engineering, and of course it’s vain to care as much as I do about showing off something to my friends. But dang it, I like showing my friends cool projects… even if I haven’t finished one yet.
Sometimes, through the couple years I’ve been doing this, I will have spent 10 hours in a
day on a project. Today was one of those days and I’m really happy with it. Not all of
those 10 were equally productive - much of it was fighting CSS to get the text to wrap
around the images like Natalie wanted it to (it was float-left
the whole time!). But I
made a few really solid improvements to the site and to Natalie’s ease of use in making
posts, as well as customizing it in the future. I think the types I’ve set up for
everything are stable, and the code blocks are stable, and the images are stable; I’m
really happy with how this has turned out so far.
One of the reasons I haven’t finished a project is because I don’t know how to get off the ground with frontend design. Everything I make is really cool, but when I go to show it to someone, they recoil with just how god awful the css is. So, I try and try to make it look good, not just for them but also for me so I can call it “complete”. Give it the gold star it deserves and move on. Granted, this has only been two projects, but one of them I’ve rewritten at least three times so that’s got to count for something. Natalie brought this superpower to this project where she has a proper vision of what the site should look and feel like, and she did the first 80% of it. I did some more tricky css bits, but it was her vision the whole way through. I don’t know how to make a design that has personality - I copy what I can from flowbite and hope that it doesn’t look like something out of the chatbot.
One really cool idea she had was that if there is an image to a post, it is not in the body of the post but rather in the attachments at the beginning of the post. This is extremely similar to image boards, but ours is cooler because we can link multiple images and say within the body “Image 2 is a picture of a dolphin”. I like this because it keeps the body unobstructed. To this end, I have attached a screenshot that is the website in its current form. (Edit: I added it and I hated seeing it on the site. Some kind of recursion creepiness combined with it being a lil pretentious. If you want to see it, look at the git history here.)
Up next to the site is hooking up a proper database so we don’t have to do this manual file nonsense. Doing it this way seems very prone to error because how will we know if a link is 404’d forever because of some silly move or overwrite or something? I expect this site to exist ten years from now, so I want to act that way. I also want to act in such a way that I don’t have to maintain it for ten years but for some reason that seems unlikely (I have not ever finished a project, remember?).
Also I need to figure out how time works. We make these posts, and the time part is
optional. So, if we have a time, display that. If there’s no time, js will default it to
12:00AM which is logical. So, if you don’t see a time in the future that’s why. But then
we have to figure out how time zones work. When you run toLocaleString(), it says it does
, it says it does
it in local time - but what the heck does that mean? We’re on serverless, and we
statically generate the html. I know there’s a way to include timezone as well in the
creation of the date object, so again I might do it I might not. For future reference, for
the time being, I’m in EST.
And, we made all of these changes without sacrificing client-side js. There still isn’t any anywhere (check your network tab !) and if there is, we’ll make a little thingy somewhere about it. I see this as a challenging limitation, as well as a good principle to follow. I think having no client js is worth some amount of sacrifice, just probably not to the degree as Natalie.
Oh, also, I had this idea for Natalie. The vlogbrothers (hank and john green) make videos on their youtube directed at each other. It started as an exercise back in 2007 or whatever because they felt like it would be a cool project to make them closer as brothers, and I thought that it might be cute for us to do that here. Not every post, but maybe once a week, either of us writes a letter to the other about whatever. And maybe a rule is that we can’t talk about the post except for within our reply (just an idea, that would be impossible to follow through with).
I’ve wanted to make a personal website for a while, but didn’t have any good reason to. Sure, I could make one for myself, but really the only thing I could think of would be a portfolio site. And given that I’m not particularly interested in getting a web-dev job anytime soon, and also that any web-dev jobs I could ever dream of getting probably wouldn’t be impressed by whatever I’m able to copypaste from Flowbite - that wasn’t interesting. But we had an idea; I said to Natalie that I should make a blog site for us, and here it is.
I’m Nathan - and on this side of the site I’m really excited to post about things I’m learning about. Right now that’s web-dev, golang, and (soon) networking. I’m probably not going to post too personal of things, at least until I’m comfortable with the potential of my internet and irl identities colliding - but Natalie is perfectly free to post whatever she wants.
The site is built with AstroJS - something that’s essentially a build tool for html templating. We’re not using any ui libraries (react, vue, svelte, etc), just plain html. Html templating is really all we need, and if I want to do some client-side interactivity then I’ll throw on HTMX and play around with that. Natalie really cares about not having client-side JS, so I’m going to put a flag somewhere whenever I do decide to make client-side JS.
I did want to use something like Go to do the whole SSG situation, but I didn’t want to make Natalie learn any go or templ (a really cool html templating library for go), and even so I wouldn’t know how to do all of the cool SSG stuff that AstroJS comes with out of the box. I have heard of Hugo, and even though that has more stars on Github, I just think that adopting AstroJS and leaving AstroJS will be easier than Hugo. Also, I already know about Astro, and the DX is absolutely insane. That DX really matters to me because Natalie’s experience in making her posts and her css and whatever it is that she wants is important! I want her to have fun, and the higher the learning curve the less likely she is to have fun.
This said, I very much plan on moving off of Astro to something that is decades-long stable as soon as possible. I really want to have an in-browser post creator that uploads to a database that we host, instead of making .astro or .mdx files manually. Doing it this way makes sense for what I wanted (get a site up to production as quickly as possible so Natalie stays excited about the project) - but it’s not what I want for the long term. I’d like to have a router written in some language (it really doesn’t matter, I’m just on a Go hype train right now), dependent only on the standard library (something Go is known for!). This does, however, come at the cost that instead of caring about abstractions and third-party libraries, I have to care about and maintain my own code. There’s definitely a situation in which this does not pay off. But I think if I write it well enough and document it well enough then I could make a solution that lasts decades. Or maybe that’s a pipedream.
But yeah, next step - separate the data (these posts) and the view because the fact that they’re together is going to make more work during the eventual migration. But first first, Natalie needs to write some css…
I have my tisms as well, and to be fair I’ve never made a website, so I couldn’t tell you the first thing about this stuff. However, I have messed around with html and css, and it doesn’t seem like it would need as much finagling as astro to ‘just work’.
But regardless, this is very cute, and when I can figure out how to use css I can’t wait to do stuff with astro. The idea of render-time JS is interesting. It doesn’t negate the possibility of doing fingerprinting and other shenanigans during render time, which is my main gripe with the language and how it’s abused. For example, this startpage that I use for my browser kinda just works.
#!/bin/bash
export PS1=''
UB_PID=10
UB_SOCKET=""
# kill any previous instance, then start a fresh ueberzugpp layer
pkill -x "ueberzugpp" || true
UB_PID_FILE="/tmp/.$(uuidgen)"
ueberzugpp layer --no-stdin --silent --use-escape-codes --pid-file "$UB_PID_FILE"
UB_PID=$(cat "$UB_PID_FILE")
export UB_SOCKET="/tmp/ueberzugpp-$UB_PID.socket"
CACHE=/tmp/albumcover
# poll for a flag file; when it appears, thumbnail the current song's cover art and draw it
while true; do
if [ -e /tmp/albumflag ]; then
rm /tmp/albumflag
#SONG=$(cmus-remote -Q | sed -n '/^file/s/^file \(.*\)$/\1/p')
SONG=~/Music/"$(mpc --format %file% current)"
ffmpegthumbnailer -i "$SONG" -o "$CACHE" -s 500 -q 10
ueberzugpp cmd -s "$UB_SOCKET" -a add -i PREVIEW -x 0 -y 0 --max-width 200 --max-height 200 -f "$CACHE"
clear
#exiftool -Lyrics "$SONG" | sed -e 's/\.\.\+/\n/g' -e 's/\./\.\n/g'
fi
sleep 1
done