Ask HN: Old Unix technologies like cron and SSE in modern workflows?
I've been in the industry for over a decade and find myself repeatedly returning to battle-tested UNIX technologies despite the constant stream of new frameworks and tools.
Cron jobs still power critical scheduled tasks across our infrastructure. Server-Sent Events handle real-time updates more efficiently than many WebSocket implementations I've tried. Shell scripts glue together components that modern architectures claim to integrate seamlessly.
I'm curious how others are leveraging these older technologies in modern stacks. Have you found creative uses for old UNIX tools that outperform newer alternatives? Are there specific combinations of old + new that work particularly well? Any war stories where a simple UNIX solution saved the day when modern tools failed?
I am a young developer (22) but find myself using "old" frameworks / programs from the Unix world almost all the time. I find most of the modern tools just overwhelming and hard to debug due to the increased complexity. For me, Unix is like a construction kit made up of individual components that simply work, Do One Thing and Do It Well - this always ensures that you can just glue that stuff together.
I am always amazed that many of these things (Unix, C, IP, Ethernet) were developed in the 70s/80s and are still relevant and useful today. At that time, people also had a technical interest in developing software and had to deal with very limited resources; maybe that is no longer the case in many places today.
I am helping maintain a big automation system for a customer which generates several million in sales per year and is built with just simple Unix tools, Perl, MQTT and nginx - a very low-tech stack - running on a small 2-core machine.
The cloud-based solution that existed before had enormous performance problems, ran unreliably, was difficult to debug, and was several times more expensive in terms of both development and running costs.
I cannot say it is battle tested since I use it for personal use only: libvirt with a serial console for VMs, and netboot for Linux installation (I am not fully netbooting, since I use the netboot.xyz ISO, which is faster than provisioning a proper DHCP+TFTP server or setting up netboot in the VM's UEFI). It makes everything much, much faster and more comfortable; almost every Linux distro installer works properly over serial (except RHEL, CentOS and Fedora, which say that a few options are unavailable in TUI mode). The only thing that needs to be done to make it work is to provide a kernel option for serial output (console=ttyS0,115200n81).
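For anyone who wants to reproduce that setup, a minimal sketch (the VM name, file paths and exact GRUB variables are assumptions here and vary by distro):

    # On the host: attach to the guest's first serial port through libvirt.
    virsh console my-vm

    # Inside the guest: point the kernel (and GRUB itself) at ttyS0.
    # /etc/default/grub is shell-style, so these are plain assignments:
    GRUB_CMDLINE_LINUX="console=ttyS0,115200n8"     # baud, parity, data bits
    GRUB_TERMINAL="serial console"
    GRUB_SERIAL_COMMAND="serial --speed=115200"

    # Then regenerate the GRUB config:
    update-grub                                     # Debian/Ubuntu
    # grub2-mkconfig -o /boot/grub2/grub.cfg        # RHEL/Fedora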
When I was using the RPi more intensively I used a serial (UART) connection as well. No need to switch the keyboard, mouse and monitor in case of a failed boot or a broken network connection. I would just open a serial console on my desktop; having the RPi's Linux terminal and a web browser on one screen, and being able to copy text between the two windows, made Linux boot debugging much more comfortable.
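In case anyone wants to replicate it, roughly what that looks like from the desktop side (the device path depends on your USB-serial adapter, and 115200 is the Pi's usual console speed - both are assumptions):

    # On the Pi: enable the UART console by adding this to /boot/config.txt:
    #   enable_uart=1
    # On the desktop: open the serial console over the USB-serial adapter.
    screen /dev/ttyUSB0 115200       # or: picocom -b 115200 /dev/ttyUSB0
    # Detach from screen with Ctrl-a d; the Pi's boot log and login prompt
    # appear in this window, right next to the browser.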
cron jobs are great
tmux + vim > ide (for me)
ssh > rdp
git
terminal
This stuff is so old now, and largely does one thing and does it well. I prefer composing Unix tools to writing my own scripts, when it's possible. When I do write my own scripts, they often need to be run from a cron job anyway, soooooo... (something like the sketch below).
<3 old tech
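The kind of thing I mean - a nightly crontab entry that just composes existing tools instead of a custom script (the database name, host and paths are all hypothetical):

    # m  h  dom mon dow  command
    # 02:30 nightly: dump, compress and ship a backup, all piped together.
    # (The \% is needed because cron treats a bare % specially.)
    30 2 * * *  pg_dump mydb | gzip | ssh backup@archive "cat > backups/mydb-$(date +\%F).sql.gz"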
tmux (2007) and Git (2005) are not so old, especially compared to the last Unix workstations, which were released around the same time :D
SSH - great tool, even Microsoft adopted it
I use SSE to send web users on to another page (via a redirection header) while still processing other long-running stuff for them. The result: non-blocking web access; users don't have to wait for everything to be processed before they see a confirmation message. E.g. a user signs up, and you can show them that their account has been created instantly while still creating the relevant services (e.g. a virtual server) for their account.
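A minimal sketch of that pattern, with plain shell standing in for the real app server just to show the SSE wire format (ncat comes from the nmap package; the port, paths and event names are made up):

    #!/bin/sh
    # sse-demo.sh - emit one SSE response on stdout.
    # Serve a single connection with:  ncat -l 8080 --sh-exec ./sse-demo.sh
    # (ncat wires the client socket to this script's stdin/stdout.)

    printf 'HTTP/1.1 200 OK\r\n'
    printf 'Content-Type: text/event-stream\r\n'
    printf 'Cache-Control: no-cache\r\n'
    printf 'Connection: close\r\n\r\n'

    # First event: the page's EventSource handler can navigate away right now.
    printf 'event: redirect\ndata: /account/welcome\n\n'

    # Keep streaming progress while the slow work runs in the same process.
    for step in "creating account" "provisioning virtual server" "done"; do
        sleep 2                                  # stand-in for the real work
        printf 'event: progress\ndata: %s\n\n' "$step"
    done

curl -N http://localhost:8080/ shows the events trickling in one by one; in the browser the same stream is consumed with new EventSource(...) and a couple of addEventListener calls.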