I wish Dijkstra were alive and on Twitter. Quotes like this one are perfect for the place: "The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offense."
Hello world
I recently read the article Every programming language needs its killer app to succeed and think it makes a great point. However, the term "killer app" doesn't quite fit the examples: only Ruby lists Ruby on Rails as its killer app, and that's basically it.
Instead, I think it's about having a killer domain. So, going through the examples from the original article, here is my take:
I was just listening to an argument that got me more and more interested as it went on. The author did a great job of first taking the opposite position, then building a case for their side, and then making a strong counterargument.
Essentially, the argument went something like this:
The Wi-Fi connection is unreliable.
Some people in the household say that the problem is that the router is too old.
However, after testing and reading reviews, it seems that the router is not the problem.
Therefore, the problem must be that the government is corrupt since they send dangerous radio waves to our house.
I have spent quite a few hours building open source software, and a lot of things have gotten easier over time. For example, setting up tests, CI, documentation, and websites has gotten much easier. Interface design, however, somehow has not. I already asked about it 3 years ago. If anything, it has gotten worse, since I nowadays realize more how important it is to get the design right. Especially once libraries become more widely used, there is a real cost to introducing breaking changes.
Recently, I was thinking again about where AI is going. I was wondering what I as a developer should, or should not, be building. I wrote a post about my thoughts and concluded two things:
Firstly, cloud-based AI gets about 90% cheaper every year. If your AI application costs $1 per day to run now, next year it'll cost 10 cents, and just a penny the year after that. And while the price drops, the models keep getting better.
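To make the compounding explicit, that 90% yearly drop is

$$ \text{cost}(t) \approx \text{cost}(0) \times 0.1^{\,t} $$

with t measured in years, which is where the one-dollar, ten-cent, and one-cent figures come from.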
Secondly, the best AI tools don't necessarily come from the big technology companies. Take Cursor AI for example. Microsoft had everything needed to make the best AI code editor: the most GPUs, the most popular code editor (Visual Studio Code), and years of AI experience. But a small startup built Cursor, which many developers now prefer. The same happened with DeepSeek. Google, Meta, Microsoft, and Amazon are all spending billions on developing the best models. But DeepSeek came out of nowhere and delivered great results. This isn't new. The same thing happened with Google in the early 2000s. AltaVista was the biggest search engine until Google, a small newcomer, made something better.
Now that DeepSeek released their new model, I'm thinking again about where AI is heading. For some technologies such as electric cars or batteries, I think I have a reasonable idea of where they are heading thanks to learning curves. Due to price decreases, we will probably see more electric cars, drones, ships, and trucks. Oh, and of course stoves that can boil water in 40 seconds. For AI, I'm not sure. In this blog post, I'll try to estimate the learning curve for AI and see if I can use that to make predictions.
When generating a website (typically HTML and CSS files), it is often useful to have a live reload feature. This means that your browser automatically reloads the page when the files change. For example, say you write some code that generates a plot on a webpage, or that generates a WebAssembly module that is embedded in the page. In the past, I would use tools like webpack or try to manually establish a socket on the server and inject JavaScript into the page.
I recently found a much simpler solution.
Just use Bash together with any server that can serve static files and inject live reloading, such as live-server.
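Here is a minimal sketch of what that can look like, assuming a hypothetical `./build.sh` that writes the generated site from `src/` into `public/` (both names are just placeholders for this example):

```bash
#!/usr/bin/env bash
# Sketch of a live-reload loop: live-server serves the static files and
# injects the reload script; the loop rebuilds whenever the sources change.
# `./build.sh`, `src/`, and `public/` are hypothetical names for illustration.
set -euo pipefail

npx live-server public/ &
server_pid=$!
trap 'kill "$server_pid"' EXIT

last=""
while true; do
  # Hash all source files; rebuild only when something changed.
  current=$(find src/ -type f -exec md5sum {} + | sort | md5sum)
  if [[ "$current" != "$last" ]]; then
    ./build.sh
    last="$current"
  fi
  sleep 1
done
```

Since live-server watches the directory it serves, the browser reloads on its own whenever `build.sh` rewrites the output, with no manual socket or hand-injected JavaScript.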
One of my favorite things to do is to automate as much as possible. So when I started writing blog posts, I thought it would be a good idea to run the code in my posts automatically. For example, I would write posts with code and visualizations and then run that code on each push to the repository via CI. I even made a package for it called PlutoStaticHTML.jl.
What PlutoStaticHTML.jl allows you to do is write your blog posts in Pluto notebooks (Pluto.jl is like Jupyter notebooks, but for the Julia language). Then, you can set up CI such that the code is executed each time you push to the repository and the output is embedded in the blog post.
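In CI, that boils down to roughly one Julia call per push. The sketch below is from memory, so the `posts/` directory and the exact `BuildOptions`/`build_notebooks` call are assumptions; check the PlutoStaticHTML.jl docs for the current API:

```bash
# Hypothetical CI step (sketch): evaluate all Pluto notebooks in `posts/` and
# write their output as HTML so it can be embedded in the blog.
# The exact PlutoStaticHTML.jl call is an assumption, not a verified API.
julia --project=. -e 'using PlutoStaticHTML; build_notebooks(BuildOptions("posts"))'
```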
A few months ago, I spent some time trying to predict how batteries would change over time. My aim was to estimate when electric cars would finally be as cheap as, or cheaper than, internal combustion engine cars. However, my approach in that blog post was extremely naive since I didn't know about learning curves. Simply put, learning curves are a real-world phenomenon where a good or service becomes consistently cheaper over time. As shown in the learning curve link, solar modules have gotten 99.6% cheaper since 1976. From $100 to about $0.30, that's a factor of more than 300, or roughly two and a half orders of magnitude!
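One common way to write such a learning curve down is Wright's law, where cost falls by a constant fraction with every doubling of cumulative production:

$$ C(x) = C_1 \, x^{-b}, \qquad \text{learning rate} = 1 - 2^{-b}, $$

where x is cumulative production, C_1 is the cost of the first unit, and b is an exponent fitted from historical data.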