I started programming with C a long time ago, and even now, every few months, I dream of going back to those roots. It was so simple. You wrote code, you knew roughly which instructions it translated to, and there you went!
Then I try actually going through the motions of writing a production-grade application in C and I realise why I left it behind all those years ago. There's just so much stuff one has to do on one's own, with no support from the computer. So many things that one has to get just right for it to work across edge cases and in the face of adversarial users.
If I had to pick up a low-level language today, it'd likely be Ada. Similar to C, but with much more help from the compiler with all sorts of things.
Don't forget Pascal is still alive.
From what I remember about Ada, it is basically Pascal for rockets.
And some call it Boomer Rust, if I recall.
When Ada was first announced, I rushed to read about it -- sounded good. But so far, never had access to it.
So, now, after a long time, Ada is starting to catch on???
When Ada was first announced, back then, my favorite language was PL/I, mostly on CP67/CMS, i.e., IBM's first effort at interactive computing with a virtual machine on an IBM 360 instruction set. Wrote a little code to illustrate digital Fourier calculations, digital filtering, and power spectral estimation (statistics from the book by Blackman and Tukey). Showed the work to a Navy guy at the JHU/APL and, thus, got "sole source" on a bid for some such software. Later wrote some more PL/I to have 'compatible' replacements for three of the routines in the IBM SSP (scientific subroutine package) -- converted 2 from O(n^2) to O(n log(n)) and the third got better numerical accuracy from some Ford and Fulkerson work. Then wrote some code for the first fleet scheduling at FedEx -- the BOD had been worried that the scheduling would be too difficult, some equity funding was at stake, and my code satisfied the BOD, opened the funding, and saved FedEx. Later wrote some code that saved a big part of IBM's AI software YES/L1. Gee, liked PL/I!
When I started on the FedEx code, was still at Georgetown (teaching computing in the business school and working in the computer center) and in my apartment. So, called the local IBM office and ordered the PL/I Reference, Program Guide, and Execution Logic manuals. Soon they arrived, for free, via a local IBM sales rep highly curious why someone would want those manuals -- sign of something big?
Now? Microsoft's .NET. On Windows, why not??
C was my first language and I quickly wrote my first console apps and a small game with Allegro. It feels incredibly simple in some aspects. I wouldn't want to go back, though. The build tools and dependency management feel outdated; somehow there is always a problem somewhere. Includes and the macro system feel crude. It's easy to invoke undefined behavior and only realize it later, because a different compiler version or flag now optimizes differently. Zig is my new C: it includes a C compiler, and I can just import C headers and use them without wrappers. Comptime is awesome. A build tool, dependency management and testing are included. Cross compilation is easy. It just looks like a modern version of C. If you can live with a language that is still in development, I would strongly suggest taking a look.
Otherwise I use Go if a GC is acceptable and I want a simple language or Rust if I really need performance and safety.
I fully understand that sentiment. For several years now, I have also felt the strong urge to develop something in pure C. My main language is C++, but I have noticed over and over again that I really enjoy using the old C libraries - the interfaces are just so simple and basic, there is no fluff. When I develop methods in pure C, I always enjoy that I can concentrate 100% on algorithmic aspects instead of architectural decisions which I only have to decide on because of the complexity of the language (C++, Rust). To me, C is so attractive because it is so powerful, yet so simple that you can hold all the language features in your head without difficulty.
I also like that C forces me to do stuff myself. It doesn't hide the magic and complexity. Also, my typical experience is that if you have to write your standard data structures on your own, you not only learn much more, but you also quickly see possible performance improvements for your specific use case that would otherwise have been hidden below several layers of library abstractions.
This has put me in a strange situation: everyone around me is always trying to use the latest feature of the newest C++ version, while I increasingly try to get rid of C++ features. A typical example I have encountered several times now is people using elaborate setups with std::string_view to avoid string copying, while exactly the same functionality could have been achieved with less code, using just a simple raw const char* pointer.
Try doing C with a garbage collector ... it's very liberating.
Do `#include <gc.h>` then just use `GC_malloc()` instead of `malloc()` and never free. And add `-lgc` to linking. It's already there on most systems these days, lots of things use it.
You can add some efficiency by `GC_free()` in cases where you're really really sure, but it's entirely optional, and adds a lot of danger. Using `GC_malloc_atomic()` also adds efficiency, especially for large objects, if you know for sure there will be no pointers in that object (e.g. a string, buffer, image etc).
There are weak pointers if you need them. And you can add finalizers for those rare cases where you need to close a file or network connection or something when an object is GCd, rather than knowing programmatically when to do it.
But simply using `GC_malloc()` instead of `malloc()` gets you a long long way.
You can also build Boehm GC as a fully transparent `malloc()` replacement, and it can replace `operator new()` in C++ too.
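A minimal sketch of what this looks like in practice (assuming libgc is installed; build with something like `cc demo.c -lgc`, where demo.c is just a placeholder name):

    #include <gc.h>     /* Boehm GC; link with -lgc */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        GC_INIT();  /* optional on many platforms, but harmless and portable */

        char *last = NULL;
        for (int i = 0; i < 100000; i++) {
            /* collected automatically once unreachable; no free() anywhere */
            char *s = GC_malloc(64);
            snprintf(s, 64, "object %d", i);

            /* pointer-free data: the collector won't scan its contents */
            unsigned char *buf = GC_malloc_atomic(4096);
            memset(buf, 0, 4096);
            last = s;
        }
        printf("%s\n", last);
        return 0;
    }

Everything allocated with `GC_malloc()` is reclaimed automatically once it becomes unreachable; the `GC_malloc_atomic()` buffer is skipped during pointer scanning, which is the efficiency win mentioned above.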
> Try doing C with a garbage collector ... it's very liberating.
> Do `#include <gc.h>` then just use `GC_malloc()` instead of `malloc()` and never free.
Even more liberating (and dangerous!): do not even malloc, just use variable-length arrays:
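A minimal sketch of that style, assuming C99 variable-length arrays and sizes small enough to live on the stack:

    #include <stdio.h>

    /* the array parameter is declared with its runtime length */
    double mean(int n, const double v[n])
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += v[i];
        return s / n;
    }

    int main(void)
    {
        int n = 1000;               /* known only at runtime */
        double samples[n];          /* VLA: no malloc, released when the scope ends */

        for (int i = 0; i < n; i++)
            samples[i] = i * 0.5;

        printf("%f\n", mean(n, samples));
        return 0;
    }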
This style forces you to alloc the memory at the outermost scope where it is visible, which is a nice thing in itself (even if you use malloc).

I think one of the nice things about C is that, since the language was not designed to abstract away things like the heap, it is really easy to replace manual memory management with a GC or any other approach to managing memory, because most APIs simply expect `malloc()`-style allocation when heap memory is needed.
I think the only other language that has a similar property is Zig.
Which GC is that you’re using in these examples?
I'm not OP but the most popular C GC is Boehm's: https://www.hboehm.info/gc/
About 16 years ago I started working with a tech company that used "C++ as C", meaning they used a C++ compiler but wrote pretty much everything in C, with the exception of using classes, though more like Python data classes, with no polymorphism or inheritance, only composition. Their classes were not there to hide things, but to encapsulate them. Over time, some C++ features were allowed, like lambdas, but in general we wrote data-classed C - and it screamed, it was so fast. We did all our own memory management, yes, using C-style mallocs, and knowing what all the memory was doing significantly aided our optimizations, as we aimed to be running with in-cache data and code as much as possible. The results were market leading, and the company's facial recognition continually lands in the top 5 algorithms in the annual NIST Face Recognition Vendor Test (FRVT).
Sounds like they know what they are doing. How is using C++ with only data classes different from using C with structs?
Namespaces are useful for wrapping disparate bits of C code, to get around namespace collisions during integration.
Slightly better ergonomics I suppose. Member functions versus function pointers come to mind, as do references vs pointers (so you get to use . instead of ->)
Yeah, slightly better ergonomics. Although we could, we simply did not use function pointers, we used member functions from the data class the data sat inside. We really tried to not focus on the language and tools, but to focus on the application's needs in the context of the problem it solves. Basically, treat the tech as a means to an end, not as a goal in itself.
I completely agree with this sentiment. That's why I wrote Datoviz [1] almost entirely in C. I use C++ only when necessary, such as when relying on a C++ dependency or working with slightly more complex data structures. But I love C’s simplicity. Without OOP, architectural decisions become straightforward: what data should go in my structs, and what functions do I need? That’s it.
The most inconvenient aspect for me is manual memory management, but it’s not too bad as long as you’re not dealing with text or complex data structures.
[1] https://datoviz.org/
> A typical example I have encountered several times now is people using elaborate setups with std::string_view to avoid string copying, while exactly the same functionality could have been achieved with less code, using just a simple raw const char* pointer.
C++ can avoid string copies by passing `const string&` instead of by value. Presumably you're also passing around a subset of the string, and you're doing bounds and null checks, e.g.
string_view is just a char* + len, which is what you should be passing around anyway.

Funnily enough, the problem with string_view is actually C APIs, and this problem exists in C too. Here's a perfect example (I'm using fopen, but pretty much every C API has this problem):

    FILE* open_file_from_substr(const char* start, int len)
    {
        (void)len;                /* fopen has no way to take a length... */
        return fopen(start, "r"); /* ...it reads up to the next NUL, however far away that is */
    }

    void open_files(void)
    {
        const char* buf = "file1.txt file2.txt file3.txt";
        for (int i = 0; i < 30; i += 10) // my math might be off here, apologies
        {
            open_file_from_substr(buf + i, 9); // nope: the substring is not NUL-terminated
        }
    }
> When I develop methods in pure C, I always enjoy that I can concentrate 100% on algorithmic aspects instead of architectural decisions which I only have to decide on because of the complexity of the language

I agree this is true when you develop _methods_, but I think this falls apart when you design programs. I find that you spend as much time thinking about memory management and pointer safety as you do algorithmic aspects, and not in a good way. Meanwhile, with C++, Go and Rust, I think about lifetimes, ownership and data flow.
Variety is good. I got so used to working in pure C and older C++ that for a personal project I just started writing in C, until I realised that I don't have to consider other people and compatibility, so I had a lot of fun trying new things.
Despite what some people religiously think about programming languages, imo C was so successful because it is practical.
Yes it is unsafe and you can do absurd things. But it also doesn't get in the way of just doing what you want to do.
I don't think C was successful; it still is! What other language from the 70s is still among the top 5 languages?
https://www.tiobe.com/tiobe-index/
No, it's because of Unix and AT&T monopoly.
How was AT&T’s monopoly a driver? It’s not like they forced anyone to use UNIX.
Sounds a bit like Perl, but at a lower level?
You can certainly do entirely absurd things in Perl. But it is a lot easier / safer to work with. You get / can get a wealth of information when you do the wrong thing in Perl.
With C, a segmentation fault is not always easy to pinpoint.
However, the tooling for C is better: with some of the IDEs out there you can set breakpoints, walk through the code in a debugger, and spot more errors at compile time.
There is a debugger included with Perl, but after trying to use it a few times I have given up on it.
Give me C and Visual Studio when I need debugging.
On the positive side, shooting yourself in the foot with C is a common occurrence.
I have never had a segmentation fault in Perl. Nor have I had any problems managing the memory, the garbage collector appears to work well. (at least for my needs)
Eh, segfaults are like the easiest error to debug; they almost always tell you exactly where the problem is.
Sounds a bit like JavaScript, but at a lower level?
I wouldn’t compare them, C is very simple.
Here's what KC3 code looks like (taken from [1]):

    def route = fn (request) {
      if (request.method == GET ||
          request.method == HEAD) do
        locale = "en"
        slash = if Str.ends_with?(request.url, "/") do "" else "/" end
        path_html = "./pages#{request.url}#{slash}index.#{locale}.html"
        if File.exists?(path_html) do
          show_html(path_html, request.url)
        else
          path_md = "./pages#{request.url}#{slash}index.#{locale}.md"
          if File.exists?(path_md) do
            show_md(path_md, request.url)
          else
            path_md = "./pages#{request.url}.#{locale}.md"
            if File.exists?(path_md) do
              show_md(path_md, request.url)
            end
          end
        end
      end
    }

[1] https://git.kmx.io/kc3-lang/kc3/_tree/master/httpd/page/app/...

Going from mid-90s assembly to full stack dev/sec/ops, getting back to just a simple Borland editor with C or assembly code sounds like a lovely dream.
Your brain works a certain way, but you're forced to evolve into the nightmare half-done complex stacks we run these days, and it's just not the same job any more.
This reads like a cautionary tale about getting nerdsniped, without a happy ending.
C, or more precisely a constrained C++, is my go-to language for side projects.
Just pick the right projects and the language shines.
I sometimes write C recreationally. The real problem I have with it is that it's overly laborious for the boring parts (e.g. spelling out inductive datatypes). If you imagine that a large amount of writing a compiler (or similar) in C amounts to juggling tagged unions (allocating, pattern matching over, etc.), it's very tiring to write the same boilerplate again and again. I've considered writing a generator to alleviate much of the tedium, but haven't bothered to do it yet. I've also considered developing C projects by appealing to an embeddable language for prototyping (like Python, Lua, Scheme, etc.), and then committing the implementation to C after I'm content with it (otherwise, the burden of implementation is simply too high).
It's difficult because I do believe there's an aesthetic appeal in doing certain one-off projects in C: compiled size, speed of compilation, the sense of accomplishment, etc. but a lot of it is just tedious grunt work.
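For readers who haven't written this kind of C, a minimal sketch of the tagged-union boilerplate being described, using a hypothetical `Expr` type:

    #include <stdio.h>
    #include <stdlib.h>

    typedef enum { EXPR_NUM, EXPR_ADD } ExprKind;

    /* one of these blocks per variant, for every inductive type you define */
    typedef struct Expr {
        ExprKind kind;
        union {
            double num;
            struct { struct Expr *lhs, *rhs; } add;
        } as;
    } Expr;

    static Expr *expr_num(double v) {
        Expr *e = malloc(sizeof *e);
        e->kind = EXPR_NUM;
        e->as.num = v;
        return e;
    }

    static Expr *expr_add(Expr *lhs, Expr *rhs) {
        Expr *e = malloc(sizeof *e);
        e->kind = EXPR_ADD;
        e->as.add.lhs = lhs;
        e->as.add.rhs = rhs;
        return e;
    }

    /* "pattern matching" is a switch repeated in every traversal */
    static double eval(const Expr *e) {
        switch (e->kind) {
        case EXPR_NUM: return e->as.num;
        case EXPR_ADD: return eval(e->as.add.lhs) + eval(e->as.add.rhs);
        }
        return 0.0;
    }

    int main(void) {
        Expr *e = expr_add(expr_num(1.0), expr_num(2.0));
        printf("%f\n", eval(e)); /* 3.000000 */
        return 0;
    }

Every new variant touches the enum, the union, a constructor and every switch over the type, which is exactly the repetition a generator (or a higher-level prototyping language) would remove.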
As many people have already said, for starting a new project Rust beats C in every way
Rust is not free of trade offs and you're not helping the cause the way you think you are.
Just a few off the top:
- Rust is a much more complex language than C
- Rust has a much, much slower compiler than pretty much any language out there
- Rust takes most people far longer to "feel" productive
- Rust applications are sometimes (often?) slower than comparable C applications
- Rust applications are sometimes (often?) larger than comparable C applications
You may not value these things, or you may value other things more.
That's completely fine, but please don't pretend as if Rust makes zero trade offs in exchange for the safety that people seem to value so much.
> helping the cause
Rust evangelism is probably the worst part of Rust. Shallow comments stating Rust’s superiority read to me like somebody who wants to tell me about Jesus.
It's not unique to Rust; C/C++ devs probably just aren't used to it, since there hasn't been anything major new for decades.
If you already dislike this, I ask you to read C-evangelism with respect to the recent Linux drama about Rust in Linux.
Jesus wasn't written in Rust? Sounds like a recipe for UB if you ask me.
Not to mention, modern CPUs have essentially been designed to make C code run as fast as possible.
That's fair, but to me what drags C and C++ really down for me is the difficulty in building them. As I get older I just want to write the code and not mess with makefiles or CMake. I don't want starting a new project to be a "commitment" that requires me to sit down for two hours.
For me Rust isn't really competing against unchecked C. It's competing against Java and boy does the JVM suck outside of server deployments. C gets disqualified from the beginning, so what you're complaining about falls on deaf ears.
I'm personally suffering the consequences of "fast" C code every day. There are days where 30 minutes of my time are wasted on waiting for antivirus software. Things that ought to take 2 seconds take 2 minutes. What's crazy is that in a world filled with C programs, you can't say with a good conscience that antivirus software is unnecessary.
> Rust is a much more complex language than C
Feature wise, yes. C forces you to keep a lot of irreducible complexity in your head.
> Rust has a much, much slower compiler than pretty much any language out there
True. But it doesn't matter much in my opinion. A decent PC should be able to grind through any Rust project in a few seconds.
> Rust applications are sometimes
Sometimes is a weasel word. C is sometimes slower than Java.
> Rust takes most people far longer to "feel" productive
C takes me more time to feel productive. I have to write code, then unit test, then property tests, then run valgrind, check ubsan is on. Make more tests. Do property testing, then fuzz testing.
Or I can write same stuff in Rust and run tests. Run miri and bigger test suite if I'm using unsafe. Maybe fuzz test.
> C takes me more time to feel productive. I have to write code, then unit test, then property tests, then run valgrind, check ubsan is on. Make more tests. Do property testing, then fuzz testing.
So … make && make check ?
"How to install and use "make" in Windows?"
https://stackoverflow.com/questions/32127524/how-to-install-...
Real projects get into the millions of lines of code; Rust will not scale to compile that quickly.
Not quickly, no. But neither does C++ (how long does it take to compile Clang?) and people manage fine.
Faster would obviously be better, but it's not big enough of a deal to cancel out all the advantages compared to C.
I remember a project that used boost for very few things, but it included a single boost header in almost every file. That one boost header absolutely inflated the build times to insane levels.
Good for you. Like the grandparent commenter said, for others these tradeoffs might be important. E.g.:
> I am disappointed with how poorly Rust's build scales, even with the incremental test-utf-8 benchmark which shouldn't be affected that much by adding unrelated files. (...)
> I decided to not port the rest of quick-lint-js to Rust. But... if build times improve significantly, I will change my mind!
https://quick-lint-js.com/blog/cpp-vs-rust-build-times/
> Good for you.
> https://quick-lint-js.com/blog/cpp-vs-rust-build-times/
Look you're picking a memory unsafe language versus a safe one. Whatever meager gains you save on compilation times (and the link shows the difference is meager if you aren't on a MacOS, which I'm not) will be obliterated by losses in figuring out which UB nasal demon was accidentally released.
This is like that argument that dynamic types save time, because you can catch errors in tests. But then you have to write more tests to compensate, so you lose time overall.
The best feature of C is the inconvenience of managing dependencies. This encourages a healthy mistrust of third-party code. Rust is unfortunately bundled with an excellent package manager, so it's already well on its way to NPM-style dependency hell.
completely not!
(And yes, I was considering if I should shout in capslock ;) )
I have seen so many fresh starts in Rust that went great during week 1 and 2 and then they collided with the lifetime annotations and then things very quickly got very messy. Let's store a texture pointer created from an OpenGL context based on in-memory data into a HashMap...
impl<'tex,'gl,'data,'key> GlyphCache<'a> {
Yay? And then your hashmap .or_insert_with fails due to lifetime checks so you need a match on the hashmap entry and now you're doing the key search twice and performance is significantly worse than in C.
Or you need to add a library. In C that's #include and a -l linker flag. In Rust, you now need to work through this:
https://doc.rust-lang.org/cargo/reference/manifest.html
to get a valid Cargo.toml. And make sure you don't name it cargo.toml, or stuff will randomly break.
Adding `foo = "*"` to Cargo.toml is as easy as adding `-l foo` to Makefile.
You don’t need to work through that, you can follow https://doc.rust-lang.org/cargo/reference/build-script-examp... and it shows you how.
Except for the complexity of the language.
At least in apparent complexity. See "Expert C Programming: Deep C Secrets": the complexity creeps up on you shockingly fast, because C pretends to be simple by leaving things undefined, but in real life things need some kind of behavior.
IMO these are the major downsides of Rust in descending order of importance:
- Project leadership being at the whims of the moderators
- Language complexity
- Openly embracing 3rd party libraries and ecosystems for pretty much anything
- Having to rely on esoteric design choices to wrestle the compiler into using specific optimizations
- The community embracing absurd design complexity, like implementing features via extension traits in code sections separated both from where the feature is going to be used and from where the structs and traits are defined
- A community of zealots
I think the upsides easily outweigh the downsides, but I really wish it'd resolve some of these issues...
I'll take good complexity over bad simplicity any day.
Rust makes explicit what the C standard says you can't ignore but leaves up to you rather than the compiler. Rust is a simpler and easier language than C in this sense.
You can ignore most of the complexity that's not inherent to the program you're trying to write.
The difference is C also lets you ignore the inherent complexity, and that's where bugs and vulnerabilities come from.
Rust has three major issues:
- compile times
- compile times
- compile times
Not a problem for small utilities, but once you start pulling dependencies... pain is felt.
Long compile times aren't a new issue for languages with advanced features. Before Rust, it was Haskell. And before Haskell, it was C++.
And implementation-wise, it probably has something to do with LLVM.
Compile time is also my top three major issues with C++, in a list that also includes memory safety.
I would use Mojo - you get the type and memory safety of Rust, the simplicity of Python and the performance of C/C++.
> simplicity of Python
Python isn’t simple, it’s a very complex language. And Mojo aims to be a superset of Python - if it’s simple, that’s only because it’s incomplete.
Not really. Rustup only ships a limited number of toolchains, with some misses that (for me) are real head-scratchers. i686-unknown-none, for example. Can't get it from rustup. I'm sure there's a way to roll your own toolchain, but Rust's docs might as well tell you to piss up a rope for how much they talk about that.
Why is this important? C is the lingua franca of digital infrastructure. Whether that's due to merit or inertia is left as an exercise for the reader. I sure hope your new project isn't meant to supplant that legacy infrastructure, 'cause if it needs to run on legacy hardware, Rust won't work.
This is an incredibly annoying constraint when you're starting a new project, and Rust won't let you because you can't target the platform you need to target. For example, I spent hours building a Rust async runtime for Zephyr, only to discover it can't run on half the platforms Zephyr supports because Rust doesn't ship support for those platforms.
That really depends what you want to do. All that security in Rust is only needed if there is a danger of hacks compromising the system.
The moment you start building something that's not exposed to the internet and hacking it has no implications, C beats it due to simplicity and speed of development.
Correctness is not just about security. And the threat environment to which a program may eventually be exposed is not always obvious up front.
Also, no: that's only true for some kinds of programs. Rust, C++, and Go all have a much easier ecosystem for things like data structures and more complex libraries that make writing many programs much easier than in C.
The only place I find C still useful over one of the other three is embedded, mostly because of the ecosystem, and rust is catching up there also.
(This is somewhat ironic, because I teach a class in C. It remains a useful language when you want someone to quickly see the relationship between the line of code they wrote and the resulting assembly, but it's also fraught - undefined behavior lurks in many places and adds a lot of pain. I will one day switch the class to rust, but I inherited the C version and it takes a while.)
> much easier ecosystem for things like data structures and more complex libraries that make writing many programs much easier than in C.
So many people have implemented those data structures, though, and they are available freely and openly; you can choose to your liking, e.g. ohash, uthash, or khash, and that is only for hash tables.
Those complex libraries are out there, too, for C, obviously.
The reason for why it is not in the standard library is obvious enough: there are many ways to implement those data structures, and there is no one size that fits all.
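As one concrete illustration, a minimal sketch with uthash (a single-header library; the `user` struct and its fields are made up for the example):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "uthash.h"   /* single header, typically vendored into the project */

    struct user {
        int id;                 /* key */
        char name[32];
        UT_hash_handle hh;      /* makes this struct hashable */
    };

    int main(void)
    {
        struct user *users = NULL;   /* the hash table is just a pointer */

        /* insert */
        struct user *u = malloc(sizeof *u);
        u->id = 42;
        strcpy(u->name, "alice");
        HASH_ADD_INT(users, id, u);

        /* look up */
        int key = 42;
        struct user *found = NULL;
        HASH_FIND_INT(users, &key, found);
        if (found)
            printf("%d -> %s\n", found->id, found->name);
        return 0;
    }

khash and ohash differ in the details but fill the same role, which is the "choose to your liking" point above.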
I followed the discussion about Rust in Linux.
One comment talked about not using a (faster) B-tree instead of an AVL tree in C, because of the complexity (and thus maintenance burden and risk of mistakes) it would add to the code.
They were happy to use a B-tree in Rust, though.
It also depends on what you want to get away from.
I don't disagree that Rust might technically be a better option for a new project, but it's still a fairly fast-moving language with an ecosystem that hasn't completely settled down. Many are increasingly turned off by fast-changing developer environments and ecosystems, and C provides you with a language and libraries that have already been around for decades and aren't likely to change much.
There are also so many programming concepts and ideas in Rust, which are all fine and useful in their own right, but they are a distraction if you don't need them. Some might say that you could just not use them, but they sneak up on you in third party libraries, code snippets, examples and suggestions from others.
Personally I find C a more cosy language, which is great for just enjoying programming for a bit.
C might beat Rust at simplicity and speed of development (I don't know, I never developed in Rust), but I remember why I stopped developing in C about 30 years ago: the hundreds of inevitably bug-ridden lines of C it took to build a CGI back then (malloc, free, strcpy, etc.) vs little more than string slicing and "string" . "concatenation" in Perl, and forget about everything else. That could have been Python (which I didn't know about), or the languages that were born in those years: Ruby and PHP. Even Java was simpler to write. Runtime speed was seldom a problem even in the 90s. C programs are fast to run, but they are not fast to develop.
> All that security in Rust is only needed if there is a danger of hacks compromising the system.
Rust's safety features help prevent a large class of bugs. Security issues are only one kind of bug.
> C beats it due to simplicity and speed of development
C being faster to develop than Rust is a ludicrous claim.
I've read through your website and thinking processes.
Your work is genius! I hope KC3 can be adopted widely, there is great potential.
The author's github profile: https://github.com/thodg
The way he writes about his work in this article, I think he's a true master. Very impressive to see people with such passion and skill.
Try zig, it is C with a bit of polish.
Why zig and not Rust? Just to throw the question out there :-)
504 Gateway Timeout
Archived at https://archive.is/zIZ8S
So this is a journey starting in ruby, going through an SICP phase, and then eventually conceding that it isn't viable. It kinda seems like C is just a personal compromise to maintain nerdiness rather than any specific performance need.
I think it's a pretty normal pattern I've seen (and been through) of learning-oriented development rather than thoughtful engineering.
But personally, AI coding has pushed me full circle back to ruby. Who wants to mentally interpret generated C code, which could have optimisations and could also have fancy-looking bugs? Why would anyone want to try disambiguating those when they could just read ruby like English?
> But personally, AI coding has pushed me full circle back to ruby.
This happened to me too. I’m using Python in a project right now purely because it’s easier for the AI to generate and easier for me to verify. AI coding saves me a lot of time, but the code is such low quality there’s no way I’d ever trust it to generate C.
> AI coding saves me a lot of time, but the code is such low quality
Given that low quality code is perhaps the biggest time-sink relating to our work, I'm struggling to reconcile these statements?
It depends on what you need the code for. If it's something mission critical, then using AI is likely going to take more time than it saves, but for an MVP or something where quality is less important than time to market, it's a great time saver.
Also there’s often a spectrum of importance even within a project, eg maybe some internal tools aren’t so important vs a user facing thing. Complexity also varies: AI is pretty good at simple CRUD endpoints, and it’s a lot faster than me at writing HTML/CSS UI’s (ie the layout and styling, without the logic).
If you can isolate the AI code to the parts that don't need to be high quality, and write the code that does yourself, it can be a big win. Or if you use AI for an MVP that will be incrementally replaced by higher-quality code if the MVP succeeds, it can be quite valuable since it allows you to test ideas quicker.
I personally find it to be a big win, even though I also spend a lot of time fighting the AI. But I wouldn’t want to build on top of AI code without cleaning it up myself.
There are also some tasks I’ve learned to just do myself: eg I do not let the AI decide my data model/database schema. Data is too important to leave it up to an AI to decide. Also outside of simple CRUD operations, it generates quite inefficient database querying so if it’s on a critical path, perhaps write the queries yourself.
The point is that much of the defensive programming you would have to do in C is unnecessary and automatic in Rust.
There's much more to defensive programming than avoiding double frees and overflows.
Yeah and Rust enables much more defensive programming than just avoiding double frees and overflows.
much != all
Maybe the moral here is that learning Lisp made him a better C programmer.
Could he have jumped right into C and had amazing results, if not for the journey of learning Lisp and changing how he thought about programming?
Maybe learning Lisp is how to learn to program. Then other languages become better by virtue of how someone structures the logic.
Rust