Saturday, April 25, 2020

Garbage Collectors

Garbage collection is a concept that, although it is fundamental to most modern programming languages, is rarely taught as such. The idea itself is very simple (and maybe obvious): if you're not using a resource, don't hog it; free it so someone else can use it. But at least in my experience, although I've heard the term since I first began studying computer science, I never really understood it or paid much attention to it. Even today, knowing what a garbage collector is and why it's needed, I still don't fully understand how one works, and I haven't seen more than the basics in my classes.
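To make that idea concrete for myself, here is a minimal sketch on the JVM (the function names and sizes are mine, purely for illustration): once nothing references a value anymore, the collector is free to reclaim its memory.

    ;; Minimal sketch: a value that is only reachable inside a function
    ;; becomes garbage as soon as the function returns.
    (defn allocate-and-drop []
      (let [big (vec (range 1000000))]  ; only reachable inside this let
        (count big)))                   ; after returning, `big` is unreachable

    (defn used-memory-mb []
      (let [rt (Runtime/getRuntime)]
        (/ (- (.totalMemory rt) (.freeMemory rt)) 1e6)))

    (allocate-and-drop)
    (System/gc)  ; only a hint; the JVM decides when to actually collect
    (println (used-memory-mb) "MB in use")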

With that in mind, I think Alexander Yakushev did a great job explaining how different algorithms go about collecting garbage, and, more importantly, he made me realize that garbage collectors are more than just processes running behind the scenes to make my life easier while coding. In his case, working at Grammarly, it's very important that memory is freed efficiently, since their plugin has to run non-stop while a user is typing. More generally, improving garbage collection algorithms can greatly improve performance on any device, and it's something every future (and current) computer scientist should be familiar with.
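To get a feel for what one of those algorithms actually does, I wrote a toy mark-and-sweep over a fake heap. This is a generic sketch of the idea, not any of the collectors from the talk: objects reachable from the roots get marked, and everything else is swept away.

    ;; Fake heap: object id -> ids it references.
    ;; :d and :e only reference each other, so they are garbage.
    (def heap {:a [:b], :b [:c], :c [], :d [:e], :e [:d]})

    (defn mark [heap roots]
      (loop [marked #{} work (vec roots)]
        (if (empty? work)
          marked
          (let [obj (peek work), work (pop work)]
            (if (marked obj)
              (recur marked work)
              (recur (conj marked obj) (into work (heap obj))))))))

    (defn sweep [heap marked]
      (select-keys heap (filter marked (keys heap))))

    (sweep heap (mark heap [:a]))
    ;; => {:a [:b], :b [:c], :c []} -- the unreachable :d/:e cycle is collected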

Yakushev also hit on a very important point that I've heard across every area of computer science I've explored: when it comes to software engineering, there is no silver bullet. It is not enough to know what your programs do; you need to know how they work, how they're made, and what their limitations are, in order to make an informed decision and choose (or create) the right garbage collector for your program. The same applies (or should apply) to everything else that goes into your program, because it's details like these that really separate a computer scientist from someone who just learned to program online.

Saturday, April 18, 2020

The Roots of Lisp

In the last two weeks, we listened to two different podcasts: one by Dick Gabriel and the other by Rich Hickey, talking about the wonders of Lisp and Clojure, respectively. Both podcasts mentioned that one of the main draws of Lisp and other functional languages is that they can interpret themselves, but I honestly hadn't quite grasped how that was done. I haven't read John McCarthy's original paper, so I don't know how much of the praise should go to him, but I think Paul Graham did a fantastic job explaining how seven primitive operators can form the basis of such a complete language.
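For reference, the seven primitives Graham builds everything from are quote, atom, eq, car, cdr, cons and cond. Below is my own rough mapping of them onto Clojure; the spellings differ, but the ideas carry over directly.

    (quote (a b c))      ; quote: return the expression unevaluated  => (a b c)
    (not (seq? 'a))      ; atom: true when the value is not a list   => true
    (= 'a 'a)            ; eq: equality of atoms                     => true
    (first '(a b c))     ; car: head of a list                       => a
    (rest  '(a b c))     ; cdr: tail of a list                       => (b c)
    (cons 'a '(b c))     ; cons: build a list from head and tail     => (a b c)
    (cond (= 1 2) :no    ; cond: the conditional everything
          :else   :yes)  ;       else is built from                  => :yes

Seeing how little is needed to bootstrap an evaluator for the whole language is what finally made the "Lisp can interpret itself" claim click for me.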

Graham also mentioned that he saw Lisp and similar languages becoming the main model used by programmers in "the future". That was written in 2002, which is a long time ago considering how young computing is. And today, functional languages have found a home in many different applications; with JavaScript being as popular as it is, more people are bound to discover functional programming and keep using it, at least once in a while. I think all of this speaks to the possibility of Lisp building on the popularity it has gained recently and becoming as mainstream as the other languages we know today.

I don't know whether this is exclusive to Lisp, but I think its ability to be so powerful while resting on such simple axioms will help its longevity: it means "legacy code" won't really be a problem, and it can make code easier to understand, which matters if we want to keep newcomers interested in using Lisp in their projects. With all of their features, functional languages stand a real chance of being used alongside procedural languages for big projects and applications, but I don't see them overtaking the languages most people work with in the near future.

Saturday, April 4, 2020

The Rise of Clojure

This podcast answered many of the doubts I had after listening to last week's podcast on Lisp. First of all, although I haven't done much in Clojure, so far it hasn't been a hard language to get a good grasp of, contrary to what Dick Gabriel said on the previous podcast. Perhaps it's because I've only been programming for a few years and haven't yet found a language I consider better than the rest, so adapting and changing my mindset wasn't as hard as he made it out to be. I think Rich Hickey is much closer to the mark on why Lisp hadn't taken off in popularity: it wasn't easy to use with everything else.

With Clojure, since it runs on the Java Virtual Machine, it's much easier for someone to just try it for a small part of what they're working on, and they're more likely to fall in love with how it works, or at least realize that functional programming has its uses and keep it in mind for future projects. I think Hickey did a very good job identifying the reasons Lisp wasn't as successful as some people believed it would be, and an even better job addressing them and making the language more accessible.
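As a rough illustration of what running on the JVM buys you (my own sketch, not an example from the podcast), Clojure can call ordinary Java classes directly, so dropping it into one corner of an existing Java codebase needs no glue code:

    ;; Plain Java classes used from idiomatic Clojure, no wrappers needed.
    (import '(java.time LocalDate)
            '(java.util UUID))

    (defn report-name []
      (str "report-" (LocalDate/now) "-" (UUID/randomUUID) ".txt"))

    (report-name)
    ;; => something like "report-2020-04-04-<random-uuid>.txt"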

Clojure was scary to get started with, because its lack of "structure" makes it difficult to read for someone who has never seen it. At least that's how I felt when I started this course, and I don't imagine I would have ever tried it out for a project on my own, since it's usually easier to stick with what you're familiar with. However, having learned the basics, I will definitely consider functional programming in my future endeavors, and I think the same applies to most people who have given any Lisp dialect a try, a number that will only get bigger as languages like Clojure keep popping up and make this way of thinking more mainstream.
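To show what I mean by that "lack of structure" (a made-up example of my own), here is the same small calculation written the way I was used to and the way Clojure expects it; everything becomes a prefix-notation list, which looks alien at first and then turns out to be pleasantly uniform.

    ;; Familiar infix version:  total = price * quantity * (1 + taxRate)
    ;; The Clojure version is just nested prefix forms:
    (defn total [price quantity tax-rate]
      (* price quantity (+ 1 tax-rate)))

    (total 10.0 3 0.16)  ; => roughly 34.8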