
Why I believe in Scala

Scala’s history dates back to 2003, with version 2.0 released in 2006. Fifteen years later, Scala 3 enters the stage.

I’ve been using Scala on and off since 2014, and have followed the language and ecosystem closely throughout. This post contains admittedly biased and subjective opinions about why I, personally, think Scala is one of the best general-purpose programming languages, with a bright future.

Functional programming (FP) has been on the rise in the last decade, as can be seen from the following shifts within the industry:

  • Imperative languages now adopt functional paradigms (e.g. Java’s functional interfaces or recent immutable record types)
  • There’s a community effort to bring pure functional libraries to mainstream languages, like fp-ts (Typescript), Arrow (Kotlin) and Vavr (Java)
  • UI libraries like React recommend encoding components as pure functions, with clever ways to manage state immutably (e.g. Redux)
  • There’s more demand for functional developers in Haskell or Clojure
  • New functional languages have been introduced – Roc, Flix, Unison, Nix
  • The FP community is increasingly vocal: there are many high-quality conferences, video tutorials and blogs spreading the word about FP.

Still, object-oriented and imperative programming remain the current mainstream. It’s what most of the current generation is used to reading and writing. I recommend watching Richard Feldman’s talk, which highlights how we ended up here.

So where does Scala fit into this current programming world? The pitch on Scala’s web landing page goes as follows…

Scala combines object-oriented and functional programming in one concise, high-level language. Scala’s static types help avoid bugs in complex applications, and its JVM and JavaScript runtimes let you build high-performance systems with easy access to huge ecosystems of libraries.

There are three metrics hidden in these two sentences – usually the most important factors when evaluating a programming language: productivity, quality and performance. Let’s break this down to see what each entails.


Productivity

Learning curve

How easy is it for newcomers to pick up the language and tools and be productive?

The complexity of the Scala language, due to its many features, comes at the price of a steep learning curve. This is not usually a problem for a skilled programmer learning the language in a systematic way. It is, however, when a junior programmer is thrown into the deep end and needs to get up to speed with a project quickly. Scala also has a history of being alien to newcomers, with cryptic operators and error messages, although I think this has improved thanks to raised awareness within the community. While you can still create custom operators, they are now used more carefully, in DSLs where they make sense.

Overall, I don’t put much weight on the steep-learning-curve metric. The short-term productivity dip is ultimately negligible compared to a developer’s long-term growth, knowledge and value.


Expressiveness and readability

How quickly can I write down my thought flow without getting sidetracked by how to express it? How quickly can I understand an existing code base? This is a more important question than the previous one, as code is written once but read a hundred times. It therefore deserves a further breakdown, as it includes code structure, familiarity, style consistency, transparency (no voodoo macro-generated runtime proxy magic), etc.

The Scala syntax itself is not too different from commonly used languages. It may look unfamiliar to Java developers, but the basics are actually very close to Typescript, Kotlin or Rust. What makes it expressive – without sacrificing readability – is that everything is an expression (as in most FP languages): “if”, “for”, “match” (switch on steroids) and even “try/catch” all return a value. On the other hand, I’m not a big fan of inferring types everywhere, especially in function signatures. Saving a few letters is not worth it in my opinion, as it greatly reduces readability, which is essential in code reviews without IDE hinting. Also, while common sequence methods (and other monadic structures) like filter, map and flatMap nicely encode intent (even more so with for-comprehension syntactic sugar), more complex reduce/fold calls are usually harder to comprehend than their imperative (loop-style) alternatives.
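A small sketch of what expression-orientation buys you – `if` and `match` return values directly, so no mutable temporaries are needed (the `describe` function is an illustrative example, not from any library):

```scala
// Everything is an expression: `if` and `match` yield values
// that can be bound straight into vals.
def describe(n: Int): String = {
  val sign = if (n > 0) "positive" else if (n < 0) "negative" else "zero"
  val parity = n % 2 match {
    case 0 => "even"
    case _ => "odd" // also covers negative remainders
  }
  s"$sign, $parity"
}
```

For example, `describe(4)` evaluates to `"positive, even"` without a single `var` in sight.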

Scala 3 has made a big step forward by removing the “implicit” keyword ambiguity that had haunted Scala developers for years. Now it’s clear what is an implicit parameter, what is a type class instance declaration (given), and what is its use site (using).
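To illustrate, here is a minimal, hypothetical `Show` type class (the names are mine, not from any library) showing `given` at the declaration site and `using` at the use site:

```scala
// A tiny type class: how to render a value of type A as a String.
trait Show[A] {
  def show(a: A): String
}

// Scala 3: an anonymous `given` declares the type class instance.
given Show[Int] = new Show[Int] {
  def show(a: Int): String = s"Int($a)"
}

// The use site asks for an instance with a `using` parameter.
def render[A](a: A)(using s: Show[A]): String = s.show(a)
```

Calling `render(42)` lets the compiler supply the `Show[Int]` instance automatically.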

Ecosystem and tooling

Can I rely on existing frameworks and libraries to abstract away the boilerplate, provide complex functionality via simple interfaces, and allow for extension when the available features don’t satisfy my use-case? What’s the quality of the available tools for automating repetitive tasks, providing assistance, quick documentation and code-structure analysis, navigating through code, automating refactoring, and supporting bug tracking, profiling, fast builds and quick feedback?

Scala projects tend to be composed of libraries rather than relying on all-encompassing frameworks like Spring (even though effect systems like ZIO and their companion libraries can have a similarly viral property, protruding from every corner of the program). The number of libraries is far lower than for e.g. NPM, but I have developed more confidence in their quality. The pitch from NPM’s landing page used to go something like: “Imagine what you can do with 800,000 packages”. But it didn’t say anything about how many of these are abandoned, buggy, insecure or lacking features. Scala can also interoperate with Java libraries – which usually works without issues, although it may not be as seamless as with Kotlin.

I’m an IntelliJ IDEA user. Even though Scala 3 support is not very usable yet, in general I find working with JVM languages a much better experience than e.g. Typescript. The “understanding of the program” (control flow and AST analysis) and thus navigation, refactoring and content assist, feels much more precise. Having a superior type system helps here too.

Scala build times are not the best, but its tools have made great progress. SBT has improved a lot from the times when it was the only build tool option and lacked proper documentation and tutorials. Now we have Mill, scala-cli, Scala Gradle Plugin, Bloop build server and more.


Quality

In other words, how do we avoid unwanted program behavior (bugs) and system failures (crashes)?

Static type system

I don’t want to debate static vs dynamic type systems, but I think it’s quite clear that static typing and compiler checks add greatly to software quality.

Scala’s type system is advanced, and it’s actually the core of the language’s complexity. Things like exhaustive pattern matching, algebraic data type encodings and, most importantly, null handling (especially in Scala 3) help to write correct and well-designed code from the very beginning of the development cycle. With type inference we also get some of the benefits of dynamic typing, as type declarations don’t bloat the codebase.
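As a small sketch of what this means in practice, here is an algebraic data type encoded as a sealed hierarchy (an illustrative example of mine); the compiler checks that pattern matches over it are exhaustive, so adding a new shape without handling it produces a warning:

```scala
// An algebraic data type: a closed set of alternatives.
sealed trait Shape
final case class Circle(radius: Double) extends Shape
final case class Rectangle(w: Double, h: Double) extends Shape

// The compiler verifies this match covers every Shape subtype.
def area(s: Shape): Double = s match {
  case Circle(r)       => math.Pi * r * r
  case Rectangle(w, h) => w * h
}
```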

I should also mention the metaprogramming capabilities of Scala (especially Scala 3). They have already spawned interesting projects like Quill for compile-time query generation. I predict we are yet to see more awesomeness once they are fully embraced by the community.
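As a minimal taste of these compile-time facilities (nothing to do with Quill itself – just a sketch of mine using the standard library), `inline` together with `scala.compiletime.constValue` lets the compiler materialize a literal type as a value:

```scala
import scala.compiletime.constValue

// Evaluated entirely at compile time: the literal type N becomes
// its runtime value; a non-literal type argument is a compile error,
// not a runtime failure.
inline def arity[N <: Int]: Int = constValue[N]

val three: Int = arity[3] // the compiler folds this to 3
```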

Software architecture

The important properties of a well-designed codebase are a reasonable structure (modularity) and compositionality of its elements. These lead to better separation of responsibilities, and thus more confidence when refactoring and avoiding regressions.

In the very essence of functional programming we have immutable data structures flowing through a sequence of functions which transform and combine the data. 

I categorize FP development into three levels:

  1. Using functions as first-class citizens. Functions can be assigned to variables, passed as arguments or returned from other functions (higher-order functions). Functions can be composed to form processing chains. This level is achievable using e.g. Java 8 features – lambdas, functional interfaces and streams – or some of Javascript’s array methods, or libraries like Lodash and Ramda. Using immutable data structures also fits at this level.
  2. Understanding algebraic abstractions and their application for purposes like error handling (not necessarily tied to terminology like Functor, Monad and Applicative). Apart from collections, we can include here the Try, Option, Either and Validation data types. We can also throw type classes into the bag, to achieve polymorphism as an alternative to subtyping.
  3. Writing pure functions using effect systems, and understanding how they represent computation and how they compose. Here comes the IO Monad and, with it, an exotic zoo of bifunctors, the tagless final pattern, monad transformers, Semigroups and Monoids. Category theory itself probably deserves a level of its own.

The best thing about Scala is that you can have it all. The first two levels are directly available in the language or standard library, while for the third there are mature libraries like Cats Effect, ZIO or Monix.
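The first two levels can be sketched with nothing but the standard library (the functions below are illustrative examples of mine):

```scala
// Level 1: functions as values, composed with andThen.
val trim: String => String       = _.trim
val parse: String => Option[Int] = s => s.toIntOption
val parseTrimmed: String => Option[Int] = trim.andThen(parse)

// Level 2: Either encodes failure in the type; a for-comprehension
// chains the steps and short-circuits on the first Left.
def divide(a: Int, b: Int): Either[String, Int] =
  if (b == 0) Left("division by zero") else Right(a / b)

val result: Either[String, Int] = for {
  x <- divide(10, 2) // Right(5)
  y <- divide(x, 0)  // Left short-circuits the rest
} yield y
```

Here `parseTrimmed("  42 ")` yields `Some(42)`, and `result` ends up as `Left("division by zero")` without any exception being thrown.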

While these traits help a lot with compositionality, they don’t say much about modularization. Stateless classes, however, are basically bags of functions with some implicit common scope (e.g. a service depending on another service, with a bunch of methods that share no mutable state). We can easily create modules of logically grouped functions that depend on other modules (e.g. classes with dependencies expressed as constructor parameters). We can also build hierarchies of modules with clearly defined visibility using access modifiers.
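A minimal sketch of such module wiring via plain constructors (the class and method names are illustrative only):

```scala
// A "module": logically grouped functions with no shared mutable state.
class UserRepo {
  private val users = Map(1 -> "Ada", 2 -> "Alan")
  def find(id: Int): Option[String] = users.get(id)
}

// Another module, declaring its dependency as a constructor parameter.
class GreetingService(repo: UserRepo) {
  def greet(id: Int): String =
    repo.find(id).fold("Hello, stranger")(name => s"Hello, $name")
}
```

Wiring is just `new GreetingService(new UserRepo)` – no container or framework required.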

An intriguing theme is that there are many ways to do dependency injection in Scala (the Cake pattern, the Reader Monad, plain constructors, macro assistance), which has proved confusing even for seasoned programmers.


Testing

Another trait of high-quality software is a reasonable automated test suite that proves the correctness of program elements beyond the type system, and catches regressions before they hit production. Code that is easy to test is generally well structured, too. This is another aspect where functional programming shines: given that a function is pure, all dependencies are expressed as function arguments and the function can be called in isolation. There is no need to mock, or to understand the internals of the function.

In practice, as described in the section above, functions are often methods with some implicit input (an environment) in the form of class dependencies. Even so, we should not need any special injectors or containers to mock that environment.
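A sketch of what this looks like, assuming a hypothetical pricing function whose environment (the tax rate) is an explicit argument – testing it is then just a call with plain values:

```scala
// The "environment" is an ordinary parameter, not an injected field.
final case class PricingConfig(taxRate: BigDecimal)

// Pure: same inputs always give the same output; nothing to mock.
def gross(net: BigDecimal, config: PricingConfig): BigDecimal =
  net * (BigDecimal(1) + config.taxRate)
```

A test is a one-liner: `gross(BigDecimal(100), PricingConfig(BigDecimal("0.2")))` equals `BigDecimal(120)`.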


As another line of defense against bugs, warts and inconsistent style, there are tools that highlight potential errors or bad practices, usually called linters. Scala has a number of them – e.g. Scalafix and Scapegoat – plus the Scalafmt formatter. So although nothing special, the ecosystem is at least not lacking in this area. It’s worth noting, though, that the Scala compiler already does a lot of the work that falls on the shoulders of linters in other languages such as Typescript.

I personally like the semantic (significant) indentation in Scala 3 – not only as a way to lure Pythonistas, but because it makes style guides simpler. Some rules come for free and others follow naturally, such as limiting the number of lines per function or block, since long blocks are difficult to parse visually.
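For comparison, here is the Scala 3 optional-braces style, where block structure is conveyed by indentation alone (a trivial example of mine):

```scala
// Scala 3 quiet syntax: no braces, structure follows indentation.
def classify(n: Int): String =
  if n > 0 then "positive"
  else if n < 0 then "negative"
  else "zero"
```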


Performance

Scala runs primarily on the JVM. The raw performance of bytecode execution, with JIT compilation and memory management, is therefore inherited and satisfies common use-cases.

Scala holds a strong position in high-throughput streaming systems, for example event-sourced data pipeline solutions. This is mainly thanks to libraries like FS2, Akka Streams and Monix, with support for back-pressure (or pull-based) processing, batching, etc. Serialization libraries for JSON and gRPC also have a very good reputation, and recent updates place them among the fastest.

Another specialty is the effect libraries, Cats Effect and ZIO, with their own fiber implementations (also known as green threads or coroutines) and schedulers. They are a great fit for highly concurrent applications, and come with an enormous productivity and quality boost in the form of available operators and integrations.

A garbage-collected memory-management model will probably never match the characteristics of manually managed programs, or of those written in Rust, which controls object lifetimes via its borrow checker. That approach, though, comes at the cost of high programming overhead, very often spent fighting the compiler and losing focus on the business problem.

Cold starts and memory-consumption issues can be mitigated by Graal VM native image or Scala native. We are yet to explore these options to find out how straightforward they are to implement.

People and the market

What’s not part of the Scala pitch is the people; the community – the most important ingredient. Take this with a pinch of salt as of course it won’t apply across the board, but my perception is that Scala attracts talented developers that are keen to explore, to try things outside of their comfort zone, to learn, grow and inspire. They are geeks. They want more quality and more productivity, and Scala is the place where they can flourish. They can learn new concepts and applications, which makes them better developers in general. They would not balk at devops or frontend work. They want to differentiate themselves.

Scala’s not usually a target for junior programmers as a first language. This is also visible in salary statistics, where Scala ranks highly. Are these high salaries justified? I don’t know.

At the time of writing, the market for Scala jobs looks better than ever. Of course it can’t compete with the mainstream Java, Node or web space, but the trend is optimistic. And from what I can tell, it’s no longer just Scala’s old bastion of Spark data-processing pipelines, but Akka, Cats and ZIO.

To be fair, the community’s history had its dark times. There are lessons to be learned about communication, empathy and people over business goals – even for the smartest among us.

Final thoughts

Is Scala a better Java? No. Kotlin proved it was able to fill that role very effectively.

Is Scala usable on all fronts? No. Web dev space hasn’t seen much adoption of Scala.js and it probably never will. For systems, you may want to go with Rust/Go.

Is Scala the language for FP purists? No. There’s Haskell, F# and newer contenders.

Is Scala the best educational language? No. That accolade goes to Python.

Is Scala the most popular and mainstream programming language? Certainly not.

What makes Scala special is a well-balanced blend of features that are aimed at productivity, quality and performance of software, and it’s full of people that care.

About the author

Michal Kaščák

Knowledge obsessive, life positive, full-scale software developer eager to learn every day while also motivating others.

