30th December 2018
As the year comes to a close, I think it's worth reflecting on some of the interesting developments from the last year and the impact they have had. Here are my top five, ranked in order:
Java 11 takes the top spot, as it's the natural upgrade path for the many teams that are currently working with Java 8. It is considered a major release as it will receive LTS support until at least September 2022 on the AdoptOpenJDK roadmap (non-LTS releases get six months of support).
It should be noted that Oracle have also changed their official support cycle so that they will only support each new JDK for six months (this includes Java 11). The intention seems to be for initiatives such as AdoptOpenJDK to become the main source of support instead.
I personally see this as quite a positive step as we should hopefully see the community move away from the proprietary Oracle JDK and onto the open-source OpenJDK. At this point, OpenJDK is essentially developed in parity with Oracle JDK so there should be no technical difference (in most situations).
Undoubtedly, I would still expect to see an increase in enterprises spending money on Oracle support fees. This should be unnecessary most of the time, but enterprise teams are incredibly risk-averse and will feel there is some level of 'safety' afforded from being on the Oracle JDK (shrug).
For more information, check out the comprehensive Java Is Still Free post that covers this topic in more depth.
One of the headline changes inherited from Java 9 is the new, controversial module system (the reason that Java 9's release was significantly delayed).
This was controversial as it turned out that getting various JEP stakeholders to agree on the final implementation was... difficult. Despite the controversy, I think it's positive that we now have a module system that is overall simpler than the abstraction behemoth that is OSGi.
Something worth considering is that the module system can be used with the new `jlink` tool to create minimal runtimes that are much smaller than bundling the complete JDK. This is awesome, as we can potentially save significant bandwidth and storage space when deploying applications to production environments.
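As a rough sketch (the module name, main class, and paths here are hypothetical), building a minimal runtime with `jlink` looks something like this:

```shell
# Compile a modularised application (module name and paths are hypothetical).
javac -d out $(find src -name '*.java')

# Build a minimal runtime image containing only the modules we actually use.
jlink \
  --module-path out:$JAVA_HOME/jmods \
  --add-modules com.example.app \
  --launcher app=com.example.app/com.example.Main \
  --output build/runtime

# The resulting image is a stripped-down, self-contained runtime.
build/runtime/bin/app
```

The image only contains the JDK modules our application declares, which is where the size savings come from.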
Local variable type inference, i.e. the `var` keyword. This was also fairly controversial for a lot of people, for different reasons. I personally view this as a great step towards modernising Java and moving away from the relentless verbosity that it is known for. I only wish that they had gone further and added `val` (to declare an inferred final variable) as well... Can't have everything, I suppose!
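To illustrate (a small sketch of my own, not from the feature docs), `var` infers the type from the initialiser, and `final var` is the nearest substitute for the hypothetical `val`:

```java
import java.util.HashMap;
import java.util.List;

public class VarDemo {
    public static void main(String[] args) {
        var message = "inferred as String";          // String
        var numbers = List.of(1, 2, 3);              // List<Integer>
        var lookup = new HashMap<String, Integer>(); // HashMap<String, Integer>
        lookup.put("answer", 42);

        // There is no `val` in Java; `final var` is the closest equivalent.
        final var answer = lookup.get("answer");
        System.out.println(message + " " + numbers + " " + answer);
    }
}
```

Note that `var` only works for local variables with an initialiser; fields and method signatures still need explicit types.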
The new Hooks proposal from the React team caused some heated discussion (resulting in a mammoth 1300+ comment RFC) back in October and edges its way into second place.
Hooks are first and foremost a way to enable code re-use in a way that was not previously possible in React. This was motivated by the current status quo: re-using stateful logic between components requires wrapper patterns such as render props and higher-order components (HOCs), and lifecycle logic cannot easily be extracted from class components at all.
These limitations currently mean that re-using code related to state and lifecycles is not simple. Hooks should allow developers to refactor state and lifecycles (called 'effects' in Hooks) out of components and into custom hooks which can be imported and applied on-demand. Certain types of render props/HOCs may become unnecessary, resulting in a cleaner component tree.
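As an illustrative sketch (the hook name is my own invention, and this assumes the Hooks preview API), state and a subscription effect can be pulled out of a component into a custom hook:

```javascript
import { useState, useEffect } from 'react';

// Hypothetical custom hook: extracts the state + effect (subscription)
// logic out of a component so any component can reuse it with one call.
function useWindowWidth() {
  const [width, setWidth] = useState(window.innerWidth);

  useEffect(() => {
    const handleResize = () => setWidth(window.innerWidth);
    window.addEventListener('resize', handleResize);
    // The returned cleanup function runs when the component unmounts.
    return () => window.removeEventListener('resize', handleResize);
  }, []);

  return width;
}
```

Any component can then call `useWindowWidth()` to read the current width, with no render prop or HOC wrapper in the component tree.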
On the other hand, Hooks have also been somewhat controversial. The main issues generally revolve around their almost 'magical' implementation, somewhat awkward feeling API and introduction of additional complexity to React.
Whilst I agree with some of this sentiment, I am mostly looking forward to seeing where Hooks takes React. From the start, it should enable us to write terser code, but it could possibly unlock new abstractions and libraries that we can further use to our advantage.
Currently, Hooks are only available as a preview (in the 16.7 alpha releases), but the original RFC has now been accepted and we should see a lot of new developments soon. Expect the official release of Hooks sometime in 2019.
Back around March, the initial Go modules proposal was unveiled and it was certainly an interesting one. Dependency management is still an ongoing problem throughout the industry so it was cool to see the Go community try to push the envelope with a pretty innovative solution.
For a bit of backstory, Go has had a bad start with dependency management. From its inception, the native tooling would place all dependencies in a single global directory, defined by the `GOPATH` environment variable. Unfortunately, whilst this approach seemed to fit Google's monorepo ecosystem quite well, it was actually terrible for almost anyone else in the community. You could very easily run into dependency clashes between multiple projects, and consequently this spawned a variety of user-land solutions over the years.
Some solutions included Dep and Glide, both of which mimicked package managers in the rest of the industry (Maven, NPM, Composer, etc). At one point Dep could have been deemed sufficient for the community's needs, however, the Go team decided to take it further with the modules proposal.
The first big change is only allowing minimum version requirements in `go.mod` files (a module's dependency requirements file).
This massively simplifies the dependency resolution algorithm (the proposal calls it minimal version selection) and avoids 'dependency hell' situations where the algorithm cannot resolve an optimum configuration: with only minimum requirements, a valid configuration always exists.
Dependencies themselves will most likely have their own `go.mod` files that also state their own minimum versions; however, the top-level `go.mod` (our project/module) will have overriding control over the required versions and exclusions.
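A minimal `go.mod` sketch (the module paths here are hypothetical) stating minimum versions might look like:

```
module github.com/ntsim/myapp

require (
    // Minimum versions, not exact pins: if another dependency requires
    // github.com/some/lib at v1.3.0, minimal version selection picks v1.3.0
    // (the highest of the stated minimums), never something newer.
    github.com/some/lib v1.2.0
    github.com/other/lib/v2 v2.0.1
)

// The top-level module can also exclude known-bad versions outright.
exclude github.com/some/lib v1.4.0
```

Because the chosen version is fully determined by these minimums, builds are reproducible without a separate lock file resolution step.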
The second big change is including versions in import declarations, i.e. semantic import versioning.
This forces developers to specify what major version of a dependency they are using in the code itself. For example:
```go
import (
    moduleV1 "github.com/ntsim/package/module"    // Imports v1
    moduleV2 "github.com/ntsim/package/module/v2" // Imports v2
)
```
The argument is that a major release should be considered (rightly) as an entirely new API. Consequently, the developer should explicitly declare what major version they are using in the code (and not the package manager).
Initially I was a little put off by the longer import name; however, on balance, I think the benefits outweigh the extra verbosity. This alone could potentially be a major win for situations where we need to maintain backwards compatibility in our code.
Unfortunately, it does place the onus on the developer to upgrade version numbers by hand (or grep) across the entire codebase, but hopefully this shouldn't be too painful.
Personally I think there are some great ideas here and it will be very interesting to see how things play out. I would recommend that you check out the full proposal for yourself as it makes for an interesting read (as far as dependency management goes). Ongoing details about the actual implementation can also be found here.
Hopefully we will see Go modules make it to general availability (expected in Go 1.13) in 2019!
The release of Kotlin 1.3 takes fourth place as it marks the long awaited (pun intended) stable release of coroutines. Coroutines introduce a new model for asynchronous, non-blocking programming which should be simpler to work with than built-in primitives like threads or futures, and user-land reactive libraries such as Rx or Reactor.
Reactive libraries are still valuable for working with large complex streams of data (in a reactive manner), however, I think coroutines will replace their usage in a lot of situations.
As well as being simpler to work with, coroutines are considered to be lightweight threads that are essentially scheduled by the user (and not the OS). This means that they should theoretically be advantageous in terms of performance (when used correctly).
I'm personally not too familiar with coroutines currently, but I'm intending to get started with them in 2019!
Kotlin 1.3 itself adds a handful of other new features, but nothing too major besides coroutines. The most notable is the introduction of Contracts, which allow for enhanced compiler analysis. These should be particularly useful for providing even better type inference in places such as smart-casts.
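For example (a hypothetical helper of my own; note that contracts are still experimental in 1.3), a function can tell the compiler what a successful return implies:

```kotlin
import kotlin.contracts.ExperimentalContracts
import kotlin.contracts.contract

// The contract states that if this function returns normally,
// `value` must have been non-null.
@ExperimentalContracts
fun assertNotNull(value: String?) {
    contract { returns() implies (value != null) }
    if (value == null) throw IllegalArgumentException("Expected a value")
}

@ExperimentalContracts
fun printLength(input: String?) {
    assertNotNull(input)
    println(input.length) // smart-cast to String thanks to the contract
}
```

Without the contract, the compiler could not know that `input` is non-null after the `assertNotNull` call, and `input.length` would not compile.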
In last place, I have made some room for the `event-stream` incident from the end of November. Whilst it was definitely not a positive event, it is part of a wider issue that is interesting to discuss.
This incident involved a relatively popular project called `event-stream` being exploited by a rogue developer to act as a trojan for malicious code. The code in question was eventually found to target applications that store Bitcoin wallets and attempt to extract their details discreetly.

By itself, this might not have been too bad, but unfortunately `event-stream` was transitively required by a bunch of larger projects, making it inevitable that it would eventually be consumed by its intended target. This target turned out to be Copay, which confirmed that the malicious code was released with several of its versions.
The exploit was first documented in this GitHub issue, but it had flown under the radar for a few months beforehand. It eventually transpired that the original maintainer had, in a lapse of judgement, promoted the rogue developer to a maintainer of the project.
This incident speaks to the reputation that NPM has built for itself over the years. Whilst NPM is an incredible ecosystem with a staggering buffet of modules, it is increasingly becoming apparent that this has potentially come at the cost of security and stability - remember `left-pad` back in 2016?
I doubt this will be the last time we see such an attack, and it may not even be the first! Our dependencies may be hiding even more malicious code beneath the surface, and it is essentially impossible to audit every single module required by larger projects.
So what can we do about it?
Personally, I would like to see the following things:
`event-stream` should be used as a textbook example of what happens when you don't.