There is a stereotype that low-code platforms create software that is slow and limited in functionality. As with any stereotype, there is some truth to that, but not much.

To find out what the reality is, we first need to look at how high-performance systems are built with general-purpose programming languages. Let’s get to the bottom of that and then set it against what low-code has to offer. Is custom programming really such a great advantage over low-code solutions? Or is it a drawback?

What makes software “slow”?

What does “slow” really mean in terms of a typical enterprise application? By “typical” I mean a 3-layer classic: Web UI + service backend + databases. “Slow” can come from:

  • Database queries far from optimal – missing indexes, outdated statistics, bad join strategies, etc.
  • Unnecessary queries – missing caches or “chatty” algorithms that assume fetching more data from the database or external services is free and instantaneous (see the sketch after this list).
  • Wrong tools for the job – such as using ORM layers and general-purpose languages for bulk operations on TB-sized databases (loading, reporting, integrating, etc.). Although it is both doable and easy, it’s definitely not the way to go.
  • Reckless algorithms – I’ve seen “solutions” with loops nested inside loops and unnecessary recursion for trivial problems, where a simple and efficient algorithm could have been used instead. In other words, they worked on small test data, but with larger test data the processing time quickly grew to horrifying figures.
  • The list goes on…
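To make the “chatty algorithm” point concrete, here is a minimal, self-contained Python sketch using SQLite (the tables, names and volumes are invented for illustration). The first version fires one extra query per row – the classic N+1 pattern – while the second lets the database do the join and the aggregation in a single round trip:

```python
import sqlite3

# Tiny in-memory demo schema: customers and their orders (illustrative only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders   (id INTEGER PRIMARY KEY, customer_id INTEGER, total INTEGER);
""")
conn.executemany("INSERT INTO customer VALUES (?, ?)", [(i, f"c{i}") for i in range(1_000)])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [(i, i % 1_000, 10) for i in range(10_000)])

def totals_chatty():
    """The N+1 way: one extra query per order row."""
    totals = {}
    for _order_id, customer_id, total in conn.execute("SELECT id, customer_id, total FROM orders"):
        name = conn.execute("SELECT name FROM customer WHERE id = ?", (customer_id,)).fetchone()[0]
        totals[name] = totals.get(name, 0) + total
    return totals

def totals_batched():
    """The set-based way: one query, the database joins and aggregates."""
    return dict(conn.execute("""
        SELECT c.name, SUM(o.total)
        FROM orders o JOIN customer c ON c.id = o.customer_id
        GROUP BY c.name
    """))

assert totals_chatty() == totals_batched()  # same result, 10,001 queries vs 1
```

On an in-memory database the overhead is small; add per-query network latency and the chatty version pays for roughly ten thousand round trips instead of one.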

Where does high performance come from?

Certainly, the best way to fix performance problems is to avoid them from the start. To do so, developers need to know their tools and plan for performance as well as scalability. It takes depth and breadth of knowledge across different technologies to make a conscious decision about which of them to use and how. Furthermore, the key to performance often lies in knowing how modern computer hardware really works: the ability to leverage multiple CPU cores and to avoid locking and unnecessary memory allocations, so that context switches are kept to a minimum and CPU cache hits to a maximum.
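As a toy illustration of that last point, here is a short Python sketch (the workload and numbers are made up): the work is partitioned up front, each worker keeps its own partial result, and the only synchronization is the final merge – no locks, no shared mutable state. The same idea carries over to any language or runtime.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Each worker reduces its own chunk; no shared state, so no locks."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the input into one contiguous chunk per worker to keep
    # memory access local and avoid contention on shared structures.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # The only synchronization point is the final merge of partial results.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1_000_000))))
```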

The second component of success is extensive performance testing. I’ve seen three main obstacles here:

  • lack of good mocks of external systems,
  • lack of large and complex test data,
  • lack of a server environment at least vaguely mimicking production.

To come up with mocks, developers need to know the variety of existing testing tools and either have access to them or create their own. Collecting or generating the test data and designing the tests also takes significant time. In addition, creating the testing environment requires money to buy the hardware or lease it in the cloud – after all, optimizing for a small, underpowered VM can do more harm than good. These reasons often prevent teams from doing performance tests before going into production. As a result, sooner or later performance issues pop up in production, and the developers are forced to rework the entire system’s architecture ASAP.
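The test-data part, at least, usually boils down to scripts like the sketch below (a Python example with invented field names and volumes): synthetic, but internally consistent, and large enough to expose the problems that a ten-row sample never will.

```python
import csv
import random
import uuid

def generate_orders(path, customers=100_000, orders_per_customer=20):
    """Write a synthetic but internally consistent data set for load tests."""
    random.seed(42)  # reproducible data makes before/after comparisons meaningful
    customer_ids = [str(uuid.uuid4()) for _ in range(customers)]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["order_id", "customer_id", "amount", "status"])
        for customer_id in customer_ids:
            for _ in range(orders_per_customer):
                writer.writerow([
                    str(uuid.uuid4()),
                    customer_id,                       # keeps referential integrity
                    round(random.uniform(5, 5000), 2),
                    random.choice(["NEW", "PAID", "SHIPPED", "CANCELLED"]),
                ])

generate_orders("orders.csv")  # two million related rows; scale the parameters up as needed
```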

That, of course, can happen even with comprehensive planning and testing. Whatever your performance problems may be, the main step in fixing things is identifying the root cause. If the app is not carved in stone but keeps changing along with the business, the ability to react quickly becomes even more important. In real life, though, reacting quickly might not be easy, for reasons ranging from developers lacking profiling experience in different technologies to the need to set up dedicated monitoring software or schedule rescue meetings with DB administrators to gather query statistics and access plans.
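When the bottleneck sits on the application side, a profiler is usually the fastest route to the root cause. A minimal Python example with the standard cProfile module might look like this (handle_request is just a stand-in for whatever code path you suspect):

```python
import cProfile
import io
import pstats

def handle_request():
    # Stand-in for the suspected code path: a service call, a report, a batch step, etc.
    return sum(i * i for i in range(500_000))

profiler = cProfile.Profile()
profiler.enable()
for _ in range(20):
    handle_request()
profiler.disable()

# Print the ten most expensive calls by cumulative time -- usually enough to tell
# whether the time goes into your own code, the ORM, or waiting on the network.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

Database-side bottlenecks are a different story: there you still need query statistics and execution plans from the database itself, which is exactly where those rescue meetings with the DB administrators come in.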

What does low-code have to offer?

As you can see, in the world of general-purpose programming, performance is a tough challenge. It requires time and money, as well as expert developers with a ton of knowledge, who are harder and harder to come by nowadays.

But how can a low-code platform address performance issues any better? Obviously, the right tooling can be of great assistance when trying to solve high-volume, complex problems.

What if our design platform had some scale-out capabilities built in? In that case, we could take them into account while planning everything else. And to make system design even easier, imagine we had pre-built interfaces to common external systems, packaged and ready to use, without the need to create, optimize and fix them manually!

What if the platform could provide us with tools for generating test data easily? We wouldn’t be set back by the need for 100 GB of related data that actually makes sense, or for some not-so-trivial mocks of SOAP/REST/queue interfaces with fancy formatters. Being able to create them quickly would be very nice not only for performance testing, but for testing in general.
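For scale: even a deliberately minimal hand-rolled REST mock – something like the Python sketch below, with an invented /rates endpoint and a canned payload – is already code to write, deploy and maintain, and real mocks quickly grow response templates, latency simulation and error injection on top of it.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockRatesHandler(BaseHTTPRequestHandler):
    """Answers like the real rates service would, but with canned data."""

    def do_GET(self):
        if self.path.startswith("/rates"):
            body = json.dumps({"EUR": 4.32, "USD": 3.98}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep test output quiet

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), MockRatesHandler).serve_forever()
```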

What if our design platform could identify the bottlenecks for us, all on its own?

Finally, imagine that we could do it all on a single low-code platform, without having to learn dedicated third-party integration tools! We could leverage everything we have already learned while building our web UI and services, and we could use the very same logic in all scenarios: batch, service, UI, rule engine, even document generation – practically everywhere!

Sounds impossible? Not for VSoft archITekt!

Breaking the stereotypes

Yes, the archITekt platform provides answers to all these “what ifs”. Operating in both batch and event-driven scenarios, its core engine also powers the web UI backend, workflow, services and more.

It works well on clusters, both on-prem and in the cloud, offering easy scalability given the right data design. It also provides extra tools and services for generating data, profiling, testing, mocking and so on. All these goodies are based on a single, flexible structure and mapping model, allowing us to make even the most complex changes to the data trees on the fly. All key areas are well optimized, measured and continuously analyzed at run-time.

Moreover, our platform “understands” the structure of the environment you are working in and offers help at every step. There is barely any need to type anything: even the largest data schemas are presented visually as hints, or “ghosts” as we call them. Most common mistakes are avoided with the help of background analysis.

Real case studies

I get that it sounds unbelievable, so let me give you some hard evidence. I would like to focus on two real-life examples for comparison.

Case 1 – DIY financial warehouse

Imagine a large financial institution with dozens of file formats transported to and from thousands of co-operating units. Some formats (and the business cases they describe) are decades old. Complex web CRUDs. More and more integrations with the outside world using almost every imaginable middleware. Four different database technologies. Six million business entities updated daily, nearly half a billion atomic records read.

Our competition’s approach was very traditional – an ETL solution to move data in and out, a Node.js app for the web, and Java services for integration.

Some core logic written three times over. Resistance to change requests, especially to integrating with other systems. High license fees and maintenance costs for the client.

Our approach was different – let’s use our low-code platform which has its roots in the ETL world and can “talk” to warehouse-like structures.

Results? Logic created once and then re-used, most CRUDs created with only a handful of drag & drops from the already known database structures.

The tightest loops and performance-critical areas came pre-built, while business-intensive areas were left for the client to extend on their own. Performance bottlenecks were identified long before any escalation, and sluggish execution plans were served up on a silver platter for fixing. We outperformed our competition by a factor of 2. Thanks to some clever data warehousing techniques, there was no need for any maintenance windows for the batch processes. TCO was lower by an order of magnitude. The low-code platform won in every aspect. The expressions on the faces of the client and, especially, of our competition’s salespeople – priceless 🙂

Case 2 – pandemic logistics

And now let me take you to a large and quickly evolving logistics company. A dozen million events to be joined, filtered, and aggregated live every hour, with an extra need to process both historical and hypothetical data. Every meaningful event to be processed exactly once, as it directly maps to real money being charged.

Our client had three main problems with their old, tailor-made solution:

  1. Customization and deployment of more and more new product definitions
  2. Performance
  3. Need for more and more integrations with current and future systems

Our approach was to use… our low-code platform! I’ve already described many of the advantages of this approach in the previous case, so let me add a few more. The ability to quickly build variants of the solution for historical and hypothetical data allowed us to discover some interesting statistics in the data that the client had been unaware of. This both carved a path to a new approach to data partitioning and allowed for substantial savings in I/O pressure on the database (databases, in fact, because three different kinds were used). We used our decision engine and integrated it directly into the system, without any additional network traffic.

In the end, we were able to process the traffic from a dozen production systems on a single laptop. Our client couldn’t believe it and therefore analyzed it multiple times. The solution gave correct results and was easily extensible and configurable via custom-made web editors. Once again, traditional knowledge (or intuition?) failed miserably when confronted with modern solutions and hardware.

All in all

Clever low-code platforms are not just a way to let non-programmers build simple systems. Some can also handle complex problems, large data volumes and heterogeneous environments, providing the customer with great performance and customization options.

Paweł Marchewka
Low-code platform architect at VSoft
Let’s connect on LinkedIn