10 mins with dividiti

Interview with Anton Lokhmotov, CEO of dividiti

1. In a nutshell, can you describe what dividiti is?

dividiti is a Cambridge-based startup on an exciting mission to enable efficient, reliable and cheap computing everywhere. And we do mean everywhere: from tiny computers embedded in “things” (IoT) to compact personal computers to enormous supercomputers, as increasing the efficiency and decreasing the cost of computing is critical to innovation and wellbeing.

2. What is your startup story i.e. how was the company born?

dividiti grew out of a collaboration between myself and Grigori Fursin. When we first met in 2007, I was finishing my PhD on compiler optimisation at the University of Cambridge; Grigori was already well known for his pioneering work on applying machine learning techniques to compilation, which stemmed from his PhD at the University of Edinburgh and continued in the EU-funded MILEPOST project with IBM and ARC.

Fast forward to 2012: I was leading a 10-person team working on the ARM Mali GPU Compute compilers (OpenCL, RenderScript), constantly on the lookout for disruptive approaches to compilation and programming. Grigori had just finished a stint as the head of the program optimisation group at the Intel Exascale Lab and returned to his senior tenured position at INRIA in France. Through the lens of his unique R&D experience, Grigori realised that the many outstanding problems in computer systems — such as the ever-growing space of design and optimisation choices, the lack of representative workloads, the lack of common methodology and tools, and so on — can only be practically solved using a community-driven, collaborative approach.

It is only very recently that reproducibility and rigour in computer systems R&D have started receiving the attention they deserve. For example, in his role as a researcher, Grigori was strongly advised to focus on publishing novel ideas, and actively discouraged from “wasting” any time on implementing robust software infrastructure. He persevered nevertheless, and implemented Collective Knowledge (CK) — an open framework, repository and methodology for reproducible and collaborative R&D (http://cknowledge.org). CK allows the community to share representative programs, data sets, tools and predictive models as reusable components with a unified API, to crowdsource and reproduce experiments across diverse hardware platforms provided by volunteers, and to apply predictive analytics to optimise computer systems. This may not sound like a big deal, but in our view it is absolutely crucial for “seeing further by standing on the shoulders of giants”, that is, building upon past research and making actual progress.
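To give a flavour of what “reusable components with a unified API” means in practice, here is a minimal Python sketch using CK’s kernel entry point (ck.kernel.access, as documented in the CK project); the specific action and component names are illustrative assumptions rather than a prescribed workflow.

    # Minimal sketch of querying shared CK components through the unified API.
    # Assumes the "ck" Python package is installed; component names below are
    # illustrative, not a recommended setup.
    import ck.kernel as ck

    # Every CK operation goes through one entry point: access() takes a dictionary
    # describing the action and returns a dictionary with a 'return' status code.
    r = ck.access({'action': 'search', 'module_uoa': 'program'})
    if r['return'] > 0:
        ck.err(r)  # standard CK convention: print the error and exit

    # Print the names of the shared program components found in local repositories.
    for item in r['lst']:
        print(item['data_uoa'])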

The wide-reaching potential of CK to spur the design of next generation high performance and energy efficient computer systems encouraged us to leave permanent employment and start dividiti together in early 2015.

3. In terms of funding – can you tell us how you sought investment?

The initial development of CK was supported by a €50K grant from the EU FP7 609491 TETRACOM Coordination Action (http://tetracom.eu). This EU funding, with minimal bureaucracy and legal constraints, helped us immediately validate CK in a pilot project with ARM. Using CK, ARM was able to obtain valuable insights into performance of its products in a fraction of the time required by conventional analysis. This engagement confirmed that we were on the right track.

We did not seek any further investment, as we decided to grow organically depending on demand for our services. We can say that our funding primarily comes from visionaries at leading tech companies, who see that in, say, 5 years’ time there won’t be any other way to design computer systems than by leveraging community effort. To cross the chasm and reach the majority, we are focussing on building a strong community and reference customer base.

4.  What prompted the need for dividiti? What markets are expected to benefit the most from it?

The need for optimising computer systems is becoming even more pronounced as novel, compute-intensive algorithms need to be developed and deployed.

For example, the automotive industry is racing towards providing autonomous driving capabilities by a self-imposed deadline of 2020. Researchers in machine vision and machine learning are being lured in their hundreds to join well-funded R&D labs. Most of them, however, only have experience of developing algorithms that work on powerful workstations or in the cloud. But eventually their algorithms will need to be deployed inside cars, on resource- and power-constrained embedded systems with no guaranteed connectivity to offload computations to the cloud. Quite clearly, taking a system that consumes a couple of hundred watts of power and costs a couple of thousand dollars and putting it inside a car is not going to be viable. Based on our experience with CK-driven optimisation, we conjecture that the same requirements for speed and safety of processing can be met with a system that consumes perhaps under ten watts of power and costs perhaps under a hundred dollars. Now, how can that not be an attractive value proposition?

But didn’t we say that our aim is to optimise computing everywhere? So while the automotive and robotics markets look very attractive, we are also working on optimising high-performance computing and deep learning, as well as on our favourite topic of making compilers smarter.

5. What role do you play within the IoT sector and can you offer any insight into this growing trend?

Our CK technology can also be used to optimise solutions for IoT, making them cheaper, faster, smaller, more energy efficient and more reliable.

Talking of the need for smarter compilers: most software developers mistakenly believe that to get the best possible code out of their compiler they simply need to specify the “best optimisation” flag (e.g. “gcc -O3”). Unfortunately, compilers are very complex pieces of software, and compiler developers typically “tune” optimisations on only a small set of programs (benchmarks). Consequently, it is often possible to find a combination of compiler flags that makes the compiler generate code that is several times faster and considerably smaller than with the “best optimisation” flag.

Our CK technology can automatically and continuously search for better flag combinations (across different compiler versions, hardware variants, etc.), and thus optimise software in the most economical of ways, that is, without changing a single line of code! This is particularly useful when the software is being actively developed, and may dramatically cut the cost and time-to-market for new products.
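To illustrate the basic idea behind such a search (a toy sketch, not dividiti’s CK implementation: the candidate flags, benchmark file and single timed run are assumptions for illustration), one could randomly sample flag combinations on top of “-O3” and keep whichever builds the fastest and smallest binary:

    # Toy random search over GCC flag combinations (illustrative only; a real
    # autotuner such as CK repeats runs, handles measurement noise and searches
    # a far larger space across many machines).
    import os
    import random
    import subprocess
    import time

    FLAGS = ['-funroll-loops', '-ftree-vectorize', '-fomit-frame-pointer',
             '-ffast-math', '-flto']           # assumed candidate flags
    SOURCE, BINARY = 'benchmark.c', './a.out'  # hypothetical benchmark to tune

    def evaluate(flags):
        """Compile SOURCE with the given flags, run it once, and return
        (runtime in seconds, binary size in bytes)."""
        subprocess.run(['gcc', '-O3', *flags, SOURCE, '-o', BINARY], check=True)
        start = time.time()
        subprocess.run([BINARY], check=True)
        return (time.time() - start, os.path.getsize(BINARY))

    best_score, best_flags = evaluate([]), []   # baseline: plain -O3
    for _ in range(20):                         # a handful of random samples
        trial = random.sample(FLAGS, k=random.randint(1, len(FLAGS)))
        score = evaluate(trial)
        if score < best_score:                  # compare runtime first, then size
            best_score, best_flags = score, trial

    print('best flags:', best_flags, 'runtime/size:', best_score)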

But where it gets really interesting is when you consider optimisation at runtime — depending on the system state, input data, resource availability, and so on. “Total optimisation” based on aggregating results from many millions of IoT devices in a public repository (e.g. http://cknowledge.org/repo) and using automatic optimisation tools is what might ultimately embody our vision of efficient, reliable and cheap computing everywhere.

6. What might a typical day be like for you?

Ah, the beauty of it all is that there’s no such thing (as a typical day). In the morning, we might hold a conference call briefing a customer on our progress, and then collaborate on optimising an open-source library by running large-scale experiments across a dozen machines (using Collective Knowledge, of course). In the afternoon, we might apply leading-edge predictive analytics techniques to extract insights from the experimental data, and then take some time to read about recent developments and think strategically about how to help advance our long-term mission. Of course, we do have our own crunch moments and deadlines, but they are often self-imposed — to keep stretching ourselves to achieve more and enjoy startup life better.

7.  Can you offer any insight into the Cambridge incubator scene i.e. your connections with the University?

I am privileged to be an alumnus of the Computer Laboratory, with its vast network and amazing successes (which most recently included DeepMind, Raspberry Pi, SwiftKey and Unikernel Systems). My office at ideaSpace West is only two minutes away from the Lab, so it’s wonderful to be able to stay in touch and even plan some future research together.

As a PhD student 10 years ago, I remember attending numerous events on entrepreneurship. It was then that I first thought about joining a startup on graduation or starting my own company. So I am very grateful to the University for instilling this confidence in me, while also giving me enough wisdom not to start too soon.

8. Can you offer any advice for entrepreneurs looking to set up a company?

Don’t rush to start a company — do gain some experience first working for an established and well-run company. Learn to commit, learn to deliver. Learn to dream, learn to inspire. Never stop learning.

Also, while revenue can be a good indicator of commercial success, it can also be misleading. Looking back over the time since we started, we have definitely made some mistakes, but we have taken some right steps too. Waiting for revenue has been nerve-racking at times (and might still be for some time!), but there’s nothing more satisfying than changing the world in a way you care about.

9. What have you got planned for the company over the next few years?

We only started a year ago, but roughly we are hoping to double the headcount and quadruple the revenue every year. In addition to providing services, we will be developing licensable IP and building platforms for growing niches like deep learning and IoT. (Job enquiries are always welcome!) Importantly, we estimate that we will help our clients dramatically optimise their products and processes, and thus save them millions within 1-2 years and tens to hundreds of millions within 5 years.

On another front, we are crusading for reproducible and collaborative R&D in computer systems. For example, over the past two years Grigori has initiated and co-chaired artifact evaluation for PPoPP and CGO, the leading ACM conferences on parallel programming and compilers, and is one of the few Europeans invited to join the ACM Task Force on Reproducibility. We are offering our “crown jewels”, the Collective Knowledge framework, under a permissive license, to ensure its wide adoption and impact in academia as well as in industry.
Quote from the Blues Brothers

We feel a bit like… the Blues Brothers from the 1980 movie. They were getting their band together to save their orphanage with infectious belief: “We’re on a mission from God” (hence can’t be stopped). We are getting a “band” together too: all across the largely divided hardware/software and industry/academia communities. We envision that effective knowledge sharing and open innovation will enable new exciting applications in consumer electronics, robotics, automotive and healthcare — at better quality, lower cost and faster time-to-market.

Author notes

Anton Lokhmotov - Founder of dividiti

Anton Lokhmotov has been working in the area of programming languages and tools for 15 years, both as a researcher and engineer, primarily focusing on efficiency, portability and productivity of programming techniques for heterogeneous computer systems. Prior to co-founding dividiti in 2015, for 5 years Anton led development of GPU Compute programming technologies for the ARM Mali GPUs, including production (OpenCL, RenderScript) and research (EU-funded project “CARP”) compilers. He was actively involved in championing technology transfer, engaging with partners and customers, and contributing to open-source projects and standardisation efforts.

In 2008-2009, he worked as a post-doctoral research associate at Imperial College London. Anton obtained a PhD in Computer Science from the University of Cambridge in 2008, and an MSc in Applied Mathematics and Physics from the Moscow Institute of Physics and Technology in 2004.