I come from an engineering background; for me, computers were, for a long time, calculators – a tool to perform either complex calculations or computations on large data sets. Otherwise a handheld calculator – standard in my university years – would have been more than enough. Computers were expensive, data entry was a pain, and output was always on paper.
For my graduation thesis I had access to a small beauty – an 8-bit microcontroller (this was 30+ years ago) – that allowed us to implement simple control algorithms for DC electric motors. We collected data in real time, calculated the optimal parameters, and adjusted the speed as we wanted, regardless of external conditions. For me, data collection and storage was a supporting function – the main advantage of the computer was the ability to execute an algorithm in milliseconds and make the correct decision.
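To make this concrete, here is a minimal sketch of the kind of closed-loop speed control described above – a simple PID loop that measures, decides, and actuates on every cycle. It is written in Python purely for readability (the original ran on an 8-bit microcontroller), and the read_speed/set_pwm hooks are hypothetical stand-ins for the hardware interface, not from the original project.

```python
import time

# Hypothetical hardware hooks -- stand-ins for the tachometer read and
# PWM output on the microcontroller; the names are illustrative only.
def read_speed() -> float:
    """Return the current motor speed in RPM (stub for the real sensor)."""
    return 0.0

def set_pwm(duty: float) -> None:
    """Drive the motor with a PWM duty cycle in [0.0, 1.0] (stub)."""

def speed_control_loop(setpoint: float, kp: float, ki: float, kd: float,
                       dt: float = 0.01, steps: int = 1000) -> None:
    """Classic PID loop: measure the speed, compute a correction, actuate --
    collect data in real time and make the decision in milliseconds."""
    integral = 0.0
    prev_error = setpoint - read_speed()
    for _ in range(steps):
        error = setpoint - read_speed()
        integral += error * dt
        derivative = (error - prev_error) / dt
        output = kp * error + ki * integral + kd * derivative
        set_pwm(max(0.0, min(1.0, output)))  # clamp to a valid duty cycle
        prev_error = error
        time.sleep(dt)

# Example: hold the motor at 1500 RPM with purely illustrative gains.
# speed_control_loop(setpoint=1500.0, kp=0.001, ki=0.0005, kd=0.0001)
```

The point of the loop is exactly the one made above: the value is not in storing the measurements but in acting on them within milliseconds.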
Fast-forward 10 years to my work in a large bank – what I was doing was now called “information technology”, a step up from the earlier “data processing”. The whole focus was on collecting data and reporting. Large multi-million-dollar projects focused on creating risk management reports, general ledger reports, and so on – what I called “wallpaper projects”. My typical engineer’s question was: “now that you have all those reports, what are you going to do with them?”
Fast-forward another 20 years, and we now have the computing power and software technology to run complex, data-intensive algorithms in less than a second – and we can slowly see a sea change in the way banks look at the data they collect and the algorithms that can be applied to gain a deep understanding of that data.
Technologies like Hadoop and its offspring made the storage and processing of ultra-large data sets possible. Banks claimed that they had always been “working with big data”, since the amount of data collected was already huge by 20th-century standards. I remember arguing that the bank needed to keep more than 3 months of transaction data – which seemed, at the time, a waste of “expensive storage”, since transactions were kept only for fraud and error investigation. Once a lot more data is stored, algorithms can be deployed to make sense of it, going beyond traditional “reporting”. What was once called “artificial intelligence” became data mining, pattern extraction, smart classification, machine learning – even “old”, “discarded” algorithms like neural networks were renamed “deep learning” and put to good use.
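As a toy illustration of what “making sense of data” beyond reporting can mean, here is a minimal sketch of pattern extraction on retained transaction history: flagging a transaction that deviates sharply from a customer’s spending profile. The z-score test, the data layout, and the threshold are my own illustrative assumptions, not a description of any bank’s system – and note that it only works if more than a few months of history is kept.

```python
from statistics import mean, stdev

def flag_anomaly(history: list[float], new_amount: float,
                 threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount deviates from the customer's
    history by more than `threshold` standard deviations (a z-score test).
    A real system would profile per merchant category, time of day, etc."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

# Example: a long history makes the profile meaningful -- the argument
# for keeping more than three months of transaction data.
history = [42.0, 38.5, 55.0, 40.0, 61.0, 45.0]
print(flag_anomaly(history, 900.0))  # True: far outside the profile
```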
Even more, the ever-increasing computing power of smartphones has allowed more and more powerful algorithms to move next to the user, in a decentralized fashion.
When I think of successful modern companies, many are “algorithm companies”: the core value they bring to the economic ecosystem is a collection of smart algorithms, coupled with extensive data collection and real-time, actionable results.
Google, Uber, Facebook, LinkedIn, Amazon, Expedia, Waze – all are, at the core, “algorithm companies”. This is the real disruption @pmarca was pointing to when he said that “software is eating everything”, and that @amcafee calls the “second machine age”. As I have said many times, this is only the beginning of the impact of new technologies on everything we know and use today.
What I have described above is the context for the growth of the startup movement misnamed “fintech”. Banks and insurance companies were very early adopters of computers, so fintech is what they were doing all along – a good claim to fame, the opposite of the bogus “we were doing big data all along”. But like most early adopters, financial services organizations face the hurdle of the installed base – existing processes that today still run on yesterday’s leading-edge software and hardware. And things change so quickly that banks hit the biggest obstacle of all: the inertia of the mindset. That is why disruptive fintech startups have more than a fighting chance – their mindset is to use the latest technologies and algorithms to challenge the incumbents. Banks still have a big moat called regulation and a much smaller one called trust. But you can already see how they will lose the algorithm battle.
It seems obvious to do algorithmic lending – and no bank does it, only startups. It seems obvious to do deep learning on transactional data – and very few banks do it today, but many startups do. It seems obvious to do pattern extraction on IT logs to detect internal fraud – and only startups do it today. It seems obvious to correlate physical customer presence with physical payment data – and only startups do it today. It seems obvious to perform real-time spending analysis on transaction data – and only startups do it well today. It seems obvious to use blockchain technology to enable “no trusted 3rd party needed” transactions – and only startups do it today.
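To ground one item on that list, here is a minimal sketch of real-time spending analysis: per-customer, per-category running totals updated as each transaction streams in. The Transaction fields and category labels are hypothetical, chosen only to illustrate the idea, not a description of any specific startup’s product.

```python
from collections import defaultdict
from typing import Iterator, NamedTuple

class Transaction(NamedTuple):
    customer_id: str
    category: str   # e.g. "groceries", "travel" -- illustrative labels
    amount: float

def spending_totals(stream: Iterator[Transaction]):
    """Update per-customer, per-category totals as each transaction
    arrives, yielding the customer's running profile in real time."""
    totals = defaultdict(lambda: defaultdict(float))
    for tx in stream:
        totals[tx.customer_id][tx.category] += tx.amount
        yield tx.customer_id, dict(totals[tx.customer_id])

# Example: each event immediately reflects the updated profile,
# instead of waiting for an end-of-month report.
events = [Transaction("c1", "groceries", 54.20),
          Transaction("c1", "travel", 310.00),
          Transaction("c1", "groceries", 12.80)]
for customer, profile in spending_totals(events):
    print(customer, profile)
```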
The new “algorithm-enabling” technology is here today, and banks could use it to fundamentally change the value proposition for their customers. The slower they move, the better the chance for fintech startups to establish a beachhead.
Banks can choose their own future. Exciting times ahead…