I’ve been to a number of conferences recently for CTOs and high-level tech people. They are a lot of fun, and often full of both future tech and future thinking on tech, which leads to long discussions.
One recent conference I attended asked a lot of questions about the role of the CTO in the future. Given the increasingly complex technologies a CTO has to be aware of and utilise, there are real questions about the role and responsibilities of the CTO within the organisation.
There is a trend in modern businesses that says that if something goes wrong with a part of the business, then the most senior board member responsible for that part should resign or be sacked. I understand the point (“This happened on my watch”), but I am increasingly concerned that with tech we have reached a point where this simple attribution of responsibility may no longer hold.
Tech people love new technologies.
Tech people love trying things out.
Tech people don’t always think about the consequences of their actions either for the business or wider community.
And maybe we should.
There are numerous scenarios where a tech idea born in wartime has proved to be a valuable addition in peacetime.
Unfortunately, the same is true in the other direction.
And now that wars are more focussed on individuals and technology, we end up with a scenario where technology can have a huge impact on the world, usually as an unintended consequence of some clever people saying “wouldn’t it be great if…”.
Is the CTO ultimately responsible?
Given the massive rise in things like machine learning algorithms, data storage and analytics engines, as well as real-time communications platforms that we cannot interrogate due to encryption, the leadership of a tech organisation could well become a scary place to be.
What happens if “our tech” is used for unintended purposes and people die?
Whose responsibility is that?
What happens if a piece of our tech is used inside something we cannot envisage and for some reason it breaks, causing untold harm?
Is that our problem?
What if our algorithm generates a machine that has learned something important, and turning it off would cause harm because the data would be lost?
Are we killing something?
Who has the ultimate responsibility?
The CTO as Philosopher
I believe the role of the CTO is moving towards being part techie, part philosopher. Too many techies have spent too long thinking that this kind of thing is someone else’s problem.
I believe that the CTO community will have to adjust its approach and become a lot more savvy over the next few years while we work out exactly what responsibility we have.
It may be worth starting to study the history and philosophy of digital technology as an interesting subset of the history and philosophy of science.
This is going to be an interesting decade.