AI: not so much the what, but the why/how?

Oct. 22, 2018

Earlier this week, I attended the MCubed conference in London – an event designed to bring together experts in artificial intelligence, machine learning and data science – to speak about deploying AI and deep learning at the edge of the network in defense. In contrast to GTC last week, this conference was full of data scientists and policy makers trying to get to grips with the ethical implications of AI in a societal context. There was also a lot of new technology being proposed to address some of these challenges.

If a network can be trained to recognize a stimulus, how do we build confidence for the end user by explaining how the network learned and came to the correct conclusion? As an example: if we train a network to classify dogs and huskies, and all the huskies in our training data are photographed in a snowy environment, how do we know the network is not actually classifying the snow in the background? Methods for testing these ‘black box’ networks are needed to build assurance and to explain how classifications are made.
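Explainability techniques such as LIME (Local Interpretable Model-agnostic Explanations) tackle exactly this question by perturbing an input and fitting a simple local model to see which regions drove the prediction – a husky-versus-wolf variant of this example appears in the original LIME paper. The sketch below uses the open-source lime Python package; the model, image and predict_fn names are hypothetical stand-ins for your own classifier, not anything presented at the conference.

    # Minimal sketch: explaining one image classification with LIME.
    # `model` and `image` are hypothetical: a trained classifier that
    # returns class probabilities, and an HxWx3 uint8 numpy array.
    import numpy as np
    from lime import lime_image
    from skimage.segmentation import mark_boundaries

    def predict_fn(images):
        # Wrap the black-box model: batch of images in,
        # (N, n_classes) probabilities out.
        return model.predict(np.asarray(images))

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image,              # the husky photo we want explained
        predict_fn,         # black-box prediction function
        top_labels=2,       # explain the two most likely classes
        num_samples=1000)   # perturbations used to fit the local model

    # Show the superpixels that most supported the top prediction.
    img, mask = explanation.get_image_and_mask(
        explanation.top_labels[0],
        positive_only=True, num_features=5, hide_rest=False)
    overlay = mark_boundaries(img / 255.0, mask)

If the highlighted superpixels turn out to be the snowy background rather than the dog, you have your explanation – and a data collection problem to fix.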

Explaining how a deep learning network arrives at a decision is notoriously difficult and, as humans, the training data we curate can be riddled with biases we may not be aware of – biases that networks will happily latch onto. With AI affecting all aspects of society, do we need a framework to ensure rigorous processes are in place to build confidence and, ultimately, trust?

The drive towards fully autonomous vehicles is perhaps a good example of how regulation can help to deliver more trustworthy platforms. When incidents (accidents and corner cases) do occur, the reasons why they occurred are identified very quickly and, hopefully, resolved. Do we need more regulation in data science, medicine, smart cities and so on? Increasing rigor and deepening our understanding of how our AI makes the decisions it makes is becoming ever more critical in applications that affect our daily lives.

Life and death

AI is increasingly being used to make life and death decisions: think medical prognosis, transportation, aviation, smart cities… the list goes on. We therefore need to better understand how our models function, and apply rigorous development processes that control not only our source code, but also our models and annotated datasets. If we have complete traceability when things break, we can explain the failure, address the issue and improve.
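One lightweight way to get that traceability is to emit, with every release, a manifest that ties the exact source revision to cryptographic hashes of the model and dataset artifacts. The sketch below is purely illustrative; the file paths and manifest fields are assumptions, not a prescribed format.

    # Illustrative sketch: a release manifest linking code, model and data.
    # All file paths and field names here are hypothetical examples.
    import hashlib
    import json
    import subprocess
    from datetime import datetime, timezone

    def sha256_of(path):
        # Stream the file so large models/datasets hash without
        # being loaded into memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    manifest = {
        "created": datetime.now(timezone.utc).isoformat(),
        # Git revision of the training and inference source code.
        "code_revision": subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip(),
        "model": {"path": "models/classifier.onnx",
                  "sha256": sha256_of("models/classifier.onnx")},
        "dataset": {"path": "data/annotations.tar.gz",
                    "sha256": sha256_of("data/annotations.tar.gz")},
    }

    with open("release_manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)

When a failure is reported in the field, the manifest tells you exactly which model and which annotated data produced it, so the explanation starts from facts rather than guesswork.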

If you’re working in AI and the software industry, the pace of development is fast - so how do you build good process and practice when accelerating your code with AI? Continuous integration and delivery of well-tested code is certainly a challenge. Automated testing, AI validation and end-to-end authentication need to be rigorous - and it was great at MCubed to see how companies are stepping up to answer these questions. My eyes were opened, and it was good to see that new tools are being developed to help DevOps teams deliver applications and services at high velocity.
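As a concrete illustration of what ‘well-tested’ can mean for a model, a CI pipeline can gate deployment on a fixed, version-controlled evaluation set and fail the build on any regression. The pytest sketch below is an assumption-laden example: the predictions file, its format and the accuracy floor are all hypothetical.

    # Hypothetical CI gate (pytest): fail the build if the candidate
    # model regresses on the held-out evaluation set. An earlier
    # pipeline stage is assumed to have run the model and written
    # {"predicted": ..., "expected": ...} records to JSON.
    import json

    ACCURACY_FLOOR = 0.95  # assumed threshold, tuned per application

    def test_model_meets_accuracy_floor():
        with open("artifacts/eval_predictions.json") as f:
            records = json.load(f)
        correct = sum(r["predicted"] == r["expected"] for r in records)
        accuracy = correct / len(records)
        assert accuracy >= ACCURACY_FLOOR, (
            "model accuracy %.3f fell below floor %.2f"
            % (accuracy, ACCURACY_FLOOR))

Wired into continuous integration, a gate like this turns ‘the model still works’ from a hope into a checked precondition for every deployment.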

MCubed is a great conference for data scientists and DevOps practitioners, and I now have a ton of stuff I need to go out and investigate further - new tools and new practices that will ultimately contribute to building better systems we can all be confident in.

About the Author

Ross Newman | Field Application Engineer

With a degree in software engineering, Ross is a field applications engineer, based in our Towcester office and supporting Abaco customers throughout EMEA. He has worked extensively in the defense industry with companies including BAE Systems and Lockheed Martin. Ross enjoys travel and robotics, and for the last three years has taught coding to young children at a local school as part of a national network of Code Clubs (codeclub.org.uk).
