In a previous post about AI-supported development IDEs we explored the idea of assistive AIs in the development process, an idea we want to take a bit further today. One of the options raised there was to guide a 4GL IDE, which is much more fruitful ground for applying the theory.

My background is in AI and Robotics, of which designing languages and modelling systems is a core part.

Looking at what Workflow does for Apple apps with micro and macro functions, the same principle should apply to IDE actions. The first step is executing micro functionality, allowing developers to bundle those actions into a more sophisticated chain of events that can be triggered via voice commands, and making the result shareable. Some IDEs have great CRUD (Create, Read, Update, Delete) functionality, but why stop at those specific actions when there are so many hidden gems that could be crafted into a beautiful necklace … or a workflow?
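To make the idea concrete, here is a minimal sketch of what bundling micro IDE actions into a named, shareable workflow could look like. The action names (`create_entity`, `generate_crud`) and the context dictionary are invented for illustration; a real IDE would expose its own command set.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Workflow:
    """A named chain of micro-actions that can be stored and shared."""
    name: str
    steps: List[Callable[[Dict], Dict]] = field(default_factory=list)

    def add(self, step: Callable[[Dict], Dict]) -> "Workflow":
        self.steps.append(step)
        return self  # allow fluent chaining

    def run(self, context: Dict) -> Dict:
        # Execute each micro-action in order, threading a shared context through.
        for step in self.steps:
            context = step(context)
        return context


# Hypothetical micro-actions, stand-ins for real IDE commands.
def create_entity(ctx: Dict) -> Dict:
    ctx.setdefault("entities", []).append(ctx["entity_name"])
    return ctx


def generate_crud(ctx: Dict) -> Dict:
    # Derive one endpoint per CRUD operation for each entity.
    ctx["endpoints"] = [f"{op}_{e}" for e in ctx["entities"]
                        for op in ("create", "read", "update", "delete")]
    return ctx


wf = Workflow("scaffold-entity").add(create_entity).add(generate_crud)
result = wf.run({"entity_name": "customer"})
print(result["endpoints"])
# → ['create_customer', 'read_customer', 'update_customer', 'delete_customer']
```

The point of the `Workflow` object is that it is data, not code baked into the IDE: it can be serialised, shared with other developers, and bound to a voice command.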
The next step is to describe those blocks in a way that, first of all, captures their functionality, which makes them accessible to humans via regular search engines. But what you are really doing is creating a higher-level language – words, sentences, concepts. Teaching an assistive system what a more sophisticated building block is about will allow you to interact with that system – whether via traditional search or via NLP, actually talking to an AI or machine-learned system. Remember, systems learn via data points, training and feedback. If this system is not embedded into your local offline IDE – which is a bit prehistoric anyway in our always-online society – then a multitude of trainers (developers) will add knowledge to the system and new event chains, and they will provide feedback and evaluate actions.

The AI of this system in phase one is not very sophisticated, yet it already has a lot of potential; what is being created here is more of a specialised social and content network around a specific development framework. But it provides a lot of data, actions and event chains that can then be analysed and utilised for further training. In phase one this creates a language, action sets and a community that adds value to each other's work, connected through this bot-type knowledge network. It will enable less development effort per function, thanks to the high degree of interaction and a gradually increasing pool of reusable components. The beauty is that the cloud (brain) will hold this trained knowledge and will soon be able to start making suggestions – based on what you are doing and the way you name your variables and data structures, it will ask you what you are trying to achieve, or be even more specific: Are you creating a shopping cart? I have shopping carts – what do you need it to do? Look at this. AI-supported IntelliSense at a more abstract level than simple autocomplete.
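The shopping-cart moment above can be sketched as a simple name-based trigger. The trigger words and component names here are invented; the trained cloud system described in the post would learn these associations from community data rather than from a hand-written table:

```python
from typing import List

# Illustrative mapping from identifier fragments to reusable components.
SUGGESTIONS = {
    ("cart", "basket", "checkout"): "shopping-cart component",
    ("login", "signin", "auth"): "authentication flow",
}


def suggest(identifiers: List[str]) -> List[str]:
    """Propose reusable components based on how the developer names things."""
    hits = set()
    for name in identifiers:
        for triggers, component in SUGGESTIONS.items():
            if any(t in name.lower() for t in triggers):
                hits.add(component)
    return sorted(hits)


print(suggest(["cartItems", "checkout_total"]))  # → ['shopping-cart component']
```

Crude as it is, this is the seed of "Are you creating a shopping cart?": the assistant notices a naming pattern and offers an existing, higher-level building block instead of a word completion.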

It won’t be long until this can be turned into: Alexa, build me an app that integrates with Facebook for authentication. Base color corporate green. Which corporation? Jade Eli. The credentials are core corporate… This won’t make developers superfluous for a long time – but it will dramatically increase their efficiency, which goes way beyond modelling as a form of abstraction. When you create a model, you create the basis for a new language and an opportunity for further automation. This is one more step. Stay tuned for more exploration and follow us on LinkedIn, Twitter or Facebook.