Managing AI Disruption: NASSCOM Annual Tech Conference 2017

Posted on Jan 09, 2018


NodeXperts was part of the National Annual Technology Conference 2017. The NATC is organized in New Delhi/NCR each year by NASSCOM, the apex association for the Indian IT Industry.

The conference aims to bring together Indian IT professionals from all over the country and drive the conversation forward on the latest technology trends. Like last year, the core theme for the conference this year was Disruption. That’s right, Disruption with a capital D! In an industry characterized by quickly evolving paradigms – Disruption is not only inevitable, but often the driving force behind tech revolution.



This time around, the conference talked about a very specific kind of Disruption – the one brought in by the advent of Artificial Intelligence (AI) into mainstream tech. AI was the buzzword of the day and most of the talks were centered around it.

We joined the conference looking to learn more about Artificial Intelligence and how the giants of the Indian tech industry are using it, and to network with industry peers.



The Conference venue was The Leela Ambience Hotel in Gurugram. We arrived there at 9 AM to find the lobby full of people conversing over tea, coffee and cookies. Pretty soon, it was time to start the day’s agenda and we were ushered into the hall where talks were to take place.

Over the past year, AI has been both hailed as a boon and criticized as a bane. While AI brings new insights out of the vast troves of information accumulated during the Big Data boom, there are scientific and ethical concerns about the manner and direction in which artificial intelligence is growing. Moreover, with jobs being lost to automation and Machine Learning, some engineers have been worried about becoming redundant. This sentiment was prevalent throughout the conference; few talks went by that did not allude in some way to this feeling among the people who power the tech industry. After the keynote speech, the fireside chat that kicked off the conversations focused especially on this.

The day began with a single track, which then split into two parallel tracks after a mid-morning tea break. Each parallel session pitted Artificial Intelligence against other streams, like Analytics, Software Architecture, DevOps and Security. Next came lunch, then some more parallel sessions, another tea break, and finally a merge back into a single track.

Let’s now take a look at some of the highlights and a few interesting talks of the day.



Manik Varma from Microsoft initiated the technical sessions of the day with his talk “The Extremes of Machine Learning”. He talked about his work on Extreme Classification, and showed an example of how his research could improve search mechanisms on e-commerce websites by providing contextual understanding of search terms.

Edge Machine Learning

Following that, he showcased his work on Microsoft’s EdgeML library and its ProtoNN and Bonsai algorithms. These technologies can be used to run ML engines on tiny microcontrollers with just 2 KB of RAM and 32 KB of Flash ROM! Models can be trained in the cloud while predictions are made locally on the device using the EdgeML library, consuming very few resources. These engines can be hosted on microcontrollers as small as a dimple on a golf ball, which is extremely beneficial in applications where size, processing power, memory and energy consumption are critically important, like medical pacemaker implants.

He also demonstrated the library in action by hooking up his walking cane to his phone, using physical gestures to have Cortana automatically read notifications off his phone’s screen!



Next, James Geraci from Samsung took the stage with “Experience the future of work with AI and Automation”. He talked about how to automatically garner intelligent information and value from data using Data Intelligence (DI). He also outlined some of the problems faced in DI, such as inconsistent data sets describing the same type of information, and how to sanitize and use them.

Collection of Data

One example use case of DI is making our appliances and machines more intelligent: in places with dynamic pricing of electricity, an intelligent washing machine would automatically know to switch itself on when prices are low, saving on electricity costs. DI could also be used to build autonomous cars that are aware of any obstruction or accident further down the road in real time, and would switch you to another route well in advance, so that you don’t get stuck.
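The price-aware washing machine idea can be sketched in a few lines of code. This is a hypothetical illustration, not anything shown at the talk: the hourly prices are made-up data, and the function name `cheapest_start` and the 3-hour cycle length are our own assumptions.

```python
# Hypothetical sketch of a price-aware appliance scheduler.
# Given a forecast of hourly electricity prices, pick the start hour
# whose run window costs the least in total.

def cheapest_start(prices, duration):
    """Return the start hour whose `duration`-hour window has the lowest total price."""
    return min(
        range(len(prices) - duration + 1),
        key=lambda h: sum(prices[h:h + duration]),
    )

# Made-up prices (currency units per kWh) for hours 0-7 of the day.
hourly_prices = [8, 7, 3, 2, 2, 4, 6, 9]
start = cheapest_start(hourly_prices, duration=3)
print(f"Run the 3-hour wash cycle starting at hour {start}")  # hours 2-4 are cheapest
```

A real appliance would of course work from a live price feed rather than a fixed list, but the scheduling decision itself reduces to this kind of windowed minimum.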


Another important consideration in DI is how much information can be stored in a unit of memory. On this, he advocated using qubits instead of bits to store large-scale data, arguing that they are exponentially more efficient in terms of the space, energy and time required for storage.

He also imparted some mindful advice: don’t always rely on automation, and apply your own critical thinking from time to time. He narrated an incident about his colleagues in Seoul, who had been using ML and DI for seven months to garner insights from data gathered about loss of refrigerator coolant. However, they had picked the wrong base factors (what actually causes the loss of coolant), so their DI results did not make sense. They only realized their mistake when they stopped the automated process and sat down to rethink, and found their base factors had been wrong all along!



In the post-lunch session, “The Future of Artificial Intelligence”, Ramana Jampala from Avlino Inc gave very keen insights on the science of Artificial Intelligence and what this term actually means.

He explained that the industry’s adoption of AI practices is driven by two basic factors: reducing costs and increasing the revenue base. Amid the furore of this trend, a lot of things are being misconceived and passed off as Artificial Intelligence when they are not actually AI.

He stressed that AI (as it should be) is the science that focuses on the transcendence of computational capacity over manually generated hardware or hardwired logic. This means that AI is not just a matter of spinning up an API and having a server send responses computed from pre-programmed rules. To be true AI, the logic itself should grow, learn and create new rules for itself.

The AI Trifecta

In order for a business to successfully navigate the waters of the AI revolution, he advised targeting what he terms the AI Trifecta:

– Software Engineering
– Qualitative Analysis
– Domain Knowledge

Taken together, these three factors help in successful AI product development. This is why, he elaborated, the businesses most likely to create a successful AI product are:

– Larger corporations: as they have broad partnerships.
– Smaller but focused companies: as they have founders with deep domain knowledge and are targeting a specific industry.



NASSCOM had also conducted a Hackathon the previous day, focused on creating AI-based solutions to problems India faces today. The top winners were a group from Fidelity International that had created an AI-based solution to calculate GST for traders. NASSCOM also awarded a special prize to a group of engineering students from IIT Gwalior who had created a fake news detection system, despite having no professional experience!



Technology is hardly just about software. In line with this thought, the NATC organizers gave a special welcome to Team Indus from Bengaluru. Theirs is the domain of private-sector space exploration. Recently, several private companies have begun their foray into this realm, SpaceX and Blue Origin being two of the most well-known. Come 2018, Team Indus will also count itself among them!

They are a private Indian aerospace startup on a mission to land a rover on the Moon, and one of the finalists in Google’s Lunar XPRIZE competition. Sheelika Ravishankar from Team Indus gave a very spirited and exciting presentation on their mission and the origins of ECA, the robot that will mark India’s arrival on the surface of the Moon.



The conference showcased various talented professionals and researchers from premier organizations, so it was rather interesting to see the ideation process behind the problems they are working on, and the solutions they are trying to effect.

In terms of composition, most of the talks stayed at a rather high level technically. It would have been nice to hear more talks that went deeper and explained the intricacies of AI at the code level. It would also have been good to see more diversity in the panelist and speaker lineup.

NATC came across as a good place to network with people in the Indian IT industry. We took the opportunity and connected with various people as well. However, for a programmer who is looking to get their hands dirty in code and learn new concepts in technology, other developer-focused conferences and meetups would be better suited.

One of the main takeaways from the conference was that when it comes to AI, we should not simply get caught up in the flow of what is happening, but rather take some time to think and understand what we want to do. That way we can create something meaningful with it, since AI seems to have set the course of technology for the times to come.