A Google AI has successfully created a more powerful AI


Google has been developing its own AI systems for quite some time, though human-made AIs undeniably take a lot of time and testing to improve, especially given the early state of both AI software and hardware.

One question that inevitably comes to mind with AI is simple: could an AI be created to develop a better AI in a faster timeframe? This is the problem Google Brain wanted to solve with its AutoML project, where a “parent AI” would create its own “child AI” to see how this new AI compares to its human-made counterparts. 
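The parent/child idea above can be sketched as a simple architecture-search loop: a "parent" proposes candidate "child" network designs, each candidate is scored, and the best design is kept. This is a heavily simplified illustration, not Google's actual AutoML method — real systems use a learned controller and train each candidate network, whereas the search space and scoring function below are made up for demonstration.

```python
import random

# Illustrative search space of "child" architecture choices (hypothetical).
SEARCH_SPACE = {
    "layers": [4, 8, 12],
    "filters": [32, 64, 128],
    "kernel": [3, 5, 7],
}

def propose_child():
    """Parent step: sample a candidate architecture from the search space."""
    return {key: random.choice(options) for key, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training the child network and measuring its accuracy.

    A real NAS system would train each candidate and report validation
    accuracy; this toy score just rewards certain choices for illustration.
    """
    return arch["layers"] * 2 + (10 if arch["filters"] == 64 else 0) - arch["kernel"]

def search(trials=50, seed=0):
    """Run the parent loop: propose, evaluate, keep the best child."""
    random.seed(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = propose_child()
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = search()
    print("best child architecture:", arch, "score:", score)
```

Google's actual approach replaces the random proposals with a reinforcement-learning controller that learns which architectural choices produce accurate children, which is why it can outperform random search.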

The “child AI” in this case is called NASNet (not quite SkyNet), an AI designed to detect and identify objects such as people, cars, kites, handbags and backpacks. In simple terms, the AI can recognise what is visible in a photograph or video, even in real time. 

When NASNet was tested on ImageNet’s validation set, Google’s AI-taught AI was found to be 82.7% accurate, 1.2% more accurate than any previously published result, while also operating more efficiently. A less demanding, mobile-oriented variant of NASNet was found to be 3.1% more accurate than similarly sized mobile AIs. 
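Accuracy figures like the 82.7% above are typically computed as "top-1 accuracy": the model's single best guess for each image is compared against the ground-truth label, and the fraction of matches is reported. A minimal sketch, with made-up labels and predictions for illustration:

```python
def top1_accuracy(predictions, labels):
    """Fraction of images where the model's top guess matches the true label."""
    correct = sum(1 for pred, label in zip(predictions, labels) if pred == label)
    return correct / len(labels)

# Hypothetical ground-truth labels and model guesses for five images.
labels      = ["car", "kite", "person", "backpack", "handbag"]
predictions = ["car", "kite", "person", "handbag", "handbag"]

print(top1_accuracy(predictions, labels))  # → 0.8 (4 of 5 correct)
```

On the real ImageNet validation set the same calculation runs over 50,000 images and 1,000 classes, which is what makes a 1.2% improvement a meaningful gap.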


(An example of an AI image recognition test image)

Google’s research showcases the benefits of using AIs to train other AIs, as more efficient and accurate algorithms will be much easier to apply to real-world applications. 

While the benefits of AI-assisted AI creation are clear, problems arise when flaws exist in parent AIs, allowing unwanted quirks and biases to be passed on to “child AIs”. This isn’t likely to cause a Terminator-style “SkyNet” scenario, but it does have the potential to create flawed AIs if incorrect information is passed on.

You can join the discussion on Google’s AI-trained NASNet AI on the OC3D Forums.  
