As countries across the globe work on regulatory frameworks for artificial intelligence (AI), tech major Google on Thursday pitched for a risk-based approach to such regulation. This means that instead of imposing uniform rules on all AI applications, regulations should take into account the level of risk posed by different uses of AI.
“We are particularly in favour of AI regulations that are sort of risk-based, instead of a one-size-fits-all approach. The regulation should be in proportion to the risk different AI models pose. It should be at the level of the application rather than at the level of the technology,” Pandu Nayak, vice president of Search at Google, told FE on the sidelines of the Global Partnership on Artificial Intelligence (GPAI) Summit.
“You look at the applications and see what are the actual risks. ‘One size fits all’ kind of approach doesn’t make sense. For example, in agriculture, risks from using AI are very different from what you might find in other areas,” Nayak added.
The application layer of generative AI refers to the stage where the technology is deployed for specific use cases, while the core technology layer of generative AI platforms is where the large language models reside.
When asked how Google is tackling bias in its search results as well as in its generative AI platform Bard, Nayak said, “there’s a lot of research going on in terms of what bias means and how you address it. Fundamentally, it can be addressed by training the models on good datasets. We are partly mitigating that by making sure that we don’t train on unsafe content”.
Recently, Google’s generative AI platform Bard made the rounds on social media after a user flagged a screenshot in which Bard refused to summarise an article by a right-wing online media outlet on the grounds that it spreads false information and is biased. Following the incident, the government came up with an advisory stating that any instances of bias in content generated through algorithms, search engines or AI models of platforms like Google Bard, ChatGPT and others will not be entitled to protection under the safe harbour clause of Section 79 of the Information Technology Act.
“I think, fundamentally, you have to ask yourself, what kind of bias are you concerned about? There are already laws in place that say certain types of biases are not allowed. So that is why we are pushing for a risk-based approach, proportionate to a particular use case,” Nayak said.
Another area Nayak said would help mitigate bias in generative AI platforms is trusted cross-border data flows.
“With the cross-border data flow, the training data that can be used will represent a diverse demographic and diverse set of people, and that is useful to address bias,” Nayak said.
On Wednesday, minister of state for electronics and IT Rajeev Chandrasekhar told FE that the government will share the public data available with it only with firms that have a proven track record and can be categorised as trusted sources.
Nayak’s views were in line with the government’s approach of enabling access to datasets for trusted AI platforms.