
Artificial intelligence has rapidly moved from computer science textbooks to the mainstream, generating delights such as chatbots that reproduce celebrities’ voices and carry on rambling conversations.
But the technology, which refers to machines trained to perform intelligent tasks, also threatens deep disruption to social norms, entire industries and the fortunes of tech companies. Some experts say it has great potential to transform everything from diagnosing patients to predicting weather patterns – but it could also leave millions out of work or even surpass human intelligence.
Last week, the Pew Research Center released a survey in which a majority of Americans – 52 percent – said they feel more worried than excited about the growing use of artificial intelligence, citing concerns including personal privacy and human control over new technologies.
This year, the proliferation of generative AI models such as ChatGPT, Bard and Bing, all of which are available to the public, has brought artificial intelligence to the forefront. Now, governments from China to Brazil and Israel are trying to figure out how to harness AI’s transformative power while reining in its worst excesses and drafting rules for its use in everyday life.
Some countries, including Israel and Japan, have responded to its lightning-fast growth by clarifying existing data, privacy and copyright protections – in both cases clearing the way for copyrighted material to be used to train AI. Others, such as the United Arab Emirates, have issued vague and sweeping announcements about AI strategy, launched working groups on AI best practices, or published draft legislation for public review and comment.
Still others have taken a wait-and-see approach, even as industry leaders, including OpenAI, maker of the viral chatbot ChatGPT, have urged international cooperation around regulation and oversight. In a statement in May, the company’s CEO and two of its co-founders warned against the “possibility of existential risk” associated with superintelligence, a hypothetical entity whose intelligence would exceed human cognitive performance.
“Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work,” the statement said.
Nevertheless, there are few concrete laws around the world that specifically target AI regulation. Here are some of the ways lawmakers in different countries are trying to address questions about its use.
Brazil has a draft AI law that is the culmination of three years of proposed (and stalled) bills on the subject. The document – released late last year as part of a 900-page Senate committee report on AI – carefully outlines the rights of users who interact with AI systems and provides guidelines for classifying different types of AI according to the risk they pose to society.
The law’s focus on users’ rights puts the onus on AI providers to give users information about their AI products. Users have a right to know they are interacting with an AI – but also a right to an explanation of how an AI made a given decision or recommendation. Users can also contest AI decisions or demand human intervention, particularly if an AI decision is likely to have a significant impact on them, such as systems related to self-driving cars, hiring, credit evaluation or biometric identification.
AI developers are also required to conduct risk assessments before bringing an AI product to market. The highest risk classification refers to any AI system that deploys “subliminal” techniques or exploits users in ways harmful to their health or safety; such systems are prohibited outright. The draft law also outlines possible “high-risk” AI implementations, including AI used in healthcare, biometric identification and credit scoring, among other applications. Risk assessments for “high-risk” AI products are to be published in a government database.
All AI developers are liable for damages caused by their AI systems, although developers of high-risk products are held to an even higher standard of liability.
China has published a draft regulation for generative AI and is seeking public comment on the new rules. Unlike most other countries, however, China’s draft states that generative AI should reflect “socialist core values.”
In its current iteration, the draft rules state that developers “take responsibility” for the outputs created by their AI, according to a translation of the document by Stanford University’s DigiChina project. There are also restrictions on the sourcing of training data; developers are legally liable if their training data infringes on someone else’s intellectual property. The regulation also stipulates that AI services must be designed to generate only “true and accurate” content.
The proposed rules build on existing legislation related to deepfakes, recommendation algorithms and data protection, putting China ahead of many other countries in drafting new AI laws. The country’s internet regulator also announced restrictions on facial recognition technology in August.
China has set dramatic goals for its tech and AI industries: In the “Next Generation Artificial Intelligence Development Plan,” an ambitious 2017 document published by the Chinese government, the authors write that by 2030, China’s AI theories, technologies and applications should “achieve world-leading levels.”
In June, the European Parliament voted to approve what it calls the “AI Act.” Similar to Brazil’s draft law, the AI Act classifies AI into three risk categories: unacceptable, high and limited.
AI systems deemed unacceptable are those considered a “threat” to society. (The European Parliament offers “voice-activated toys that encourage dangerous behavior in children” as one example.) Such systems are banned under the AI Act. High-risk AI must be approved by European authorities before going to market, and also throughout the product’s life cycle. These include AI products related to law enforcement, border management and employment screening, among others.
AI systems deemed to be of limited risk must be appropriately labeled to allow users to make informed decisions about their interactions with the AI. Otherwise, these products escape most regulatory scrutiny.
The act still needs to be approved by the European Council, although parliamentary lawmakers expect the process to be finished later this year.
In 2022, Israel’s Ministry of Innovation, Science and Technology published a draft policy on AI regulation. The document’s authors describe it as “ethical and business-oriented guidelines for any company, organization or government body involved in the field of artificial intelligence” and emphasize its focus on “responsible innovation”.
Israel’s draft policy states that the development and use of AI “must respect the rule of law, fundamental rights and public interests and, in particular, [maintain] human dignity and privacy.” Elsewhere, more ambiguously, it states that “appropriate measures should be taken in accordance with accepted professional concepts” to ensure that AI products are safe to use.
More broadly, the draft policy encourages self-regulation and a “soft” approach to government intervention in AI development. Rather than proposing uniform, industry-wide legislation, the document encourages sector-specific regulators to consider tailored interventions when appropriate, and encourages the government to strive for compatibility with global AI best practices.
In March, Italy briefly banned ChatGPT, citing concerns about how – and how much – user data was being collected by the chatbot.
Since then, Italy has allocated nearly $33 million to support workers at risk of being left behind by digital transformation – including but not limited to AI. About one-third of that amount will be used to train workers whose jobs may become obsolete due to automation. The remaining funds will be directed towards teaching digital skills to unemployed or economically inactive people, to boost their entry into the job market.
Japan, like Israel, has adopted a “soft law” approach to AI regulation: there are no prescriptive rules governing specific ways AI can and cannot be used in the country. Instead, Japan has opted to wait and see how AI develops, citing a desire to avoid stifling innovation.
For now, AI developers in Japan must rely on existing laws – such as those related to data protection – to serve as guidelines. For example, in 2018, Japanese lawmakers amended the country’s Copyright Act to allow the use of copyrighted material for data analysis. Since then, lawmakers have clarified that the amendment also applies to AI training data, clearing the way for AI companies to train their algorithms on other companies’ intellectual property. (Israel has taken the same approach.)
Regulation is not at the forefront of every country’s approach to AI.
For example, in the UAE’s National Strategy for Artificial Intelligence, only a few paragraphs are given to the country’s regulatory ambitions. In short, an Artificial Intelligence and Blockchain Council will “review national approaches to issues such as data governance, ethics and cyber security” and oversee and integrate global best practices on AI.
The rest of the 46-page document is devoted to encouraging AI development in the UAE by attracting AI talent and integrating the technology into key sectors such as energy, tourism and healthcare. The document’s executive summary claims that the strategy is in line with the UAE’s efforts to become “the best country in the world by 2071”.