🇧🇩 Artificial Intelligence: Its Challenges and Prospects in Bangladesh

Bangladesh Defense Forum

Saif

Senior Member
Govt drafts AI policy to tap its potential, tackle concerns





The government has formulated the draft National AI policy as it looks to make the best use of artificial intelligence to raise productivity and spur economic growth while dealing with the concerns presented by the technology spreading at a breakneck pace.

"This policy seeks to harness the benefits of AI while mitigating its risks, fostering innovation, and ensuring that AI technologies serve the best interests of the citizens and the nation as a whole," the draft said.

The Information and Communication Technology Division prepared the National AI Policy and published it recently.

The policy is expected to address the legal, ethical, and societal implications of AI effectively and efficiently.

It has placed a significant emphasis on public awareness and education, enlightening citizens about AI and its far-reaching benefits.

The objectives of the policy are to accelerate equitable economic growth and productivity through AI-driven optimisation, forecasting, and data-driven decision-making, and ensure efficiency and accessibility of public services through AI-enabled personalisation.

The draft comes as countries around the world race to prepare to deal with the changes being brought about by the fast-evolving technology.

The International Monetary Fund (IMF) has published its new AI Preparedness Index Dashboard for 174 economies, based on their readiness in four areas: digital infrastructure, human capital and labour market policies, innovation and economic integration, and regulation.

It showed Bangladesh's score stands at 0.38 compared to 0.49 of India, 0.37 of Pakistan, 0.35 of Nepal, 0.44 of Sri Lanka, 0.77 of the US, 0.64 of China, and 0.73 of Australia. Developed countries have a score of at least 0.7.
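The cited scores are easier to compare side by side. A minimal sketch in Python, using only the figures quoted above; the 0.7 cut-off is the developed-country benchmark mentioned in the article:

```python
# AI Preparedness Index scores quoted above (IMF, scale 0-1)
scores = {
    "US": 0.77, "Australia": 0.73, "China": 0.64,
    "India": 0.49, "Sri Lanka": 0.44, "Bangladesh": 0.38,
    "Pakistan": 0.37, "Nepal": 0.35,
}

DEVELOPED_BENCHMARK = 0.70  # threshold cited for developed countries

# Rank economies and flag those at or above the benchmark
for country, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    flag = "meets benchmark" if score >= DEVELOPED_BENCHMARK else ""
    print(f"{country:<11} {score:.2f} {flag}")

# Bangladesh's gap to the developed-economy benchmark
gap = DEVELOPED_BENCHMARK - scores["Bangladesh"]
print(f"Bangladesh gap to benchmark: {gap:.2f}")
```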

In Bangladesh, the government plans to adopt data-driven policy-making in every sector through AI-supported analytics and insights and nurture a skilled workforce that can utilise and build AI technologies.

It wants to embed AI in education and skills development so that the largely young population can meet the demands of the future.

The draft said the country will also foster a culture of AI research and innovation through public and private funding. It will ensure the development of, and adherence to, a robust ethical framework by establishing regulatory measures that uphold human rights in AI development and deployment.

The ICT Division, in collaboration with relevant ministries, industry, academia, and civil society, will take necessary steps to establish the institutional framework for the policy implementation, officials said.

It will set up an independent National Artificial Intelligence Center of Excellence (NAICE).

The NAICE will be responsible for coordinating and monitoring AI initiatives using key performance indicators, and for evaluating their social, economic, and environmental impacts, guiding adjustments for maximum benefit and risk mitigation.

It will facilitate collaboration and knowledge-sharing among stakeholders, including government agencies, industry, academia, and civil society. It will ensure that any measures taken to regulate the technology are proportional to the risk and balanced to encourage innovation.

The government will form a high-level national AI advisory council to guide the implementation of sectoral AI initiatives.

The draft said legal and regulatory frameworks are necessary for implementing the AI policy.

The National Strategy for AI will be framed, and it will be updated every two years in accordance with the advancement of AI worldwide.

The strategy will include data retention policies, deal with the legal issues of data governance and ownership and focus on interoperability and data exchange.

According to IMF's economist Giovanni Melina, AI can increase productivity, boost economic growth, and lift incomes. However, it could also wipe out millions of jobs and widen inequality.

IMF's research has shown how AI is poised to reshape the global economy. It could endanger 33 percent of jobs in advanced economies, 24 percent in emerging economies, and 18 percent in low-income countries.
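To see what these exposure shares mean in absolute terms, here is an illustrative calculation. The percentages are the IMF figures quoted above, but the labour-force sizes are hypothetical round numbers, not real statistics:

```python
# Exposure shares are the IMF figures quoted above; the labour-force
# sizes are HYPOTHETICAL round numbers, used only for illustration.
exposure_share = {"advanced": 0.33, "emerging": 0.24, "low_income": 0.18}
workforce_millions = {"advanced": 500, "emerging": 2000, "low_income": 400}  # hypothetical

# Convert each share into an absolute headcount (in millions)
exposed_millions = {
    group: share * workforce_millions[group]
    for group, share in exposure_share.items()
}

for group, count in exposed_millions.items():
    print(f"{group:<10} ~{count:.0f} million jobs potentially exposed")
```

Even with a smaller exposure share, a large labour force can mean more exposed workers in absolute terms, which is why the IMF pairs the shares with readiness scores.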

But, on the brighter side, it also brings enormous potential to enhance the productivity of existing jobs for which AI can be a complementary tool and to create new jobs and even new industries.

Melina said most emerging market economies and low-income countries have smaller shares of high-skilled jobs than advanced economies, and so will likely be less affected and face fewer immediate disruptions from AI.

"At the same time, many of these countries lack the infrastructure or skilled workforces needed to harness AI's benefits, which could worsen inequality among nations."

The economist said the policy priority for emerging markets and developing economies should be to lay a strong foundation by investing in digital infrastructure and digital training for workers.​
 

Vision for an AI law in Bangladesh

VISUAL: STAR

We may not notice it at first glance, but the world is going through a technological renaissance in the form of artificial intelligence (AI). In the bustling heart of South Asia, Bangladesh stands on the cusp of this transformation. As AI transforms industries worldwide, the nation faces an urgent call to draft forward-thinking AI policy guidelines. Imagine a future where Dhaka's traffic is managed by smart systems, farmers use AI to boost crop yields, and healthcare is revolutionised by predictive analytics. This thought might seem far-fetched today, but that promising horizon can arrive sooner rather than later if Bangladesh establishes a policy guideline that navigates its particular challenges and responsibilities. Bangladesh must ride the AI wave with a blend of ambition and caution, ensuring that innovation does not eclipse ethics and inclusivity.

Crafting Bangladesh's AI policy is a bit like preparing the perfect biriyani: it requires just the right mix of ingredients to create something truly remarkable. The focus should be on blending robust international cooperation to ensure trustworthy AI, with a generous helping of digital infrastructure to provide the computing power needed for innovation. Stir in public awareness and civic engagement to keep society informed and involved, sprinkle generously with investments in public research capabilities, and don't forget to season with education and skill development to prepare the workforce for future challenges. Finally, top it off with initiatives to boost connectivity and digitalisation, and a strategic vision aligned with the Fourth Industrial Revolution (4IR) to ensure sustainable growth. When these elements come together, Bangladesh can set up a tech policy that's not only cutting-edge, but also inclusive and forward-thinking. So, what elements should be on the radar of policymakers when crafting this pivotal piece of legislation?

First of all, AI's transformative potential can only be fully realised if there is a skilled workforce to harness it. Thus, the AI Act must prioritise comprehensive education and training programmes. Integrating AI into school curricula, establishing vocational training programmes, and offering scholarships in AI-related fields are essential steps. This not only prepares the next generation for AI-driven jobs but also ensures that the current workforce is not left behind.

To position Bangladesh as a leader in AI innovation, substantial investments in AI research and development (R&D) are crucial. This involves funding interdisciplinary research collaborations between universities, research institutions, and private industries. The goal should be to advance AI capabilities in key sectors like healthcare, agriculture, and manufacturing. Think of AI research centres and innovation hubs sprouting across the country, nurturing a vibrant ecosystem of startups and entrepreneurs. We also need to build digital infrastructure. Consider, for example, the AI policy recently adopted by Vietnam: by 2030, it aims to build an "AI ecosystem," meaning the fluid incorporation of AI into the commercial and research sectors. This ecosystem would comprise components that work together to foster innovation, uphold ethical standards, and drive economic and societal benefits. Most importantly, to foster AI innovation, the Vietnamese government is pursuing a comprehensive strategy focused on human resource development, institution building, research and development, and investment in AI enterprises. This includes teaching basic AI and data science skills through short- and medium-term training courses for students and career-changing workers, attracting domestic and foreign resources to build training centres, and establishing key research hubs at leading universities. Ultimately, the government wants to promote and attract investment capital for the growth of AI enterprises and brands in Vietnam.

Similarly, for AI to flourish, Bangladesh needs to prioritise investment in high-speed internet connectivity, cloud computing services, and data centres. This infrastructure will provide the computing power and data storage capabilities needed for AI applications across various sectors. Building it is another difficult but manageable task, given the right collaborative efforts. If the government can encourage collaborations between itself, the private sector, and academia, Bangladesh can create fertile ground for AI advancements tailored to local needs. Beyond local collaboration, Bangladesh should also establish partnerships with other countries, international organisations, and tech companies to develop shared principles and standards for trustworthy AI. This can position the country as a trustworthy player in the global AI landscape, attracting international investments and collaborations.

We have to remember that, in the current AI-driven world, data is the new oil. However, with great data comes great responsibility. Ensuring a robust data security infrastructure is paramount. The AI Act should mandate the use of cutting-edge encryption technologies, continuous monitoring systems, and regular security audits to protect sensitive information from cyber threats. This is not just about safeguarding individual privacy but also about building trust in AI systems. A data breach in a healthcare system powered by AI could have catastrophic consequences, undermining public confidence. Therefore, a comprehensive focus on data security is non-negotiable.
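One concrete building block behind "continuous monitoring systems and regular security audits" is tamper-evident logging. A minimal sketch using only Python's standard library: each audit-log entry carries an HMAC tag so any later alteration is detectable. The key handling is deliberately simplified for illustration; a real deployment would fetch the key from a managed key store rather than generate it in-process.

```python
import hashlib
import hmac
import secrets

# Illustrative only: a real system would load this key from a secure
# key-management service, not generate it at startup.
AUDIT_KEY = secrets.token_bytes(32)

def sign_entry(entry: str) -> str:
    """Return a hex HMAC-SHA256 tag for an audit-log entry."""
    return hmac.new(AUDIT_KEY, entry.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_entry(entry: str, tag: str) -> bool:
    """Constant-time check that an entry has not been altered."""
    return hmac.compare_digest(sign_entry(entry), tag)

# Hypothetical log line for a health-record access
entry = "2024-07-10T12:00:00Z user=clinician42 action=read record=patient-17"
tag = sign_entry(entry)

print(verify_entry(entry, tag))                   # intact entry verifies
print(verify_entry(entry + " tampered", tag))     # any edit is detected
```

`hmac.compare_digest` is used instead of `==` so that verification time does not leak information about how much of a forged tag matches.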

Another thing to note is that AI is a double-edged sword. While it has the potential to drive unprecedented progress, it also poses significant ethical challenges. The AI Act should establish clear guidelines for ethical AI deployment, ensuring transparency, accountability, and fairness. It is very easy to misuse AI to promote misinformation or bias. AI can now create realistic but fake images, videos, and audio (deepfakes), which can be used to spread misinformation, manipulate public opinion, or damage reputations. It also enables fraud and scams, because there are no established guidelines governing the use of AI systems. AI systems can perpetuate and amplify biases present in their training data, leading to discriminatory practices in hiring, lending, law enforcement, and more. So, establishing data protection laws to safeguard personal information and fostering transparency in AI decision-making processes are critical steps toward ethical AI.
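The hiring-discrimination risk mentioned above is often made concrete with a simple disparity metric. A minimal sketch on invented data (every number here is hypothetical): it computes each group's selection rate and the demographic-parity gap that an auditor or regulator might review.

```python
# Hypothetical hiring outcomes: (group, hired?) pairs, invented for illustration.
outcomes = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", True), ("B", False), ("B", False),
]

def selection_rate(group: str) -> float:
    """Fraction of applicants in `group` who were hired."""
    rows = [hired for g, hired in outcomes if g == group]
    return sum(rows) / len(rows)

rate_a = selection_rate("A")        # 3 of 4 hired
rate_b = selection_rate("B")        # 1 of 4 hired
parity_gap = abs(rate_a - rate_b)   # demographic-parity difference

print(f"Selection rates: A={rate_a:.2f}, B={rate_b:.2f}, gap={parity_gap:.2f}")
```

A large gap does not by itself prove discrimination, but it is exactly the kind of measurable signal a transparency requirement in an AI Act could oblige deployers to report.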

All in all, crafting an AI Act for Bangladesh goes beyond technicalities; it centres on people and public engagement. Essential initiatives include public awareness campaigns to educate citizens on AI's benefits and risks, and involving diverse stakeholders (civil society, academia, and industry) in policymaking to ensure transparency and accountability. The Act should also align with Fourth Industrial Revolution (4IR) principles, which emphasise innovation, digitalisation, and interconnectedness. By balancing innovation with regulation and addressing local needs while adhering to global standards, Bangladesh can harness AI's transformative power for an inclusive, ethical, and sustainable digital future.

This is Part II of a two-part series. The first part was published on page 9 of The Daily Star on June 23, 2024.

Warda Ruheen Bristi and Shafin Haque Omlan are research associates at Bangladesh Institute of Governance and Management (BIGM).​
 

Why we need an AI law in Bangladesh

Visual: Star

On March 13, 2024, the European Union (EU) passed the world's first Artificial Intelligence (AI) Act. In an era defined by rapid technological advancement, the rise of AI presents both unprecedented opportunities and complex challenges for nations worldwide, and the EU has become the first entity to formally address those. In Bangladesh, a country in the midst of a rapid technological evolution, the emergence of AI stands as a pivotal moment with far-reaching implications. As the nation seeks to harness the potential of AI for societal advancement and economic growth, the importance of establishing a robust legal framework to govern AI-related issues cannot be overstated.

First, let's talk about the potential of AI. In short, it is vast and varied, touching virtually every aspect of human life. AI has the capability to automate repetitive tasks across various industries, freeing up human workers to focus on more creative and strategic endeavours. In healthcare, AI can assist in diagnosing diseases, analysing medical images, and personalising treatment plans, leading to more accurate and efficient healthcare delivery. Similarly, AI has the potential to revolutionise education through personalised learning experiences, adaptive tutoring systems, and virtual classroom assistants. In transportation, AI-powered autonomous vehicles can enhance road safety, reduce traffic congestion, and improve transportation efficiency. The financial sector can benefit from AI algorithms used for fraud detection, risk assessment, portfolio management, and customer service. Additionally, AI can contribute to environmental conservation efforts by analysing large datasets to monitor and predict environmental changes, optimise resource management, and develop sustainable solutions. In the realm of entertainment, AI-generated content such as music, art and literature can inspire creativity and offer new forms of entertainment.

Already, multiple countries have approved, or are on their way to approving, regulations regarding AI. The EU's AI Act takes a risk-based approach to AI systems, while its earlier Cybersecurity Act established a common cybersecurity certification framework across the bloc and strengthened the role of the European Union Agency for Cybersecurity (ENISA). India has proposed the Digital India Act, 2023, which aims to update and modernise the country's digital governance framework, addressing cybersecurity, data privacy, and ethical AI use. Vietnam has approved a national digital transformation plan, which aims to promote a digital transition in governance, the economy, and society more broadly, and to establish Vietnamese technology firms as global players; several of its goals are to be achieved by 2025. Vietnam has also developed a national strategy for the research, development, and application of AI by 2030, outlining key goals and directives for developing AI technology. It is clear that Vietnam is committed to a digital transition and cannot ignore the role that artificial intelligence will play to that end.

Similarly, our aspirations for a Digital Bangladesh hinge on our ability to navigate the complexities of AI responsibly. When discussing a regulatory framework for Bangladesh, the heart of the matter is the imperative to strike a balance between fostering innovation and safeguarding fundamental rights. A balanced regulatory framework is essential not only for spurring innovation and attracting investment, but also for guarding against the potential risks associated with AI.

Aside from AI's potential, there are also worrying reasons why an AI law is necessary. AI poses several risks to personal and organisational safety, and so must be carefully managed to ensure responsible and ethical use of the technology. For individuals, AI can compromise privacy if systems improperly collect, store, or use personal data without consent or appropriate safeguards. This can lead to identity theft, unauthorised surveillance, and the exploitation of personal information. We are already seeing many examples of deepfake videos in which the faces of famous personalities are used to spread rumours or damage their reputations. If AI is not regulated quickly, these incidents may soon get out of hand; in a country like Bangladesh, where people are somewhat susceptible to rumours, they might spread like wildfire. AI systems may also perpetuate bias or discrimination if not properly designed and vetted, unfairly affecting individuals in areas such as hiring, lending, or legal judgments.

Moreover, in order to shed light on Bangladesh's aspiration to become a digital powerhouse and a hub for innovation, the enactment of an AI law is instrumental in enhancing the country's competitiveness and global standing. By aligning with international best practices and standards, the country can foster international collaboration, attract foreign investment, and strengthen its position in the global AI landscape. The timely development of an AI act will present an opportunity for Bangladesh to showcase its commitment to ethical AI governance and responsible innovation to the rest of the world. By engaging with stakeholders across the government, industry, academia, and civil society, policymakers can leverage diverse perspectives to develop inclusive and forward-thinking AI policies that reflect the country's values and priorities.

Seizing the momentum of the current trend and developing a legal framework for AI-related issues is crucial for Bangladesh's continued progress in the digital age. Aligning with international best practices and standards is essential to enhance the country's competitiveness and credibility in the global AI landscape. By demonstrating a commitment to ethical AI governance, Bangladesh can attract foreign investment, foster international collaboration, and strengthen its position as a leader in responsible AI development.

This article is Part I of a two-part series.

Warda Ruheen Bristi and Shafin Haque Omlan are research associates at Bangladesh Institute of Governance and Management (BIGM).​
 

We need to act on AI now, not have an act for it

Visual: Shaikh Sultana Jahan Badhon
When Bangladesh embarked on its journey towards Digital Bangladesh in 2009, many were sceptical about it. But as time progressed, we all saw how the vision started to become a reality.

This vision, at its core, aspires to create a nation that is adept at solving problems at all spheres of life through innovative application of digital technologies. The government has made it abundantly clear that Artificial Intelligence (AI) is going to play a pivotal role in implementing the Smart Bangladesh vision. Following this vision, the government has recently unveiled a draft National Artificial Intelligence (AI) Policy 2024 for public consultation.

There is a good reason why the government has decided to use AI as the fulcrum for realising the goal of Smart Bangladesh. Unlike other digital technologies, the potential applications of AI are literally all around us. From our personal lives to modernising public service delivery, the scope for AI is limitless.

Be it public transport or AI-driven personal vehicles, personal healthcare solutions or the public healthcare system, individual productivity or national competitiveness, every imaginable aspect of our individual, societal, and national life can be transformed if we smartly apply AI to solve our problems.

But the question is this: how do we facilitate AI to deliver the dividends for us? If we look around, we can see that every country in the world is trying to strike a balance between innovation and regulatory oversight. There is palpable consensus on adopting more of a business-friendly approach to AI regulation, by avoiding excessive restrictions.

The government has been trying to create a pathway for AI in Bangladesh by preparing the National Strategy for AI in 2020, followed by the recent release of the draft AI Policy in 2024. Having read the draft policy, I felt it provides an excellent template for fostering the use of AI in every sector. The institutional framework outlined in the policy for pursuing AI projects is well thought through. On top of that, the sectoral plans for the application of AI provide an excellent starting point.

But what puzzles me is the stated desire of the government to introduce an Act for AI. When we are supposed to allow as much room as possible for our AI practitioners to fully demonstrate their talent, we are planning to limit what they can and can't do along with defined punitive measures through the AI Act. I am certain that this is not how you invite people into the fold of new technology.

As of now, the European Union (EU) is the only entity to have enacted an AI Act. At the heart of the Act is a mandate that AI platforms be monitored or overseen by human beings, not by another AI platform. It's worth noting that many AI experts have termed this a knee-jerk reaction, as they consider a law on AI premature at this stage.

The US does not have a federal law covering AI, nor is there any universal definition of AI. It is currently governed by a mix of decentralised federal and state legislation, industry self-regulation, and the courts. Through an executive order last year, every US government agency was tasked with setting up working groups to evaluate AI, develop regulations, and establish public-private engagement.

In the United Kingdom (UK), the government unveiled its response to the AI Regulation White Paper consultation in February 2024. It has no plans to codify that guidance into law for now, advocating a context-sensitive, balanced approach that uses existing sector-specific laws for AI guidance.

In India, the upcoming Digital India Act is set to focus on regulating high-risk AI applications; no separate AI legislation is planned. Singapore also has no AI legislation, taking a sector-specific approach to governance and regulation. Japan, too, has a relatively hands-off approach and has been encouraging AI development and application across various sectors.

The Association of South East Asian Nations (ASEAN) issued a guide to AI governance and ethics in February 2024. Its national-level recommendations include nurturing AI talent, upskilling workforces, and investing in AI research and development. Australia doesn't have any AI legislation either; the government there is relying on a voluntary ethics framework.

It's worth noting that the core purpose of having a law is to create a framework of dos and don'ts in a particular area, with the option of resorting to the legal system to settle disputes or punish offenders. The question here is: how do we know what is doable and what is not, when we don't have any prior experience with AI in Bangladesh?

Even if we consider enacting a law, we need to ascertain the areas where government regulation is needed, in light of global best practices. AI law or policy considerations should include the use and processing of personal data, privacy, infringement, surveillance, algorithmic bias in customer interactions, data sovereignty, the monitoring of AI-based platforms, cybersecurity, and social norms and values. Most importantly, we need to focus on the fundamental ethical aspects of AI, which are more universally agreed upon than specific AI regulations.

We must realise that innovation is a messy and unstructured process. The key to innovation is a creative mindset that can go beyond conventional thinking to arrive at the simplest solutions to complex problems. Putting barriers in its way through an AI Act is the last thing we need at this moment.

If we want to meet the export earnings target of $5 billion from the ICT sector, we need to help our developers catch up with the rapid pace of AI development globally, instead of scaring them off with an act that comes with punitive measures. More AI regulation risks stifling new start-ups, which lack the resources of the globally dominant platforms. We need to focus on creating a large pool of highly skilled human resources in AI. The draft AI policy provides a baseline on which to embark on this AI journey.

Shahed Alam is a barrister and telecom expert.​
 

Why AI tech needs to be democratised


We must take seriously the legitimate concerns about AI that have been raised. FILE PHOTO: REUTERS

With the introduction of "large language models" (LLMs) in our day-to-day lives, artificial intelligence (AI) systems have experienced a sharp surge in popularity. It is already apparent that the usage of AI systems will drastically impact our professional lives, private lives, and – perhaps most crucially – how we structure and govern our societies. This isn't because algorithms are inherently more innovative than people; instead, they provide economic stability and efficiency in completing many simple and complex tasks at a level that many humans cannot match.

The introduction of AI systems in public administration and the judicial system, as well as their use by private actors in providing certain essential services, raises serious concerns about how to safeguard the sustained protection of human rights, democracy, and respect for the rule of law if AI systems assist or even replace human decision-makers. This contrasts with the general public debate, which focuses on AI technology's economic benefits and drawbacks. The very foundations of liberal democracy, such as elections, the freedom to assemble and establish associations, and the right to hold opinions and to receive or disseminate information, may all be severely impacted by their use.

Recent calls for a ban on AI technology have come from influential voices in the public discourse who believe that the risks it brings exceed its benefits. Though we must acknowledge that the genie is out of the bottle and that there is no practical way to turn back the scientific and technological advancements that have made it possible to develop sophisticated and potent AI systems, we must also take seriously the legitimate concerns about AI that have been raised.

The Council of Europe (CoE), the oldest intergovernmental regional organisation, with 46 member-states and perhaps best known globally for its European Court of Human Rights (ECtHR), began groundbreaking research in 2019 on the viability and necessity of an international AI treaty grounded in its own and other pertinent international legal norms on democratic values, human rights, and the rule of law. The Committee on Artificial Intelligence (CAI), formed for the period 2022-2024, is tasked with developing an AI framework convention that will set out legally binding standards, guidelines, rights, and obligations for the creation, development, application, and decommissioning of AI systems from the perspectives of human rights, democracy, and the rule of law.

It will take a coordinated effort from like-minded states and assistance from civil society, the tech sector, and academics to complete this enormous undertaking. Our hope and ambition is that the Council of Europe's AI framework convention will provide much-needed legal clarity and guarantees of the protection of fundamental rights.

But a genuine setup of standards for the human rights and democratic features of AI systems cannot be restricted to a particular region, because AI technology knows no borders. As a result, the CoE's Committee of Ministers decided to permit interested non-European states that share its goals and ideals to participate in the negotiations, and an increasing number of these states have already signed on or are actively working to join the efforts.

The European Union (EU), which regulates AI systems for its 27 member-states, is also directly involved in the CoE negotiations. The AI Act of the EU and the CoE's framework convention are designed to complement one another when they go into effect, showing how to effectively utilise the joint capabilities and skills of the two European entities. The draft framework convention is aimed at ensuring that the use of AI technology does not result in a legal vacuum regarding the protection of human rights, the operation of democracy and democratic processes, or the observance of the rule of law (a consolidated "working draft" is publicly available at the CoE website for the CAI).

To this end, parties must obligate regulators, developers, providers, and other AI actors to consider dangers to human rights, democracy, and the rule of law from the moment these systems are conceived and throughout their existence. In addition, the legal remedies available to victims of human rights breaches should be adapted to the unique difficulties that AI technologies present, such as their opacity and the difficulty of explaining their decisions.

The treaty will also specifically address the potential risks to democracy and democratic processes posed by AI technology, including the use of so-called deepfakes, microtargeting, and more overt violations of the freedoms of expression, association, and opinion formation, and of the ability to obtain and disseminate information. The framework convention will include enforceable duties for its parties to provide adequate protection against such practices. When developing and employing AI systems that may be used in sensitive contexts, including but not limited to the drafting of laws, public administration, and, last but not least, the administration of justice through the courts of law, it is evident that the fundamental idea of what constitutes a just, liberal, law-abiding society must be respected. The framework convention will also specify the parties' precise obligations in this area.

The draft framework convention, as well as all of CAI's work, prioritises human dignity and agency by taking a Harmonised Risk-Driven Approach (HRDA) to the design, development, use, and decommissioning of AI systems. It is crucial to carefully analyse any potential adverse effects of deploying AI systems in diverse circumstances before getting carried away by the apparent possibilities offered by this technology. Therefore, the proposed framework convention also requires parties to spread knowledge about AI technology and to encourage an informed public discussion about its proper application.

To ensure that as many people as possible profit from AI and other digital technologies and are protected from their misuse, the realistic approach must be to discover responsible methods to use them.

Dr Nafees Ahmad is associate professor at the Faculty of Legal Studies in South Asian University, New Delhi.​
 

Artificial intelligence is still far from being 'intelligent'
Why does Big Tech want an immediate six-month pause on any further development?


An image of a robot taking a picture, generated by AI software Midjourney. SOURCE: REUTERS

Is artificial intelligence (AI) really "intelligent" in its creativity and decision-making? Or is it stealing others' works and perpetuating existing human biases?

This January, three artists filed a class-action lawsuit in the Northern California District Court against the AI image generators Midjourney, Stable Diffusion, and DreamUp. They claimed these companies use their artwork to generate new images, drawing on a publicly available database of images called LAION-5B that includes theirs, even though the artists never consented to having their copyrighted works included in the database, were not compensated for the use of their works, and were not credited when AI images were produced using them.

AI is literally scraping through billions of existing works produced by raw human labour to "produce newer ones." That's why several experts are already asking whether AI is at all "artificial" or "intelligent."

Tech writer Evgeny Morozov has argued that while the early AI systems were mostly rules and programmes, and could have some "artificiality," today's AI models draw their strength entirely from the works of actual humans. Built on vast amounts of human work stored at mammoth energy-hungry data centres, AI is not "intelligent" in the way human intelligence is as it cannot discern things without extensive human training, as Microsoft's Kate Crawford has pointed out.

Even in decision-making, AI models can have strong biases, as a 2019 study published in Science confirmed. An algorithm widely used in US hospitals had been systematically discriminating against Black patients: because less money had historically been spent on their care, the algorithm, trained on cost data, assigned them lower risk scores and placed them in a lower-risk group regardless of their actual medical condition. In another case, a painting bot returned the image of a salmon steak in water when asked to draw a swimming salmon. The AI model could not make a simple judgement that even a toddler could.
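In the hospital case, researchers traced the bias to the algorithm scoring patients by historical healthcare costs, a proxy that understates the true need of patients who historically received less care. A toy sketch, with entirely invented numbers, of how ranking by such a proxy reproduces the bias:

```python
# Toy illustration: ranking patients by historical cost (a proxy for need)
# reproduces bias when one group's costs understate its true need.
# All numbers here are hypothetical, not taken from the study.

patients = [
    # (group, true_need, historical_cost_usd)
    ("A", 9, 9000), ("A", 7, 7000), ("A", 5, 5000),
    ("B", 9, 5400), ("B", 7, 4200), ("B", 5, 3000),  # same need, lower cost
]

def high_risk_by_cost(patients, k=3):
    """Pick the k patients the proxy-based algorithm flags as highest risk."""
    return sorted(patients, key=lambda p: p[2], reverse=True)[:k]

def high_risk_by_need(patients, k=3):
    """Pick the k patients with the highest true medical need."""
    return sorted(patients, key=lambda p: p[1], reverse=True)[:k]

print([p[0] for p in high_risk_by_cost(patients)])  # cost proxy over-selects group A
print([p[0] for p in high_risk_by_need(patients)])  # true need includes both groups at the top
```

The two rankings disagree even though both groups contain patients with identical need; the proxy alone produces the skew.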

However, despite AI being nowhere near "intelligent," recent developments, especially the release of ChatGPT in November last year, have raised serious concerns about the technology's effects on human society. Renowned tech experts have published an open letter calling for an immediate six-month pause on AI development. Its signatories include many big names and AI heavyweights, including Elon Musk of Tesla, Emad Mostaque of Stability AI, Sam Altman of OpenAI, Demis Hassabis of Google's DeepMind, and Kevin Scott of Microsoft. Altman even advised the US government to issue licences to trusted companies (does this mean only Big Tech?) to train AI models.

Is this call for an immediate pause coming from genuine concern for human well-being? Or is there a commercial motive, as Michael Bennett, a PhD student at Australian National University (ANU) has pointed out? Potentially, AI can generate an enormous amount of wealth for whoever controls it. Let's try to understand the premise of the call.

ChatGPT isn't a research breakthrough; it's a product based on open research that is already a few years old. The only difference is that the technology was previously not widely available through a convenient interface. Smaller entrepreneurs will soon develop better and more efficient AI models at much lower cost, some of which are already available on GitHub, a popular repository for open-source software. That worries Big Tech, as a leaked Google internal memo made abundantly clear.

The long memo from a Google researcher said, "People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality…We Have No Moat." Licenses would be a "kinda moat," as Stability AI's CEO Emad Mostaque puts it bluntly, moat being jargon for a way to secure a business against competitors.

AI Now Institute, a research non-profit that addresses the concentration of power in the tech industry, highlights the perils of unregulated AI in its April 2023 report: the AI boom will make the powerful Big Techs even more powerful. AI models depend on vast amounts of data and the super-fast computing power to process it, both of which only Big Tech can afford. Without access to these resources, no entrepreneur or researcher can develop a meaningful AI application, as an MIT Technology Review article elaborates.

Yes, we need regulations for AI development, and a pause if necessary, but not for the reasons mentioned in the open letter. It's to ensure that AI technology remains open source and democratic.

The other reason AI should be regulated is the way social media platforms have used it to fuel gender bias and extreme polarisation, and to play on social divisions, resulting in unspeakable violence on a massive scale (such as in Myanmar, via Facebook). AI models will amplify both misinformation (unintentional inaccuracies) and disinformation (deliberately false information) simply because they are trained on such data to produce more data (the "model cannibalism" effect). Large language models can also keep repeating fabricated and false information because of a phenomenon called "hallucination," which the independent watchdog NewsGuard has found in several online news portals.

Intentional or otherwise, all of this could be quite handy for manipulating public opinion or creating biases that benefit those in power. That makes it even more necessary to regulate AI. To ensure that the benefits of AI reach everyone, humans must always remain on top of it.

Dr Sayeed Ahmed is a consulting engineer and the CEO of Bayside Analytix, a technology-focused strategy and management consulting organisation.​
 

Why Bangladesh should invest in artificial intelligence

In the age of 4IR, investing in artificial intelligence (AI) would be the right move for Bangladesh to accelerate its economic growth. File Photo: AFP

In the 1970s, American sociologist and economic historian Immanuel Wallerstein (1930-2019) proposed an approach to view the global economic system as an interplay between three groups of countries: core, semi-periphery, and periphery countries. The core countries possess the highest levels of skills and knowledge and the largest amount of capital. The semi-periphery countries serve this group with lower-skill, labour-intensive production and raw materials. The periphery countries, in turn, service both groups with even lower skill levels and more labour-intensive production methods. The approach later came to be known as the World Systems Theory.

The system is dynamic: a country may move up or down the hierarchy depending on its technology, capital, or knowledge. Such movements involve fundamental shifts in a country's social and economic systems, e.g. production, distribution, learning, and skill level. For example, India was once an agriculture-based economic powerhouse, and European traders clamoured to import its products. But as Europe industrialised, the importers became exporters, and India's agriculture and home-based small industries drastically declined. The money extracted from India fuelled the First Industrial Revolution (late 18th century), funded research and development, and expanded the Western countries' knowledge base. Yale, a highly regarded American university, benefitted from the donations of Elihu Yale (1649-1721), who earned a fortune from the slave trade in India.

The second (late 19th century) and third (mid-20th century) industrial revolutions soon followed on the back of the first. Through both, the status of countries remained generally static; the core countries stayed ahead of the others primarily because of their control over capital, in addition to knowledge and skill.

The world is now going through the Fourth Industrial Revolution (4IR). According to Klaus Schwab, founder and executive chairman of the World Economic Forum (WEF), the 4IR has transformed the world with an entirely new production, management, and governance system. It can potentially alter Wallerstein's World Systems Theory, because skill and innovation will determine a society's place in the future, reducing dependence on capital. It thus opens new opportunities for the non-core emerging economies to move up Wallerstein's ladder. Schwab added that artificial intelligence (AI) would be a crucial driver of the 4IR. A good thing about AI is that emerging economies can benefit from this technology without cost-prohibitive investments. The International Finance Corporation (IFC) highlights the same point with ideas and case studies on emerging economies under its thought leadership programme. Below are some of those case studies.

AI for emerging economies

Any effective poverty alleviation initiative needs data to identify vulnerable groups. However, the unavailability of quality data often leads to poorly designed interventions—such as incorrect identification of a vulnerable group—and their eventual failure. AI can analyse satellite images to extract relevant information, such as distance from the nearest water sources or the urban market, crop status, and other relevant variables for detecting vulnerability.
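As a concrete illustration of the idea, here is a minimal sketch of how a few satellite-derived features per village might feed a simple nearest-centroid classifier. All feature names and values are invented for illustration; a real system would use far more data and a learned model:

```python
import math

# Hypothetical features per village, e.g. derived from satellite imagery:
# (distance to water in km, distance to market in km, crop-health index 0-1)
training = {
    "vulnerable":     [(12.0, 30.0, 0.3), (9.0, 25.0, 0.4)],
    "not_vulnerable": [(2.0, 5.0, 0.8), (3.0, 8.0, 0.7)],
}

def centroid(points):
    """Average each of the three features across the labelled examples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(village):
    """Assign the label whose centroid is nearest in feature space."""
    return min(centroids, key=lambda lab: math.dist(village, centroids[lab]))

print(classify((10.0, 28.0, 0.35)))  # far from water/market, poor crops -> vulnerable
```

The point is not the algorithm but the pipeline: imagery becomes a handful of numbers per village, and even a very simple model over those numbers can flag candidates for a targeted intervention.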

Bengaluru, in Southern India, is experimenting with a system to monitor real-time camera feeds to control traffic lights. In Rwanda, commercial drones are flying medical supplies, such as blood, to remote locations faster than road transport. AI can correlate data from mobile phones with financial affordability, education level, and health status. Such data will allow mobile applications to deliver microlending, tailored education, disease diagnosis, and medication advice. With Natural Language Processing (NLP) tools, AI can cross literacy barriers and communicate directly with an individual in any language.
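The microlending idea above can be sketched as a toy affordability score computed from mobile-phone metadata. The feature names, weights, and approval threshold here are all invented; a real lender would learn them from repayment data:

```python
# Toy sketch: scoring loan affordability from mobile-phone metadata.
# Features, weights, and the threshold are invented for illustration.

WEIGHTS = {
    "monthly_topup_usd": 0.5,      # regular top-ups suggest steady income
    "active_days_per_month": 0.3,  # consistent phone usage
    "mobile_money_txns": 0.2,      # engagement with digital finance
}

def affordability_score(profile):
    """Weighted sum of normalised features, scaled to 0-100."""
    normalised = {
        "monthly_topup_usd": min(profile["monthly_topup_usd"] / 20.0, 1.0),
        "active_days_per_month": profile["active_days_per_month"] / 30.0,
        "mobile_money_txns": min(profile["mobile_money_txns"] / 50.0, 1.0),
    }
    return round(100 * sum(WEIGHTS[k] * normalised[k] for k in WEIGHTS), 1)

applicant = {"monthly_topup_usd": 12, "active_days_per_month": 27, "mobile_money_txns": 35}
score = affordability_score(applicant)
print(score, "approve" if score >= 50 else "refer to manual review")
```

The appeal for emerging economies is that the inputs already exist on every feature phone; no credit bureau or collateral history is required.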

Options for Bangladesh

MIT professors Erik Brynjolfsson and Andrew McAfee believe that technology will create abundance, but not everyone will benefit equally. Those with talent will be more likely to secure the high-skilled, high-pay jobs, leaving the low-skilled, low-pay ones for the rest. The 4IR's impact on societies will be determined not by technologies, but by the choices one makes. What choice will Bangladesh make?

So far, Bangladesh's economy has been heavily dependent on low-cost products, such as garments—earning more than 80 percent of total annual exports, according to BGMEA—and remittance from low-skilled migrant workers—over USD 24 billion in 2020-21, according to Bangladesh Bank. Should it continue providing low-cost production and labour? Or can it train its abundant young population and make use of the opportunities presented by the 4IR?

How Bangladesh can benefit from 4IR

A Brac study, titled "Youths of Bangladesh: Agents of Change," offers some interesting insights. Bangladeshi youths are not yet prepared to seize the opportunities offered by the 21st century (i.e. the 4IR), and their potential remains vastly unrealised. The country has only about 600,000 tech freelancers, although a whopping one-third of its 163 million people (World Bank's 2019 estimate) are between 15 and 35 years old. With the right skills and investment, these youths could become game-changers.

Bangladesh adopted its AI strategy in March 2020, although there is no visible follow-up yet. China adopted its AI development plan in July 2017. Within merely four years, the sheer scale of China's drive towards AI implementation is mind-boggling, as the think tank New America reported in "From Riding a Wave to Full Steam Ahead." China's government entities, universities, research institutes, local bodies, and corporations are spearheading its AI vision of becoming the global leader by 2030. A Forbes article already views China as the world's first AI superpower.

But Bangladesh is not China. The two countries' social, political, and economic systems are vastly different, and Bangladesh must find its own path to reap the benefits of AI technology. Given its focus on science and technology, Bangladesh can start by setting up a few dedicated AI research institutes and attracting top talent to work in them. It can initiate AI-based research programmes targeting local problems such as Bangla NLP, manufacturing process automation, farming support, tailored education, and healthcare services for remote populations. A low-cost production base and unskilled labour will soon become redundant, just as horses no longer pull carts or carry coal from the mines. The only way to remain relevant is to adopt technology for faster and more equitable growth. Bangladesh cannot afford to miss the opportunity that the 4IR offers.

Dr Sayeed Ahmed is a consulting engineer and the CEO of Bayside Analytix, a tech-focused strategy consulting organisation.​
 

AI's ethical challenges require a multifaceted approach: Palak



There needs to be a multifaceted approach to address the ethical challenges posed by artificial intelligence (AI), said State Minister for ICT Division Zunaid Ahmed Palak.

AI should be used to close the gap on digital divides and empower society, rather than worsen existing inequalities, he said.

He also called for robust policy frameworks, regulatory measures and international cooperation to address these challenges.

The minister made these remarks at a "National Stakeholder Consultation on Assessing AI Readiness of Bangladesh", organised by the ICT Division in partnership with Unesco and Aspire to Innovate (a2i) at the ICT Tower in the capital recently.

The event highlighted the country's proactive approach in integrating AI to achieve its Sustainable Development Goals, according to a press release from the ICT Division.

The government is focusing on capacity building and regulatory frameworks and policies that ensure the ethical deployment of AI technologies, it read.

This is being achieved through collaborations with international organisations such as the United Nations Educational, Scientific and Cultural Organization (Unesco) and the United Nations Development Programme (UNDP), it added.

Md Shamsul Arefin, secretary of the ICT division, said AI can positively contribute to society through the ethical use of its transformative powers.

Md Mahmudul Hossain Khan, secretary on coordination and reforms to the Cabinet Division, stressed the significance of identifying gaps, opportunities and challenges in AI adaptation to formulate effective and sustainable strategies.

The event also featured insights from international representatives.

Charles Whiteley, ambassador and head of delegation of the European Union in Bangladesh, and Huhua Fan, OIC head of the Unesco Office in Dhaka, noted the importance of a comprehensive evaluation of AI readiness.

They opined that legal, social, cultural, scientific, economic and technical dimensions should be taken into consideration in this regard.

The event also saw discussions on integrating safe, trusted, and ethical AI considerations into strategies across various sectors, including education, transportation and agriculture.

The participants attended panel discussions and sessions that focused on the ethical implications and societal impact of AI technologies.​
 

Can Bangladesh leverage AI for inclusive growth?


The adoption of artificial intelligence in Bangladesh is still in its infancy, both for AI solution providers and their clients. Image: Possessed Photography/ Unsplash.

Artificial Intelligence (AI) has emerged as a transformative force in the global economy, positioning itself as a cornerstone of the fourth industrial revolution. From theory to practical solutions, AI has proliferated over the last decade. Defined as a simulation of human intelligence, AI combines technologies like machine learning, natural language processing and robotics to solve business, economic and social problems.

The impact of AI on developing economies can be illustrated through the experience of India. A 2023 Ernst & Young report estimated that generative AI could contribute approximately USD 1.2-1.5 trillion to India's economy over the next seven years, potentially increasing GDP by 5.2-7.9 percent. In India, AI is notably used for fraud detection and risk management in financial services and for personalised learning in education.

Companies like HDFC Bank use AI algorithms to analyse transaction patterns and detect anomalies, thereby preventing fraud. Likewise, AI-powered platforms like Byju's offer personalised learning experiences for students, adapting to their learning pace and style. This has democratised access to quality education, helping millions of students across the country improve their academic performance.

What sectors in Bangladesh can be impacted by leveraging AI?

According to the World Bank, Bangladesh's average growth rate over the past decade was 6.6 percent, with 5.8 percent in 2023. PwC has estimated AI's total contribution to the global economy at USD 15.7 trillion by 2030. Given AI's estimated contributions to the Indian economy, AI could potentially generate billions of dollars for the Bangladesh economy.

There are four sectors in Bangladesh where AI can create a critical impact. In healthcare, it can enhance diagnostic accuracy, assist in early disease detection, manage patient data, and personalise treatment plans. In agriculture, AI-driven solutions can improve crop yields, optimise resource management, and contribute to food security. In manufacturing, particularly the RMG sector, which accounts for 80 percent of export revenue, AI can streamline production processes, enable predictive maintenance, and facilitate smart quality control; AI integration in RMG can also optimise supply chains, forecast demand, and enable personalised designs. In financial services, AI can automate processes, improve fraud detection, support data-driven trading decisions, and enhance financial inclusion.

Across these sectors, AI technologies such as machine learning, computer vision, and natural language processing can analyse large datasets, identify patterns, and make data-driven decisions, ultimately increasing efficiency, reducing costs, and driving innovation in Bangladesh's economy.

Prominent Bangladeshi organisations are already at the forefront of AI innovation. For example, Intelligent Machines (IM) is a leading AI company in Bangladesh dedicated to using AI capabilities to solve problems and drive efficiency across various sectors. IM has successfully implemented AI in companies across telecom, financial institutions, and fast-moving consumer goods. Their AI-based services are reported to provide solutions with over 90% accuracy. The results that IM has provided thus far through AI integration are noteworthy. Unilever has achieved a 260% stretch target in 2021 using Fordo, a precision marketing AI product. BAT gained 253% improvements in brand campaign execution accuracy in 2021 using Shobdo, a speech recognition AI product. bKash gained 76% productivity and a 15% monthly onboarding growth rate with the help of Nimonton and Biponon, two retail AI products. IDLC Finance processes CIB reports in under 30 minutes using Dharapat, a FinTech AI product. Finally, Telenor saved 92.5% of the cost in completing 25 million KYCs in Myanmar using Borno and Chotur, two document verification AI products.

What are the challenges in AI implementation, integration and regulation?

The adoption of artificial intelligence in Bangladesh is still in its infancy, both for AI solution providers and their clients. Companies are reluctant to embrace AI solutions, not only due to a lack of infrastructure but also because of a shortage of understanding of these solutions. There may also be a reluctance to embrace AI solutions due to data privacy concerns. Similarly, the local growth of any AI-based service provider hinges on the readiness of consumers to adopt the technology. Finally, data availability is also a concern since, without data synthesising, the precision of AI solutions depends on the amount of relevant data fed to AI bots for machine learning.

Given this scenario, it is challenging for the government to craft policies that properly regulate the use of AI. The dilemma becomes whether the government should allow AI to proliferate for the sake of innovation or whether strong regulations should be in place well beforehand to uphold data privacy. Perhaps the best practice would be to balance innovation and regulation equally.

What are the prospects of AI in Bangladesh?

AI is catalysing change, enhancing productivity and efficiency, fostering innovation, and creating new avenues for growth. By investing in AI education and infrastructure, Bangladesh can position itself as a hub for innovation, attracting investment and talent while unlocking new opportunities for socioeconomic development.

The ICT Division of the Bangladesh government has drafted a National AI Policy to address the challenges of AI adoption and implementation. The policy spans ten sectors, including telecommunications, data governance, environment, energy, and climate change. It introduces a robust framework for ethics, data privacy, and security, proposing the establishment of an independent National AI Center of Excellence and a High-Level National AI Advisory Council to facilitate and regulate AI services. The policy also provides detailed implementation plans for government ministries, academia, and private institutions, with the objective of continuous monitoring, evaluation, and alignment with global advancements. It also addresses other challenges in depth, offering specific mitigation strategies for data privacy, cybersecurity, and risk management.

Comparing AI's potential impact on Bangladesh with that on other developing economies, such as India and several African countries, reveals similar opportunities for economic evolution. As AI reaches into every corner of society, education and awareness of its usage and benefits are paramount. Adopting best practices, investing in infrastructure, and fostering a culture of innovation will be crucial to harnessing AI's benefits.

Rafsan Zia is a Business Consultant at LightCastle.​
 

Yunus voices concern over development of 'autonomous intelligence'


Photo: PID

Chief Adviser Prof Muhammad Yunus yesterday expressed caution over the development of "autonomous intelligence" that may pose threats to human existence.

"As the scientific community and the world of technology keeps moving on developing 'autonomous intelligence' -- artificial intelligence that propagates on its own without any human intervention -- we all need to be cautious of the possible impact on every human person or our societies, today and beyond," he told the 79th session of the United Nations General Assembly (UNGA) in New York.

Delivering his speech in Bangla, Prof Yunus said many have reasons to believe that unless autonomous intelligence develops in a responsible manner, it can pose threats to human existence.

"We are particularly enthused with the emergence of the artificial intelligence tools and applications. Our youth are excited with the prospect of fast-unfolding generative AI. They aspire to walk and work as global citizens," he said.

The chief adviser said the world needs to ensure that no youth in countries like Bangladesh get left behind in meaningfully reaping benefits from the AI-led transformation.

The world simultaneously needs to ensure that the development of artificial intelligence does not diminish the scope or demand for human labour, he said.

He said nearly two and a half million Bangladeshis enter the labour market every year. "In a large population where nearly two-thirds is young, Bangladesh is challenged to make learning suited to meet the needs of today and tomorrow," he added.

Prof Yunus observed that the world of work is changing where a younger person has to adapt constantly, re-skill, and adopt newer attitudes.

"As Bangladesh is set to graduate as a middle income country, we reckon the vital need to secure ourselves in terms of learning and technology," he added.

He said newer forms of collaboration are needed where global business and knowledge-holders connect to people's needs.

International cooperation should create space for developing countries in ways that can bring transformative applications or solutions for jobs, endemic socioeconomic challenges, or livelihoods, the chief adviser added.

On public health, he said that as Bangladesh leads the negotiations on a global pandemic treaty at the WHO, it urges convergence on the treaty's key provisions: adequate international cooperation, financing for public health systems, technology transfer, research and development, and diversified production of medical diagnostics, vaccines, and therapeutics.

Stressing the need for declaring vaccines a 'global public good' that is free from the rigours of intellectual property, the Nobel Laureate said that these are also crucial for combating the scourge of non-communicable diseases.

Referring to this year's golden jubilee celebration of Bangladesh's partnership with the United Nations, he said it has been a shared journey of mutual learning.

"In our modest ways, Bangladesh contributed towards promoting global peace and security, justice, equality, human rights, social progress and prosperity. And, indeed in building a rules-based international order," he said.​
 

Can AI help mitigate long-pending legal cases?



Bangladesh's legal system is overwhelmed by a staggering backlog of cases, leaving many people waiting for justice for years, sometimes even decades. The legal process has become slow and frustrating, with nearly 49 lakh (4.9 million) cases pending and a severe shortage of judges, according to a newspaper report. Public trust in the judiciary has eroded, and finding solutions to this crisis has become critical. Media reports say about 70 percent of cases are stuck at the witness-hearing stage for three or more years, while 22 percent are stuck at the investigation stage for a year or more. One potential answer lies in artificial intelligence (AI), which could bring much-needed efficiency and innovation to these challenges.

SCALE OF THE PROBLEM

With only one judge for every 95,000 citizens, Bangladesh's courts are stretched to the limit, and cases drag on for years. This inefficiency is not just inconvenient; it is a denial of timely justice, which affects individuals, families, and businesses alike.

HOW AI CAN HELP

Automating document creation and management:
AI can help lawyers draft legal documents quickly and accurately. By generating first drafts of contracts or legal papers using templates, AI tools can save lawyers valuable time. Instead of being bogged down in repetitive tasks, legal professionals can focus on more complex issues, helping to move cases forward faster.
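The template-driven drafting described above can be sketched in a few lines. The clause text and field names below are invented examples, not an actual legal form:

```python
from string import Template

# Minimal sketch of template-based first-draft generation.
# The notice wording and the required fields are invented for illustration.
NOTICE = Template(
    "To: $respondent\n"
    "You are hereby notified that $claimant has filed case no. $case_no "
    "before the $court, with a hearing scheduled on $hearing_date."
)

REQUIRED = ("respondent", "claimant", "case_no", "court", "hearing_date")

def draft_notice(fields):
    """Fill the template, refusing to emit a draft with missing fields."""
    missing = [k for k in REQUIRED if k not in fields]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return NOTICE.substitute(fields)

print(draft_notice({
    "respondent": "Mr. Rahman",
    "claimant": "Ms. Akter",
    "case_no": "123/2024",
    "court": "Dhaka District Court",
    "hearing_date": "12 January 2025",
}))
```

Even this trivial mechanism illustrates the time saving: the lawyer reviews and edits a complete first draft instead of typing boilerplate from scratch.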

Enhancing legal research: Legal research is time-consuming, but AI can change that. AI-powered tools can skim through enormous amounts of data, case laws and statutes, providing quick access to relevant information.
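The core of such a research tool is ranked retrieval. A deliberately minimal sketch, with invented case summaries, of ranking documents by query-word overlap (production tools use far richer language models):

```python
# Minimal sketch of keyword-based retrieval over case summaries,
# the kind of lookup an AI research assistant automates at scale.
# The case names and summaries are invented examples.

cases = {
    "Case 101": "land ownership dispute over inherited property boundaries",
    "Case 202": "breach of contract in garment export shipment delay",
    "Case 303": "property inheritance claim contested by siblings",
}

def search(query, corpus):
    """Rank documents by how many query words each one contains."""
    words = set(query.lower().split())
    scored = [
        (sum(w in doc.lower().split() for w in words), name)
        for name, doc in corpus.items()
    ]
    # Highest score first; drop documents that match nothing.
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(search("property inheritance", cases))  # most relevant case first
```

Scaled to millions of judgments and statutes, the same idea, match a query against indexed text and return the best candidates, is what turns days of manual research into minutes.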

Task management and scheduling: AI can take over the mundane yet critical task of managing lawyers' and judges' schedules. It can remind them of deadlines, upcoming court dates, and pending tasks.

Training junior lawyers: AI can act as a virtual mentor for junior lawyers, helping them learn faster. AI tools can simulate courtroom scenarios, provide real-time feedback on legal drafts and even conduct mock trials.

LOCAL SOLUTIONS FOR LOCAL PROBLEMS

Bangladesh has the potential to create AI solutions tailored to the specific needs of its legal system. Local tech companies are in a unique position to design tools that understand the context of Bangladeshi law. By investing in these technologies, Bangladesh can develop affordable solutions that will help clear the case backlog. Collaboration between the legal community, tech companies, and institutions like the Supreme Court and Bangladesh Bar Council is crucial for success.

Oleyn, a Bangladeshi-Singaporean tech company, is already developing AI-driven solutions through its product "superattorney.ai". Salman Sayeed, co-founder and CEO of Oleyn, said their innovative platform transforms legal services by scaling up operations at a low cost, addressing the high demand for legal assistance while keeping expenses low for clients and increasing revenue for lawyers.

VOICES FROM THE LEGAL INDUSTRY

Lawyer Raiyan Amin points out that "AI can help automate repetitive tasks such as case management, legal research and data entry," but adds that "AI should just assist and mustn't replace human judgment." Barrister Rafaelur Rahman Mehedi agrees, saying, "AI can help with drafting, legal databases and recording court statements, but trust and confidentiality are crucial in law, and we must be careful with AI's role in this."

CONCLUSION

AI has the potential to bring long-overdue changes to Bangladesh's legal system. By streamlining routine tasks, improving research and supporting lawyers, AI can help clear the backlog of cases and speed up justice. As local companies like Oleyn step up to provide innovative solutions, Bangladesh's legal landscape could soon see a much-needed transformation, ensuring that justice is delivered on time.

The author is the chief of staff of a leading startup and a former president of Junior Chamber International (JCI) Bangladesh​
 

Is AI a modern Frankenstein?
SYED FATTAHUL ALIM
Published :
Oct 14, 2024 21:56
Updated :
Oct 14, 2024 21:56



Geoffrey E. Hinton, a British-Canadian cognitive psychologist and computer scientist who, along with the American John J. Hopfield, was awarded the Nobel Prize in Physics by the Royal Swedish Academy of Sciences on October 8, is himself fearful of the invention that brought him the honour. Upon leaving Google in May 2023 after working there for a decade, he admitted that he had quit to speak freely about the dangers of AI. To him, AI is outpacing humanity's ability to control it. Consider the frustration of the AI enthusiasts, not least Google, who held him in high regard for his pioneering work on deep learning.

The Nobel Committee considered the two scientists for the prize (in Physics) because of their use of concepts from statistical physics in the development of artificial intelligence. John J. Hopfield, a physicist turned chemist turned biologist, proposed in 1982, while at the California Institute of Technology (Caltech), a simple neural network modelling how memories are stored in the brain. He later returned to Princeton as a molecular biologist. That means neither scientist was a practising physicist when he received the Nobel Prize in Physics.

Interestingly, though these two laureates got the prize for their seminal contributions to the advancement of AI, both have expressed concerns about further development of the field they dedicated their careers to. Unlike Geoffrey Hinton, John Hopfield was less dramatic, though no less apprehensive, in expressing his fears about the neural networks he worked on, which mimic the functioning of the human brain. Maybe AI does it better than the human brain and, what is alarming, even faster. He, too, warned of potential catastrophes if advancements in AI research are not properly managed, and emphasised the need for a deeper understanding of deep learning systems so that technological development in the field does not spiral out of control.

The concerns raised by these two leading researchers in AI's advancement call to mind the 1975 Asilomar conference of biotechnologists on recombinant DNA molecules, held at Asilomar State Beach in California, USA, where participants discussed the potential hazards of biotechnology and the need for its regulation. Some 140 biologists, lawyers, and physicians took part and drew up a set of voluntary guidelines to ensure the safety of recombinant DNA technology, a genetic engineering technique that involves combining DNA from different species or creating new genes to alter an organism's genetic makeup.

In his interview with the Nobel Prize website, Geoffrey Hinton stressed that AI is indeed an existential threat, and that we still do not know how to tackle it. There are other existential threats, such as climate change. But with climate change, not only scientists but also the general public knows that the danger can be averted by not burning fossil fuels and not cutting down trees. Humanity, in other words, knows the answer to the climate threat; it is greedy businesses and politicians lacking the will who stand in the way of addressing it.

To avert the threat to humanity from unregulated AI, tech companies must mobilise resources for research on safety measures.

Hinton notes that the linguistics school of Noam Chomsky, for instance, is quite dismissive of AI's capacity for understanding things the way humans do. Neural networks, the Chomsky school holds, cannot process language as humans do.

But Hinton thinks this notion is wrong; in his view, neural nets process language better than the Chomsky school of linguistics might imagine.

The harm AI can do is already plain to see: AI-generated photos, videos, and texts are flooding the internet, and it is hard to tell real content from fake. AI can also replace jobs, power lethal autonomous weapons, and so on. Herein lies the existential threat.​
 