
[🇧🇩] Artificial Intelligence: Its Challenges and Prospects in Bangladesh

  • Thread starter: Saif
  • Replies: 15
  • Views: 558
  • Forum: Bangladesh Defense Forum

Saif

Senior Member
Govt drafts AI policy to tap its potential, tackle concerns





The government has formulated the draft National AI Policy as it looks to make the best use of artificial intelligence to raise productivity and spur economic growth, while addressing the concerns raised by a technology that is spreading at breakneck pace.

"This policy seeks to harness the benefits of AI while mitigating its risks, fostering innovation, and ensuring that AI technologies serve the best interests of the citizens and the nation as a whole," the draft said.

The Information and Communication Technology Division prepared the National AI Policy and published it recently.

The policy is expected to address the legal, ethical, and societal implications of AI effectively and efficiently.

It has placed a significant emphasis on public awareness and education, enlightening citizens about AI and its far-reaching benefits.

The objectives of the policy are to accelerate equitable economic growth and productivity through AI-driven optimisation, forecasting, and data-driven decision-making, and ensure efficiency and accessibility of public services through AI-enabled personalisation.

The draft comes as countries around the world race to prepare to deal with the changes being brought about by the fast-evolving technology.

The International Monetary Fund (IMF) has published its new AI Preparedness Index Dashboard for 174 economies, based on their readiness in four areas: digital infrastructure, human capital and labour market policies, innovation and economic integration, and regulation.

It showed that Bangladesh scores 0.38, compared with 0.49 for India, 0.37 for Pakistan, 0.35 for Nepal, 0.44 for Sri Lanka, 0.77 for the US, 0.64 for China, and 0.73 for Australia. Developed countries typically score at least 0.7.

In Bangladesh, the government plans to adopt data-driven policy-making in every sector through AI-supported analytics and insights and nurture a skilled workforce that can utilise and build AI technologies.

It wants to embed AI in education and skills development so that the largely young population can meet the demands of the future.

The draft said the country will also foster a culture of AI research and innovation through public and private funding. It will ensure the development of, and adherence to, a robust ethical framework by establishing regulatory measures that uphold human rights in AI development and deployment.

The ICT Division, in collaboration with relevant ministries, industry, academia, and civil society, will take necessary steps to establish the institutional framework for the policy implementation, officials said.

It will set up an independent National Artificial Intelligence Center of Excellence (NAICE).

The NAICE will be responsible for coordinating and monitoring AI initiatives using key performance indicators, and for evaluating their social, economic, and environmental impacts, guiding adjustments to maximise benefits and mitigate risks.

It will facilitate collaboration and knowledge-sharing among stakeholders, including government agencies, industry, academia, and civil society. It will ensure that any measures taken to regulate the technology are proportional to the risk and balanced to encourage innovation.

The government will form a high-level national AI advisory council to guide the implementation of sectoral AI initiatives.

The draft said legal and regulatory frameworks are necessary for implementing the AI policy.

The National Strategy for AI will be framed, and it will be updated every two years in accordance with the advancement of AI worldwide.

The strategy will include data retention policies, deal with the legal issues of data governance and ownership and focus on interoperability and data exchange.

According to IMF economist Giovanni Melina, AI can increase productivity, boost economic growth, and lift incomes. However, it could also wipe out millions of jobs and widen inequality.

IMF's research has shown how AI is poised to reshape the global economy. It could endanger 33 percent of jobs in advanced economies, 24 percent in emerging economies, and 18 percent in low-income countries.

On the brighter side, AI also brings enormous potential to enhance the productivity of existing jobs, for which it can serve as a complementary tool, and to create new jobs and even new industries.

Melina said most emerging market economies and low-income countries have smaller shares of high-skilled jobs than advanced economies, and so will likely be less affected and face fewer immediate disruptions from AI.

"At the same time, many of these countries lack the infrastructure or skilled workforces needed to harness AI's benefits, which could worsen inequality among nations."

The economist said the policy priority for emerging markets and developing economies should be to lay a strong foundation by investing in digital infrastructure and digital training for workers.​
 

Vision for an AI law in Bangladesh

VISUAL: STAR

We may not notice it at first glance, but the world is going through a technological renaissance in the form of Artificial Intelligence. In the bustling heart of South Asia, Bangladesh also stands on the cusp of this technological renaissance. As artificial intelligence (AI) is transforming industries worldwide, the nation is facing an urgent call to draft forward-thinking AI policy guidelines. Imagine a future where Dhaka's traffic is managed by smart systems, farmers use AI to boost crop yields, and healthcare is revolutionised by predictive analytics. In the current situation, this thought might be farfetched, but this promising horizon can come sooner rather than later if Bangladesh can establish a policy guideline that can navigate its set of challenges and responsibilities. Bangladesh must utilise the AI wave with a blend of ambition and caution, ensuring that innovation does not eclipse ethics and inclusivity.

Crafting Bangladesh's AI policy is a bit like preparing the perfect biriyani: it requires just the right mix of ingredients to create something truly remarkable. The focus should be on blending robust international cooperation to ensure trustworthy AI, with a generous helping of digital infrastructure to provide the computing power needed for innovation. Stir in public awareness and civic engagement to keep society informed and involved, sprinkle generously with investments in public research capabilities, and don't forget to season with education and skill development to prepare the workforce for future challenges. Finally, top it off with initiatives to boost connectivity and digitalisation, and a strategic vision aligned with the Fourth Industrial Revolution (4IR) to ensure sustainable growth. When these elements come together, Bangladesh can set up a tech policy that's not only cutting-edge, but also inclusive and forward-thinking. So, what elements should be on the radar of policymakers when crafting this pivotal piece of legislation?

First of all, AI's transformative potential can only be fully realised if there is a skilled workforce to harness it. Thus, the AI Act must prioritise comprehensive education and training programmes. Integrating AI into school curricula, establishing vocational training programmes, and offering scholarships for AI-related fields are essential steps. This not only prepares the next generation for AI-driven jobs but also ensures that the current workforce is not left behind.

To position Bangladesh as a leader in AI innovation, substantial investments in AI research and development (R&D) are crucial. This involves funding interdisciplinary research collaborations between universities, research institutions, and private industries. The goal should be to advance AI capabilities in key sectors like healthcare, agriculture, and manufacturing. Think of AI research centres and innovation hubs sprouting across the country, nurturing a vibrant ecosystem of startups and entrepreneurs. We also need to build digital infrastructure. Consider, for example, the AI policy recently adopted by Vietnam. By 2030, the country wants to build an "AI ecosystem": a fluid incorporation of AI into its commercial and research sectors, with components that work together to foster innovation, uphold ethical standards, and drive economic and societal benefits. Most importantly, to foster AI innovation, the Vietnamese government is pursuing a comprehensive strategy focused on human resource development, organisational construction, research and development, and investment in AI enterprises. This includes deploying basic AI and data science skills through short- and medium-term training courses for students and career-changing workers, attracting both domestic and foreign resources to build training centres, and establishing key research hubs at leading universities. Ultimately, the Vietnamese government wants to promote and attract investment capital for the growth of AI enterprises and brands in Vietnam.

Similarly, for AI to flourish, Bangladesh needs to prioritise investment in high-speed internet connectivity, cloud computing services, and data centres. This infrastructure will provide the necessary computing power and data storage capabilities for AI applications across various sectors. Building it is another difficult but manageable task given the right collaborative efforts. If the government can encourage collaboration among itself, the private sector, and academia, Bangladesh can create fertile ground for AI advancements tailored to local needs. Beyond local collaboration, Bangladesh should also establish partnerships with other countries, international organisations, and tech companies to develop shared principles and standards for trustworthy AI. This can also position the country as a trustworthy player in the global AI landscape, attracting international investments and collaborations.

We have to remember that, in the current AI-driven world, data is the new oil. However, with great data comes great responsibility. Ensuring a robust data security infrastructure is paramount. The AI Act should mandate the use of cutting-edge encryption technologies, continuous monitoring systems, and regular security audits to protect sensitive information from cyber threats. This is not just about safeguarding individual privacy but also about building trust in AI systems. A data breach in a healthcare system powered by AI could have catastrophic consequences, undermining public confidence. Therefore, a comprehensive focus on data security is non-negotiable.

Another thing to note is that AI is a double-edged sword. While it has the potential to drive unprecedented progress, it also poses significant ethical challenges. The AI Act should establish clear guidelines for ethical AI deployment, ensuring transparency, accountability, and fairness. It is very easy to misuse AI to promote misinformation or biases. AI can now create realistic but fake images, videos, and audio (deepfakes), which can be used to spread misinformation, manipulate public opinion, or damage reputations. It also enables fraud and scams, because there are no established guidelines for the use of AI systems. AI systems can perpetuate and amplify biases present in their training data, leading to discriminatory practices in hiring, lending, law enforcement, and more. So, establishing data protection laws to safeguard personal information and fostering transparency in AI decision-making processes are critical steps toward ethical AI.

All in all, crafting an AI Act for Bangladesh goes beyond technicalities; it centres on people and public engagement. Essential initiatives include public awareness campaigns to educate citizens on AI's benefits and risks, and involving diverse stakeholders—civil society, academia, and industry—in policymaking to ensure transparency and accountability. The Act should also align with Fourth Industrial Revolution (4IR) principles, which emphasise innovation, digitalisation, and interconnectedness. By balancing innovation with regulation and addressing local needs while adhering to global standards, Bangladesh can harness AI's transformative power for an inclusive, ethical, and sustainable digital future.

This is Part II of a two-part series. The first part was published on Page 9 of The Daily Star on June 23, 2024.

Warda Ruheen Bristi and Shafin Haque Omlan are research associates at Bangladesh Institute of Governance and Management (BIGM).​
 

Why we need an AI law in Bangladesh

Visual: Star

On March 13, 2024, the European Union (EU) passed the world's first Artificial Intelligence (AI) Act. In an era defined by rapid technological advancement, the rise of AI presents both unprecedented opportunities and complex challenges for nations worldwide, and the EU has become the first entity to formally address those. In Bangladesh, a country in the midst of a rapid technological evolution, the emergence of AI stands as a pivotal moment with far-reaching implications. As the nation seeks to harness the potential of AI for societal advancement and economic growth, the importance of establishing a robust legal framework to govern AI-related issues cannot be overstated.

First, let's talk about the potential of AI. In short, it is vast and varied, touching virtually every aspect of human life. AI has the capability to automate repetitive tasks across various industries, freeing up human workers to focus on more creative and strategic endeavours. In healthcare, AI can assist in diagnosing diseases, analysing medical images, and personalising treatment plans, leading to more accurate and efficient healthcare delivery. Similarly, AI has the potential to revolutionise education through personalised learning experiences, adaptive tutoring systems, and virtual classroom assistants. In transportation, AI-powered autonomous vehicles can enhance road safety, reduce traffic congestion, and improve transportation efficiency. The financial sector can benefit from AI algorithms used for fraud detection, risk assessment, portfolio management, and customer service. Additionally, AI can contribute to environmental conservation efforts by analysing large datasets to monitor and predict environmental changes, optimise resource management, and develop sustainable solutions. In the realm of entertainment, AI-generated content such as music, art and literature can inspire creativity and offer new forms of entertainment.

Already, multiple countries have approved or are on their way to approving regulations regarding AI. The EU's AI Act sets risk-based rules for AI systems across the bloc, and the EU has separately adopted a Cybersecurity Act that enhances security certifications for ICT products and strengthens the role of the European Union Agency for Cybersecurity. India has proposed the Digital India Act, 2023, which aims to update and modernise India's digital governance framework and address cybersecurity, data privacy, and ethical AI use. Vietnam has also approved a national digital transformation plan, which aims at promoting a digital transition in governance, the economy, and society more broadly, as well as establishing Vietnamese technology firms as global players. Under this plan, several goals are laid out to be achieved by 2025. Vietnam has also developed a national strategy for the research, development and application of AI by 2030, which outlines a number of key goals and directives for developing AI technology in the country. It is clear that Vietnam is committed to a digital transition and cannot ignore the role that artificial intelligence will play to that end.

Similarly, our aspiration for a Digital Bangladesh hinges on our ability to navigate the complexities of AI responsibly. When talking about establishing a regulatory framework for Bangladesh, the heart of the discussion lies in the imperative to strike a balance between fostering innovation and safeguarding fundamental rights. A balanced regulatory framework is essential not only for spurring innovation and attracting investment, but also for guarding against the potential risks associated with AI.

Aside from AI's potential, there are also worrying reasons why an AI law is necessary. AI poses several risks to personal and organisational safety, and hence must be carefully managed to ensure responsible and ethical use of the technology. For individuals, AI can compromise privacy if systems improperly collect, store, or use personal data without consent or appropriate safeguards. This can lead to identity theft, unauthorised surveillance, and exploitation of personal information. We are already seeing many examples of deepfake videos in which the faces of famous personalities are used to spread rumours or damage their reputations. If AI is not regulated quickly, these incidents may soon get out of hand, and in a country like Bangladesh, where people are somewhat susceptible to rumours, they could spread like wildfire. AI systems may also perpetuate bias or discrimination if not properly designed and vetted, unfairly affecting individuals in areas such as hiring, lending or legal judgments.

Moreover, given Bangladesh's aspiration to become a digital powerhouse and a hub for innovation, the enactment of an AI law would be instrumental in enhancing the country's competitiveness and global standing. By aligning with international best practices and standards, the country can foster international collaboration, attract foreign investment, and strengthen its position in the global AI landscape. The timely development of an AI act will present an opportunity for Bangladesh to showcase its commitment to ethical AI governance and responsible innovation to the rest of the world. By engaging with stakeholders across government, industry, academia, and civil society, policymakers can leverage diverse perspectives to develop inclusive and forward-thinking AI policies that reflect the country's values and priorities.

Seizing the momentum of the current trend and developing a legal framework for AI-related issues is crucial for Bangladesh's continued progress in the digital age. Aligning with international best practices and standards is essential to enhance the country's competitiveness and credibility in the global AI landscape. By demonstrating a commitment to ethical AI governance, Bangladesh can attract foreign investment, foster international collaboration, and strengthen its position as a leader in responsible AI development.

This article is Part I of a two-part series.

Warda Ruheen Bristi and Shafin Haque Omlan are research associates at Bangladesh Institute of Governance and Management (BIGM).​
 

We need to act on AI now, not have an act for it

Visual: Shaikh Sultana Jahan Badhon
When Bangladesh embarked on its journey towards Digital Bangladesh in 2009, many were sceptical about it. But as time progressed, we all saw how the vision started to become a reality.

This vision, at its core, aspires to create a nation that is adept at solving problems at all spheres of life through innovative application of digital technologies. The government has made it abundantly clear that Artificial Intelligence (AI) is going to play a pivotal role in implementing the Smart Bangladesh vision. Following this vision, the government has recently unveiled a draft National Artificial Intelligence (AI) Policy 2024 for public consultation.

There is a good reason why the government has decided to use AI as the fulcrum to realise the goal of Smart Bangladesh. Unlike other digital technologies, the potential applications of AI are all around us. From our personal lives to modernising public service delivery, the scope for AI is limitless.

Be it public transport or AI-driven personal vehicles, personal healthcare solutions or the public healthcare system, individual productivity or national competitiveness, every imaginable aspect of our individual, societal and national life can be transformed if we apply AI smartly to solve our problems.

But the question is this: how do we facilitate AI to deliver the dividends for us? If we look around, we can see that every country in the world is trying to strike a balance between innovation and regulatory oversight. There is palpable consensus on adopting more of a business-friendly approach to AI regulation, by avoiding excessive restrictions.

The government has been trying to create a pathway for AI in Bangladesh by preparing the National Strategy for AI in 2020, followed by the recent release of the draft AI Policy in 2024. Having read the draft policy on AI, I felt that it provides an excellent template to foster the use of AI in every sector. The institutional framework outlined in the policy to pursue AI projects is well thought through. On top of that, the sectoral plans for the application of AI provide an excellent starting point.

But what puzzles me is the stated desire of the government to introduce an Act for AI. When we are supposed to allow as much room as possible for our AI practitioners to fully demonstrate their talent, we are planning to limit what they can and can't do along with defined punitive measures through the AI Act. I am certain that this is not how you invite people into the fold of new technology.

As of now, the European Union (EU) is the only entity to have enacted an AI Act. At the heart of the Act is the requirement that AI platforms be monitored or overseen by human beings, not by another AI platform. It is worth noting that many AI experts have termed this a knee-jerk reaction, considering a law on AI premature at this stage.

The US does not have a federal law covering AI, nor is there any universal definition for AI. It is currently governed by a mix of decentralised existing federal and state legislations, industry itself and the courts. Through an executive order last year, every US government agency was tasked to set up working groups to evaluate AI, develop regulations and establish public-private engagement.

In the United Kingdom (UK), the government unveiled its response to the AI Regulation White Paper consultation in February 2024. It has no plans to codify the approach into law for now, advocating instead a context-sensitive, balanced approach that uses existing sector-specific laws for AI guidance.

In India, the upcoming Digital India Act is set to focus on the regulation of high-risk AI applications; no plan to enact separate AI legislation is afoot. Singapore also does not have any AI legislation; it takes a sector-specific approach to overall governance and regulation. Japan likewise has a relatively hands-off approach and has been encouraging AI development and application across various sectors.

The Association of South East Asian Nations (ASEAN) issued a guide to AI governance and ethics in February 2024. Its national-level recommendations include nurturing AI talent, upskilling workforces, and investing in AI research and development. Australia also does not have any AI legislation; the government there is approaching the issue with a voluntary ethics framework.

It is worth noting that the core purpose of having a law is to create a framework of dos and don'ts in a particular area, with the option to resort to the legal system to settle disputes or punish offenders. The question here is: how do we know what is doable and what is not, when we have no prior experience with AI in Bangladesh?

Even if we consider enacting a law, we need to ascertain the areas where government regulation is needed, in light of global best practices. AI law or policy considerations should include the use and processing of personal data, privacy, infringement, surveillance, algorithmic bias in customer interactions, data sovereignty, monitoring of AI-based platforms, cybersecurity, and social norms and values. Most importantly, we need to focus on the fundamental ethical aspects of AI, which are more universally agreed upon than specific AI regulations.

We must realise that innovation is a messy and unstructured process. The key to innovation is a creative mindset that can go beyond conventional thinking to arrive at the simplest solutions to complex problems. Putting barriers on this through an AI Act is the last thing we need at this moment.

If we want to meet the export earnings target of $5 billion from the ICT sector, we need to help our developers catch up with the rapid pace of AI development globally, instead of scaring them off with an act that comes with punitive measures. More AI regulation risks stifling new start-ups that lack the resources of the globally dominant platforms. We need to focus on creating a large pool of highly skilled human resources in AI. The draft AI policy provides a baseline to embark on this AI journey.

Shahed Alam is a barrister and telecom expert.​
 

Why AI tech needs to be democratised


We must take seriously the legitimate concerns about AI that have been raised. FILE PHOTO: REUTERS

With the introduction of "large language models" (LLMs) in our day-to-day lives, artificial intelligence (AI) systems have experienced a sharp surge in popularity. It is already apparent that the usage of AI systems will drastically impact our professional lives, private lives, and – perhaps most crucially – how we structure and govern our societies. This isn't because algorithms are inherently more innovative than people; instead, they provide economic stability and efficiency in completing many simple and complex tasks at a level that many humans cannot match.

The introduction of AI systems in public administration and the judicial system, as well as their use concerning the provision of certain essential services by private actors, raises serious concerns about how to safeguard the sustained protection of human rights and democracy and respect for the rule of law, if AI systems assist or even replace human decision-makers. This contrasts with the general public debate, which focuses on the AI technology's economic benefits and drawbacks. The very foundations of liberal democracy, such as elections, the freedom to assemble and establish associations, and the right to have opinions and to receive or disseminate information, may all be severely impacted by their use.

Recent calls for a ban on AI technology have come from influential voices in the public discourse who believe that the risks it brings exceed its benefits. Though we must acknowledge that the genie is out of the bottle and that there is no practical way to turn back the scientific and technological advancements that have made it possible to develop sophisticated and potent AI systems, we must also take seriously the legitimate concerns about AI that have been raised.

The Council of Europe (CoE), the oldest intergovernmental regional organisation, with 46 member states and perhaps best known globally for its European Court of Human Rights (ECtHR), in 2019 started groundbreaking research on the viability and necessity of an international treaty on AI, based on its own and other pertinent international legal norms in the fields of democratic values, human rights, and the rule of law. The Committee on Artificial Intelligence (CAI), formed for the period 2022-2024, is tasked with developing an AI framework convention that will outline legally binding standards, guidelines, rights, and obligations regarding the creation, development, application, and decommissioning of AI systems from the perspectives of human rights, democracy, and the rule of law.

It will take a coordinated effort from like-minded states and assistance from civil society, the tech sector, and academics to complete this enormous undertaking. Our hope and ambition is that the Council of Europe's AI framework convention will provide much-needed legal clarity and guarantees of the protection of fundamental rights.

But a genuine set of standards for the human rights and democratic features of AI systems cannot be restricted to a particular region, because AI technology knows no borders. As a result, the CoE's Committee of Ministers decided to permit interested non-European states that share its goals and ideals to participate in the negotiations, and an increasing number of these states have already signed on or are actively working to join the efforts.

The European Union (EU), which regulates AI systems for its 27 member-states, is also directly involved in the CoE negotiations. The AI Act of the EU and the CoE's framework convention are designed to complement one another when they go into effect, showing how to effectively utilise the joint capabilities and skills of the two European entities. The draft framework convention is aimed at ensuring that the use of AI technology does not result in a legal vacuum regarding the protection of human rights, the operation of democracy and democratic processes, or the observance of the rule of law (a consolidated "working draft" is publicly available at the CoE website for the CAI).

To this end, parties must obligate regulators, developers, providers, and other AI players to consider dangers to human rights, democracy, and the rule of law from the moment these systems are conceived and throughout their existence. In addition, the legal remedies available to victims of human rights breaches should be adapted in light of the unique difficulties that AI technologies present, such as their limited transparency and explainability.

The treaty will also specifically address the potential risks to democracy and democratic processes posed by AI technology. This includes the use of so-called "deepfakes", microtargeting, or more overt violations of the freedoms of expression, association and opinion formation, and of the ability to obtain and disseminate information. The framework convention will include enforceable duties for its parties to provide adequate protection against such practices. When developing and employing AI systems that may be used in sensitive contexts, including but not limited to the drafting of laws, public administration, and, last but not least, the administration of justice through the courts of law, it is evident that the fundamental idea of what constitutes a just and liberal, law-abiding society must be respected. The framework convention will also specify the parties' precise obligations in this area.

The draft framework convention, as well as all of CAI's work, prioritises human dignity and agency by taking a Harmonised Risk-Driven Approach (HRDA) to the design, development, use and decommissioning of AI systems. It is crucial to carefully analyse any potential adverse effects of deploying AI systems in diverse circumstances before getting carried away by the apparent possibilities offered by this technology. Therefore, parties are also required by the proposed framework convention to spread knowledge about AI technology and to encourage an informed public discussion about its proper application.

To ensure that as many people as possible profit from AI and other digital technologies and are protected from their misuse, the realistic approach must be to discover responsible methods to use them.

Dr Nafees Ahmad is associate professor at the Faculty of Legal Studies in South Asian University, New Delhi.​
 

Artificial intelligence is still far from being 'intelligent'
Why does Big Tech want an immediate six-month pause on any further development?


An image of a robot taking a picture, generated by AI software Midjourney. SOURCE: REUTERS

Is artificial intelligence (AI) really "intelligent" in its creativity and decision-making? Or is it stealing others' works and perpetuating existing human biases?

This January, three artists filed a class-action lawsuit with the Northern California District Court against AI imagery generators – Midjourney, Stable Diffusion, and DreamUp. They claimed these companies are using their artwork to generate newer works – drawing on a publicly available database of images, called LAION-5B, that includes theirs – even though the artists had not consented to having their copyrighted artworks included in the database, were not compensated for the use of their works, and were not credited when AI images were produced using said works.

AI is literally scraping through billions of existing works produced by raw human labour to "produce newer ones." That's why several experts are already asking whether AI is at all "artificial" or "intelligent."

Tech writer Evgeny Morozov has argued that while the early AI systems were mostly rules and programmes, and could have some "artificiality," today's AI models draw their strength entirely from the works of actual humans. Built on vast amounts of human work stored at mammoth energy-hungry data centres, AI is not "intelligent" in the way human intelligence is as it cannot discern things without extensive human training, as Microsoft's Kate Crawford has pointed out.

Even in decision-making, AI models can have strong biases, as a 2019 article published in Nature confirmed. An algorithm common in US hospitals has been systematically discriminating against Black patients. The study found that hospitals traditionally assign them lower risk scores than white patients; the algorithm takes that as a cue and places them in a lower-risk group, regardless of their actual medical condition. In another case, a painting bot returned the image of a salmon steak in water when asked to draw a swimming salmon. The AI model could not make a simple judgement that even a toddler could.

However, despite not being anywhere near "intelligent," recent developments, especially the release of ChatGPT in November last year, have raised dramatic concerns about the effects of AI on human society. Renowned tech experts have published an open letter calling for an immediate pause on all AI development for six months. Its signatories include many big names and AI heavyweights, including Elon Musk from Tesla, Emad Mostaque from Stability AI, Sam Altman from OpenAI, Demis Hassabis from Google's DeepMind, and Kevin Scott from Microsoft. Altman even advised the US government to issue licenses to trusted companies (Does this mean only Big Techs?) to train AI models.

Is this call for an immediate pause coming from genuine concern for human well-being? Or is there a commercial motive, as Michael Bennett, a PhD student at Australian National University (ANU) has pointed out? Potentially, AI can generate an enormous amount of wealth for whoever controls it. Let's try to understand the premise of the call.

ChatGPT isn't a research breakthrough; it's a product based on open research that is already a few years old. The difference is that the technology had not previously been widely available through a convenient interface. Smaller entrepreneurs will soon develop better and more efficient AI-based models at much lower cost, some of which are already available on GitHub, a popular repository for open-source, non-commercial software. That worries the Big Techs, as a leaked Google internal memo made abundantly clear.

The long memo from a Google researcher said, "People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality…We Have No Moat." Licenses would be a "kinda moat," as Stability AI's CEO Emad Mostaque puts it bluntly, moat being jargon for a way to secure a business against competitors.

AI Now Institute, a research non-profit that addresses the concentration of power in the tech industry, highlights the perils of unregulated AI in its April 2023 report: the AI boom will make the already powerful Big Techs even more powerful. AI models depend on vast amounts of data and super-fast computing power to process it, both of which only Big Techs can afford. Without access to these resources, no entrepreneur or researcher can develop any meaningful AI application, as an MIT Technology Review article elaborates.

Yes, we need regulations for AI development, and a pause if necessary, but not for the reasons mentioned in the open letter. It's to ensure that AI technology remains open source and democratic.

The other reason AI should be regulated is the way social media platforms have used it to fuel gender bias and extreme polarisation, and to play on social divisions, resulting in unspeakable violence on a massive scale (such as in Myanmar, via Facebook). AI models will amplify both misinformation (unintentional inaccuracies) and disinformation (deliberately false information) simply because they are trained on such data and go on to produce more of it (the model cannibalism effect). Large language models can also keep repeating fabricated and false information because of a phenomenon called "hallucination", which the independent watchdog NewsGuard has found in several online news portals.

Intentional or otherwise, all of this could be quite handy in manipulating public opinion or creating biases to benefit those in power. That makes it even more necessary to regulate AI. To ensure that the benefits of AI reach everyone, humans must always remain on top of it.

Dr Sayeed Ahmed is a consulting engineer and the CEO of Bayside Analytix, a technology-focused strategy and management consulting organisation.​
 

Why Bangladesh should invest in artificial intelligence

In the age of 4IR, investing in artificial intelligence (AI) would be the right move for Bangladesh to accelerate its economic growth. File Photo: AFP

In the 1970s, American sociologist and economic historian Immanuel Wallerstein (1930-2019) proposed an approach to view the global economic system as an interplay between three groups of countries: core, semi-periphery, and periphery countries. The core countries possess the highest levels of skills and knowledge and the largest amount of capital. The semi-periphery countries serve this group with lower-skill, labour-intensive production and raw materials. The periphery countries, in turn, service both groups with even lower skill levels and more labour-intensive production methods. The approach later came to be known as the World Systems Theory.

The system is dynamic: a country may move up or down the hierarchy depending on its technology, capital, or knowledge. Such movements involve fundamental shifts in a country's social and economic systems—e.g. production, distribution, learning, and skill level. For example, India was once an agriculture-based economic powerhouse, and European traders clamoured to import its products. But as Europe became industrialised, the importers soon became exporters, and India's agriculture and home-based small industries drastically declined. The money extracted from India fuelled the First Industrial Revolution (late 18th century), funded research and development, and expanded the Western countries' knowledge base. Yale, a highly regarded American university, benefitted from the donations of Elihu Yale (1649-1721), who earned a fortune from the slave trade in India.

The second (late 19th century) and third (mid-20th century) industrial revolutions soon followed on the back of the first. During both, the status of countries remained generally static; the core countries stayed ahead of the others primarily because of their control over capital, besides knowledge and skill.

The world is now going through the Fourth Industrial Revolution (4IR). According to Klaus Schwab, founder and executive chairman of the World Economic Forum (WEF), the 4IR has transformed the world with an entirely new production, management, and governance system. It can potentially alter Wallerstein's World Systems Theory, because skill and innovation will determine a society's place in the future, reducing dependence on capital. It thus opens new opportunities for the non-core emerging economies to move up Wallerstein's ladder. Schwab added that artificial intelligence (AI) would be a crucial driver of the 4IR. A good thing about AI is that emerging economies can also benefit from this technology without cost-prohibitive investments. The International Finance Corporation (IFC) highlights the same point with ideas and case studies in emerging economies under its thought leadership programme. Below are some of such case studies.

AI for emerging economies

Any effective poverty alleviation initiative needs data to identify vulnerable groups. However, the unavailability of quality data often leads to poorly designed interventions—such as incorrect identification of a vulnerable group—and their eventual failure. AI can analyse satellite images to extract relevant information, such as distance from the nearest water sources or the urban market, crop status, and other relevant variables for detecting vulnerability.

Bengaluru, in Southern India, is experimenting with a system to monitor real-time camera feeds to control traffic lights. In Rwanda, commercial drones are flying medical supplies, such as blood, to remote locations faster than road transport. AI can correlate data from mobile phones with financial affordability, education level, and health status. Such data will allow mobile applications to deliver microlending, tailored education, disease diagnosis, and medication advice. With Natural Language Processing (NLP) tools, AI can cross literacy barriers and communicate directly with an individual in any language.

Options for Bangladesh

MIT professors Erik Brynjolfsson and Andrew McAfee believe that technology will create abundance, but not everyone will benefit equally. Those with talent will be more likely to secure the high-skilled, high-pay jobs, leaving the low-skilled, low-pay ones for the rest. The 4IR's impact on societies will be determined not by technologies, but by the choices one makes. What choice will Bangladesh make?

So far, Bangladesh's economy has been heavily dependent on low-cost products, such as garments—earning more than 80 percent of total annual exports, according to BGMEA—and remittance from low-skilled migrant workers—over USD 24 billion in 2020-21, according to Bangladesh Bank. Should it continue providing low-cost production and labour? Or can it train its abundant young population and make use of the opportunities presented by the 4IR?

How Bangladesh can benefit from 4IR

A Brac study, titled "Youths of Bangladesh: Agents of Change," offers some interesting insights. Bangladeshi youths are not yet prepared to seize the opportunities provided by the 21st century (i.e. the 4IR), and their potential remains vastly unrealised. The country has just about 600,000 tech freelancers, although a whopping one-third of its 163 million people (World Bank's 2019 estimate) are between 15 and 35 years old. With the right skills and investment, these youths could become game-changers.

Bangladesh adopted its AI strategy in March 2020, although there is no visible follow-up yet. China adopted its AI development plan in July 2017. Within merely four years, the sheer scale of China's drive towards AI implementation is mind-boggling, as the think tank New America reported in "From Riding a Wave to Full Steam Ahead." China's government entities, universities, research institutes, local bodies, and corporations are spearheading its AI vision of becoming the global leader by 2030. A Forbes article already views China as the world's first AI superpower.

But Bangladesh is not China. The two countries' social, political, and economic systems are vastly different. Bangladesh must find its own path to reap the benefits of AI technology. Given its focus on science and technology, Bangladesh can start by setting up a few dedicated AI research institutes and attracting top talent to work for them. It can initiate AI-based research programmes targeting local problems such as Bangla NLP, manufacturing process automation, farming support, tailored education, or healthcare services for remote populations. A low-cost production base and unskilled labour will soon become redundant, just as horses no longer pull carts or carry coal from the mines. The only way to remain relevant is to adopt technology for faster and more equitable growth. Bangladesh cannot afford to miss the opportunity that the 4IR offers.

Dr Sayeed Ahmed is a consulting engineer and the CEO of Bayside Analytix, a tech-focused strategy consulting organisation.​
 

AI's ethical challenges require a multifaceted approach: Palak



There needs to be a multifaceted approach to address the ethical challenges posed by artificial intelligence (AI), said State Minister for ICT Division Zunaid Ahmed Palak.

AI should be used to close the gap on digital divides and empower society, rather than worsen existing inequalities, he said.

He also called for robust policy frameworks, regulatory measures and international cooperation to address these challenges.

The minister made these remarks at a "National Stakeholder Consultation on Assessing AI Readiness of Bangladesh", organised by the ICT Division in partnership with Unesco and Aspire to Innovate (a2i) at the ICT Tower in the capital recently.

The event highlighted the country's proactive approach in integrating AI to achieve its Sustainable Development Goals, according to a press release from the ICT Division.

The government is focusing on capacity building and regulatory frameworks and policies that ensure the ethical deployment of AI technologies, it read.

This is being achieved through collaborations with international organisations such as the United Nations Educational, Scientific and Cultural Organization (Unesco) and the United Nations Development Programme (UNDP), it added.

Md Shamsul Arefin, secretary of the ICT division, said AI can positively contribute to society through the ethical use of its transformative powers.

Md Mahmudul Hossain Khan, secretary on coordination and reforms to the Cabinet Division, stressed the significance of identifying gaps, opportunities and challenges in AI adaptation to formulate effective and sustainable strategies.

The event also featured insights from international representatives.

Charles Whiteley, ambassador and head of delegation of the European Union in Bangladesh, and Huhua Fan, OIC head of the Unesco Office in Dhaka, noted the importance of a comprehensive evaluation of AI readiness.

They opined that legal, social, cultural, scientific, economic and technical dimensions should be taken into consideration in this regard.

The event also saw discussions on integrating safe, trusted, and ethical AI considerations into strategies across various sectors, including education, transportation and agriculture.

The participants attended panel discussions and sessions that focused on the ethical implications and societal impact of AI technologies.​
 

Can Bangladesh leverage AI for inclusive growth?


The adoption of artificial intelligence in Bangladesh is still in its infancy, both for AI solution providers and their clients. Image: Possessed Photography/ Unsplash.

Artificial Intelligence (AI) has emerged as a transformative force in the global economy, positioning itself as a cornerstone of the fourth industrial revolution. From theory to practical solutions, AI has proliferated over the last decade. Defined as a simulation of human intelligence, AI combines technologies like machine learning, natural language processing and robotics to solve business, economic and social problems.

The impact of AI on developing economies can be illustrated through the experience of countries like India. A 2023 Ernst & Young report estimated the contribution of generative AI to India's economy at approximately USD 1.2-1.5 trillion over the next seven years, potentially increasing GDP by 5.2%-7.9%. In India, AI is notably used for fraud detection and risk management in financial services and for personalised learning in education.

Companies like HDFC Bank use AI algorithms to analyse transaction patterns and detect anomalies, thereby preventing fraud. Likewise, AI-powered platforms like Byju's offer personalised learning experiences for students, adapting to their learning pace and style. This has democratised access to quality education, helping millions of students across the country improve their academic performance.
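To make the fraud-detection idea above concrete, here is a minimal sketch of the general pattern: flag transactions that deviate from a customer's usual behaviour. It is purely illustrative and not any bank's actual system; it uses scikit-learn's IsolationForest on invented transaction features (amount and hour of day).

```python
# Minimal anomaly-detection sketch (illustrative only; synthetic data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated "normal" card transactions: [amount in BDT, hour of day]
normal = rng.normal(loc=[2000, 14], scale=[800, 4], size=(500, 2))
# A few unusually large late-night transactions mixed in
suspicious = np.array([[150000, 3], [98000, 2], [120000, 4]])
X = np.vstack([normal, suspicious])

# Train an Isolation Forest; roughly 1% of points are assumed anomalous
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)   # -1 = anomaly, 1 = normal
print(X[labels == -1])      # the flagged transactions
```

In a real deployment the features would include merchant, location, device and transaction history, and flagged transactions would typically be routed to a human reviewer rather than blocked automatically.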

What sectors in Bangladesh can be impacted by leveraging AI?

According to the World Bank, Bangladesh's average growth rate over the past decade was 6.6%, with 5.8% in 2023. PwC has estimated AI's total contribution to the global economy at USD 15.7 trillion by 2030. Given AI's estimated contributions to the Indian economy, AI could potentially generate billions of dollars for the Bangladesh economy.

There are four sectors in Bangladesh where AI can create a critical impact. In healthcare, it can enhance diagnostic accuracy, assist in early disease detection, manage patient data, and personalise treatment plans. In agriculture, AI-driven solutions can improve crop yields, optimise resource management, and contribute to food security. In manufacturing, particularly the RMG sector, which accounts for 80% of export revenue, AI can streamline production processes, enable predictive maintenance, and facilitate smart quality control; AI integration in RMG can also optimise supply chains, forecast demand, and enable personalised designs. In financial services, AI can automate processes, improve fraud detection, support data-driven trading decisions, and enhance financial inclusion.

Across these sectors, AI technologies such as machine learning, computer vision, and natural language processing can analyse large datasets, identify patterns, and make data-driven decisions, ultimately increasing efficiency, reducing costs, and driving innovation in Bangladesh's economy.

Prominent Bangladeshi organisations are already at the forefront of AI innovation. For example, Intelligent Machines (IM) is a leading AI company in Bangladesh dedicated to using AI capabilities to solve problems and drive efficiency across various sectors. IM has successfully implemented AI in companies across telecom, financial institutions, and fast-moving consumer goods. Their AI-based services are reported to provide solutions with over 90% accuracy. The results that IM has provided thus far through AI integration are noteworthy. Unilever has achieved a 260% stretch target in 2021 using Fordo, a precision marketing AI product. BAT gained 253% improvements in brand campaign execution accuracy in 2021 using Shobdo, a speech recognition AI product. bKash gained 76% productivity and a 15% monthly onboarding growth rate with the help of Nimonton and Biponon, two retail AI products. IDLC Finance processes CIB reports in under 30 minutes using Dharapat, a FinTech AI product. Finally, Telenor saved 92.5% of the cost in completing 25 million KYCs in Myanmar using Borno and Chotur, two document verification AI products.

What are the challenges in AI implementation, integration and regulation?

The adoption of artificial intelligence in Bangladesh is still in its infancy, both for AI solution providers and their clients. Companies are reluctant to embrace AI solutions, not only due to a lack of infrastructure but also because of a limited understanding of these solutions. There may also be reluctance driven by data privacy concerns. Similarly, the local growth of any AI-based service provider hinges on the readiness of consumers to adopt the technology. Finally, data availability is a concern: without data synthesis, the precision of AI solutions depends on the amount of relevant data available to train the underlying models.

Given this scenario, it is challenging for the government to craft policies that properly regulate the use of AI. The dilemma becomes whether the government should allow AI to proliferate for the sake of innovation or whether strong regulations should be in place well beforehand to uphold data privacy. Perhaps the best practice would be to balance innovation and regulation equally.

What are the prospects of AI in Bangladesh?

AI is catalysing change, enhancing productivity and efficiency, fostering innovation, and creating new avenues for growth. By investing in AI education and infrastructure, Bangladesh can position itself as a hub for innovation, attracting investment and talent while unlocking new opportunities for socioeconomic development.

The ICT Division of the Bangladesh government drafted a National AI Policy to address the challenges of AI adoption and implementation. The policy spans ten sectors, including telecommunication, environment, energy, and climate change, alongside data governance. It introduces a robust framework for ethics, data privacy, and security, proposing the establishment of an independent National AI Center of Excellence and a High-Level National AI Advisory Council to facilitate and regulate AI services. The policy also provides detailed implementation plans for government ministries, academia, and private institutions, with the objective of continuous monitoring, evaluation, and alignment with global advancements. It also addresses other challenges in depth, offering specific mitigation strategies for data privacy, cybersecurity, and risk management.

Comparing AI's potential impact on Bangladesh with that on other developing economies, such as India and countries in Africa, reveals similar opportunities for economic evolution. As AI reaches into every corner of society, education and awareness of its usage and benefits are paramount. Adopting best practices, investing in infrastructure, and fostering a culture of innovation will be crucial in harnessing AI's benefits.

Rafsan Zia is a Business Consultant at LightCastle.​
 

Yunus voices concern over development of 'autonomous intelligence'


Photo: PID

Chief Adviser Prof Muhammad Yunus yesterday expressed caution over the development of "autonomous intelligence" that may pose threats to human existence.

"As the scientific community and the world of technology keeps moving on developing 'autonomous intelligence' -- artificial intelligence that propagates on its own without any human intervention -- we all need to be cautious of the possible impact on every human person or our societies, today and beyond," he told the 79th session of the United Nations General Assembly (UNGA) in New York.

Delivering his speech in Bangla, Prof Yunus said many have reasons to believe that unless autonomous intelligence develops in a responsible manner, it can pose threats to human existence.

"We are particularly enthused with the emergence of the artificial intelligence tools and applications. Our youth are excited with the prospect of fast-unfolding generative AI. They aspire to walk and work as global citizens," he said.

The chief adviser said the world needs to ensure that no youth in countries like Bangladesh is left behind in meaningfully reaping the benefits of the AI-led transformation.

The world simultaneously needs to ensure that the development of artificial intelligence does not diminish the scope or demand for human labour, he said.

He said nearly two and a half million Bangladeshis enter the labour market every year. "In a large population where nearly two-thirds is young, Bangladesh is challenged to make learning suited to meet the needs of today and tomorrow," he added.

Prof Yunus observed that the world of work is changing, requiring younger people to adapt constantly, re-skill, and adopt newer attitudes.

"As Bangladesh is set to graduate as a middle income country, we reckon the vital need to secure ourselves in terms of learning and technology," he added.

He said newer forms of collaboration are needed where global business and knowledge-holders connect to people's needs.

International cooperation should create space for developing countries in ways that can bring transformative applications or solutions for jobs, endemic socioeconomic challenges, or livelihoods, the chief adviser added.

On public health, he said that as Bangladesh leads negotiations on a global pandemic treaty at the WHO, it is urging convergence on the key provisions of adequate international cooperation, financing of public health systems, technology transfer, research and development, and diversified production of medical diagnostics, vaccines and therapeutics.

Stressing the need for declaring vaccines a 'global public good' that is free from the rigours of intellectual property, the Nobel Laureate said that these are also crucial for combating the scourge of non-communicable diseases.

Referring to this year's golden jubilee celebration of Bangladesh's partnership with the United Nations, he said it has been a shared journey of mutual learning.

"In our modest ways, Bangladesh contributed towards promoting global peace and security, justice, equality, human rights, social progress and prosperity. And, indeed in building a rules-based international order," he said.​
 

Can AI help mitigate long-pending legal cases?

1727828647598.png


Bangladesh's legal system is overwhelmed by a staggering backlog of cases, leaving many people waiting for justice for years, sometimes even decades. The legal process has become slow and frustrating, with nearly 49 lakh cases pending and a severe shortage of judges, according to a newspaper report. Public trust in the judiciary has eroded, and finding solutions to this crisis has become critical. About 70 percent of the backlogged cases have been stuck at the witness hearing stage for three years or more, while 22 percent have been stuck at the investigation stage for a year or more, media reports say. One potential answer lies in using Artificial Intelligence (AI), which could offer much-needed efficiency and innovation to tackle these challenges.

SCALE OF THE PROBLEM

With only one judge for every 95,000 citizens, Bangladesh's courts are stretched to the limit. As a result, cases drag on for years. This inefficiency is not just an inconvenience; it is a denial of timely justice, which affects individuals, families and businesses alike.

HOW AI CAN HELP

Automating document creation and management:
AI can help lawyers draft legal documents quickly and accurately. By generating first drafts of contracts or legal papers using templates, AI tools can save lawyers valuable time. Instead of being bogged down in repetitive tasks, legal professionals can focus on more complex issues, helping to move cases forward faster.

Enhancing legal research: Legal research is time-consuming, but AI can change that. AI-powered tools can skim through enormous amounts of data, case law and statutes, providing quick access to relevant information (a minimal retrieval sketch follows these points).

Task management and scheduling: AI can take over the mundane yet critical task of managing lawyers' and judges' schedules. It can remind them of deadlines, upcoming court dates, and pending tasks.

Training junior lawyers: AI can act as a virtual mentor for junior lawyers, helping them learn faster. AI tools can simulate courtroom scenarios, provide real-time feedback on legal drafts and even conduct mock trials.
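
As a rough illustration of the legal-research point above, the sketch below indexes a few case summaries with TF-IDF vectors and ranks them against a query by cosine similarity. It is a minimal sketch using scikit-learn, not the method of any product mentioned in this article; the sample summaries and the query are invented placeholders.

# Minimal sketch: ranking case-law summaries against a query with TF-IDF.
# Assumes scikit-learn is installed; the documents and query are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

case_summaries = [
    "Land dispute over inherited property; appellate court remanded for fresh witness hearing.",
    "Cheque dishonour case under the Negotiable Instruments Act; accused convicted at trial.",
    "Writ petition challenging delay in investigation; court directed completion within 90 days.",
]
query = "delay in investigation of a pending case"

# Build TF-IDF vectors for the summaries and the query in one shared vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(case_summaries)
query_vector = vectorizer.transform([query])

# Rank summaries by cosine similarity to the query (higher score = more relevant).
scores = cosine_similarity(query_vector, doc_matrix).ravel()
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}. score={scores[idx]:.2f}  {case_summaries[idx]}")

A production legal-research system would work over full judgments, Bangla-language text and far larger corpora, typically with embedding models rather than plain TF-IDF; this only shows the shape of the retrieval idea.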

LOCAL SOLUTIONS FOR LOCAL PROBLEMS

Bangladesh has the potential to create AI solutions tailored to the specific needs of its legal system. Local tech companies are in a unique position to design tools that understand the context of Bangladeshi law. By investing in these technologies, Bangladesh can develop affordable solutions that will help clear the case backlog. Collaboration between the legal community, tech companies, and institutions like the Supreme Court and Bangladesh Bar Council is crucial for success.

Oleyn, a Bangladeshi-Singaporean tech company, is already developing AI-driven solutions through its product "superattorney.ai". Salman Sayeed, co-founder and CEO of Oleyn, said their innovative platform transforms legal services by scaling up operations at a low cost, addressing the high demand for legal assistance while keeping expenses low for clients and increasing revenue for lawyers.

VOICES FROM THE LEGAL INDUSTRY

Lawyer Raiyan Amin points out that "AI can help automate repetitive tasks such as case management, legal research and data entry," but adds that "AI should just assist and mustn't replace human judgment." Barrister Rafaelur Rahman Mehedi agrees, saying, "AI can help with drafting, legal databases and recording court statements, but trust and confidentiality are crucial in law, and we must be careful with AI's role in this."

CONCLUSION

AI has the potential to bring long-overdue changes to Bangladesh's legal system. By streamlining routine tasks, improving research and supporting lawyers, AI can help clear the backlog of cases and speed up justice. As local companies like Oleyn step up to provide innovative solutions, Bangladesh's legal landscape could soon see a much-needed transformation, ensuring that justice is delivered on time.

The author is the chief of staff of a leading startup and a former president of Junior Chamber International (JCI) Bangladesh​
 

Is AI a modern Frankenstein?
SYED FATTAHUL ALIM
Published :
Oct 14, 2024 21:56
Updated :
Oct 14, 2024 21:56

1728953517163.png


Geoffrey E. Hinton, the British-Canadian cognitive psychologist and computer scientist who, along with the American John J. Hopfield, was awarded the Nobel Prize in Physics by the Royal Swedish Academy of Sciences on October 8, is himself fearful of the invention that brought him the honour. Upon leaving Google in May 2023 after working there for a decade, he admitted that he had quit to speak freely about the dangers of AI. To him, AI is outpacing humanity's ability to control it. Consider the frustration of the AI enthusiasts, not least Google, who held him in high regard for his pioneering work on deep learning.

The Nobel Committee considered the pair for the prize in Physics because of their use of concepts from statistical physics in the development of artificial intelligence. John J. Hopfield, a physicist turned chemist turned biologist, proposed a simple neural network in 1982 at the California Institute of Technology (Caltech) to explain how memories are stored in the brain; he later returned to Princeton as a molecular biologist. That means neither scientist was a practising physicist when he received the Nobel Prize in Physics. Interestingly, though the two laureates got the prize for their seminal work on the advancement of AI, both have expressed concerns about the further development of the field to which they dedicated their careers. Unlike Geoffrey Hinton, however, John Hopfield was less dramatic, though no less apprehensive, in expressing his fears about the neural networks he worked on, which mimic the functioning of the human brain. Perhaps AI already does it better than the human brain and, alarmingly, even faster. He also warned of potential catastrophes if advances in AI research are not properly managed, and emphasised the need for a deeper understanding of deep learning systems so that technological development in the field does not go out of control.

The concerns raised by these two lead researchers in AI's advancement call to mind the 1975 Asilomar conference of biotechnologists on recombinant DNA molecules, organised at Asilomar State Beach in California, USA. The participants discussed the potential hazards of biotechnology and the need for its regulation. Some 140 biologists, lawyers and physicians took part, and they drew up a set of voluntary guidelines to ensure the safety of recombinant DNA technology, a genetic engineering technique that involves combining DNA from different species or creating new genes to alter an organism's genetic makeup.

In his interview with the Nobel Prize website, Geoffrey Hinton stressed that AI is indeed an existential threat and that we still do not know how to tackle it. There are other existential threats, such as climate change. But in that case not only scientists but also the general public know that the danger can be averted by not burning fossil fuels and not cutting down trees. That means humanity knows the answer to the threat posed by climate change; it is greedy businesses and politicians lacking the will that stand in the way of addressing it.

To avert the threat to humanity posed by unregulated AI, tech companies need to mobilise resources for research on safety measures.

Hinton thinks that the linguistics school of Noam Chomsky, for instance, is quite dismissive about AI's capacity for understanding things the way humans do. They (neural networks of AI) cannot process language like humans, the Chomsky School holds.

But Geoffrey Hinton thinks this notion is wrong since, in his view, neural nets process language better than the Chomsky school of linguistics might imagine.

The harm AI can do is already plain for all to see. AI-generated photos, videos and texts are flooding the internet, and it is hard to tell real content from fake. AI can also replace jobs, power lethal autonomous weapons that act on their own, and so on. Herein lies the existential threat.
 

Exploiting the potential of AI integration into Bangladesh university curriculums
Serajul I Bhuiyan
Published :
Nov 11, 2024 22:04
Updated :
Nov 11, 2024 22:04

1731372544672.png


AI has become a leading driver of change in industries all over the world, be it health, manufacturing, agriculture, or financial services. Its transformative potential is huge, especially in education, as it enables personalised learning, data-based insights, and the acquisition of critical skills demanded by the modern workforce. The inclusion of AI-based courses in university curricula is therefore not only innovative but essential for Bangladesh. Moving with the times, Bangladeshi universities, especially private ones, should seize the opportunity and prepare their students with the competencies they will need in an AI-centric future.

RISE OF THE DEMAND FOR AI SKILLS: A 2023 report by the World Economic Forum emphasised the acute need for digital literacy, machine learning, data analytics, and other AI-related skills as integral to future employability. In Bangladesh, industries such as textiles, telecommunication, banking, and logistics have already drawn up firm plans to integrate AI tools into their work. Bangladeshi universities have started integrating practical learning in AI, such as coding, predictive analytics, and automation tools. By including AI across the curriculum, universities can ensure that students develop theoretical knowledge alongside practical, multidisciplinary skills that keep them job-ready and adaptable in a fast-changing job market.

GOVERNMENT-INDUSTRY SYMBIOSIS: The post-Sheikh Hasina interim government envisages Smart Bangladesh as a fundamental promise to establish a digitally competitive economy. This ambitious plan reflects the realisation that integrating artificial intelligence skills into the workforce is no longer optional but a critical driver of future economic growth and innovation. Correspondingly, the government has emphasised the development of AI-related skills and encouraged active partnerships between industry and academia, in keeping with international trends.

This collaboration between academia and industry provides students with excellent opportunities to apply AI in practice through internships, hands-on projects, and similar activities. Such partnerships introduce students to the latest AI technologies firsthand, enabling them to explore applications in fields such as telecommunications and finance, among others. Drawing inspiration from successful models in India and Singapore, where AI integration ranges from research to governance to skill-building, Bangladesh is well placed to learn how to nurture an AI economy.

CHALLENGES OF INTEGRATING AI IN BANGLADESHI EDUCATION: While promising, the road to AI-integrated education in Bangladesh is beset with daunting challenges.

Infrastructure and skilled faculty shortages: Most universities in Bangladesh, especially public ones, lack the high-performance computing infrastructure, data labs, and software tools that are essential for effective AI education. A severe shortage of AI-trained faculty compounds the problem. Investment in faculty development, collaboration with top global institutions, and participation in knowledge-sharing platforms are therefore crucial.

Implementation cost: Setting up AI courses involves substantial costs, including building laboratory facilities, buying software licences, and training faculty on a continuous basis. Universities can be expected to meet this challenge through public-private partnerships, international funding, and grants, so that all institutions have the resources required to include AI in their programmes.

STRATEGIC RECOMMENDATIONS FOR A FUTURE-READY AI EDUCATION SYSTEM: Collaboration among universities, industries, and government holds the key to realising the full potential of AI in education. Here are some strategic recommendations that can help enable AI integration in Bangladeshi universities:

AI centers of excellence: Setting up centres focused on AI research, industry partnerships, and skill development can help build an innovation-oriented culture. For instance, such projects at United International University can set an example by involving students and faculty in advanced AI projects and related policymaking.

Encourage multidisciplinary AI learning: The study of AI should be open to students from all disciplinary backgrounds, including but not limited to business, healthcare, and engineering, so that interdisciplinary AI innovation is facilitated. This will ensure that students across fields have a preliminary understanding of where AI is taking their industries.

Leveraging AI-enhanced learning tools: Intelligent learning platforms can enable customised learning through knowledge-gap identification and adaptive learning content. Such adaptive tools create an inclusive and effective learning environment, catering to students' diverse learning styles and paces.

Offering online and blended learning: Offering AI courses and certifications online would increase access for students from all corners of the country, including those living in the most remote areas, enabling Bangladeshi universities to provide top-class AI content irrespective of geographical barriers.

Organising AI hackathons and competitions: Hackathons and competitions can motivate students to design AI-based solutions to real-world problems, encouraging collaboration, creative problem-solving, and the development of an entrepreneurial mindset.

Encouraging international collaboration: An internationally aligned AI curriculum and collaboration with institutions abroad would keep Bangladeshi universities competitive and current with global standards in AI education.

Including AI education at all levels: Initiatives that build foundational AI concepts should be prioritised at the primary and secondary levels to nurture AI literacy from a young age. This long-term approach will help cultivate a generation of tech-savvy individuals prepared to drive Bangladesh's digital future.

BREAKING ACADEMIC SILOS - UNIVERSITIES AS PIONEERS IN AI INTEGRATION: Bangladeshi universities such as North South University, BRAC University, Independent University, Bangladesh, American International University Bangladesh, the Institute of Business Administration at Dhaka University, East-West University, University of Liberal Arts Bangladesh, Daffodil International University, and United International University are uniquely placed to lead the AI revolution in the country. These institutions have both the resources and the academic influence to drive a national movement for AI literacy and skill development.

However, one widely held misconception stands in the way: far too many university leaders consider AI a tool relevant only to technical disciplines, principally computer science and engineering programmes. This is a narrow view, given that AI's potential benefits span widely, from economics and business to the social sciences. For instance, students of economics might use AI to create predictive market analytics, while students of the social sciences might use data to understand behaviours relevant to a community project. University leaders need to recognise that AI is fundamentally multidisciplinary and that its applications go far beyond technical fields: AI tools are transforming the social sciences, the liberal arts, business, healthcare, and many other areas.

For that reason, all academic leaders need to understand how AI applies to each discipline. In this way, universities can hope not only to raise the academic bar but also to equip students with the skills to address real-world challenges. Therein lies the opportunity for university leaders to acquaint themselves with the range of AI applications so that they can strategically design curricula that place their institutions at the forefront of academic excellence and prepare their students for meaningful contributions to the future of Bangladesh.

With a firm commitment to partnership, cross-disciplinary learning, and strategic investment, universities in Bangladesh could build the bedrock of a resilient and adaptable AI workforce. In line with visions such as Satya Nadella's, which underline the harmonious coexistence of humans and technology, Bangladesh has every reason to aspire to digital leadership in South Asia. The journey toward AI literacy is one of transformation, charting the route to a smart, sustainable future in which the nation's citizens are prepared to seize global opportunities.

Dr. Serajul I. Bhuiyan is professor and former chair of the Department of Journalism and Mass Communications and Georgia Governor’s AI Teaching Fellow, Savannah State University, Savannah, Georgia, USA.​
 

AI beyond engineering
Serajul I Bhuiyan
Published :
Nov 14, 2024 21:08
Updated :
Nov 14, 2024 21:08

1731634757896.png


In Bangladesh and much of South Asia, artificial intelligence (AI) is still commonly perceived as the domain of technical fields like computer science and engineering. However, this view limits the true potential of AI, which has become a transformative force across numerous disciplines, from medicine and business to environmental science, social sciences, and the arts. Recognising AI's broad, interdisciplinary applications, visionary universities worldwide are embedding AI into a diverse array of academic fields. This integration equips students not only with foundational AI knowledge but also with practical experience, preparing them to succeed in an evolving, technology-driven world.

This article highlights AI's applications beyond technical fields, showing how cross-disciplinary AI education prepares students for innovation, complex problem-solving, and adapting to a global workforce. By embracing AI across the curriculum, educational institutions can foster a new generation of professionals ready to lead in an AI-powered future.

AI IN BUSINESS AND MANAGEMENT: In today's globalised economy, AI is reshaping business by enabling data-driven decision-making, improving efficiency, and keeping companies competitive. Recognising AI's pivotal role, leading business schools worldwide are incorporating AI applications, ranging from predictive analytics and consumer behaviour modelling to financial forecasting and process automation, into their curricula. This equips students with essential skills to apply AI in marketing, finance, HR, and logistics, preparing them to thrive in technology-driven business environments. By blending theory with practical AI applications, these programs develop business professionals who can leverage AI for strategic advantage.

Business schools across the US, Europe, Asia, and Australia lead the way in AI education. Harvard Business School uses AI-driven analytics for marketing and strategic decisions, allowing students to analyse consumer behaviour and optimize pricing using real-world case studies. INSEAD, with campuses in France and Singapore, emphasizes AI's role in market analysis, consumer segmentation, and supply chain efficiency, tailoring AI applications to diverse business settings across Europe and Asia. Similarly, the National University of Singapore integrates AI for finance and customer management, preparing graduates for roles in the expanding FinTech sector. Other institutions, like London Business School and the University of Melbourne, offer courses on AI in digital transformation and personalised customer experiences, providing hands-on experience through industry collaborations.

The integration of AI in business education extends to areas like customer relationship management (CRM), supply chain logistics, financial forecasting, and HR. AI-enabled CRM tools let students design personalized customer experiences, essential in e-commerce and retail. AI-driven logistics models help students optimize supply chains, while in finance, students learn predictive modelling and risk analysis. HR applications allow students to use AI to predict workforce trends and enhance talent management. Mastering these applications, graduates from AI-empowered business programs gain a competitive edge, ready to lead AI initiatives aligned with strategic business goals.
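
As a hedged illustration of the "predictive modelling and risk analysis" strand mentioned above, the sketch below fits a logistic regression model to a tiny, invented customer dataset to estimate churn risk. The feature names and numbers are hypothetical and are not drawn from any programme described here.

# Minimal sketch: churn-risk prediction with logistic regression (illustrative data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per customer: [monthly_spend, months_as_customer, support_tickets]
X = np.array([
    [20, 24, 0],
    [55,  3, 4],
    [35, 12, 1],
    [60,  2, 5],
    [25, 30, 0],
    [50,  5, 3],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = churned, 0 = stayed

model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimate churn probability for a new, hypothetical customer.
new_customer = np.array([[45, 4, 2]])
print(f"Estimated churn risk: {model.predict_proba(new_customer)[0, 1]:.2f}")

Real coursework would of course involve much larger datasets, train/test splits and proper evaluation; this only shows the shape of the workflow students are expected to learn.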

AI IN HEALTHCARE AND LIFE SCIENCES: AI applications in healthcare are transformative, enhancing diagnostics, treatment planning, and patient care. Medical programs at institutions like Stanford University and Imperial College London integrate AI, training students in areas like image analysis in radiology, genomic sequencing, and predictive healthcare modelling. AI-based diagnostic tools, for instance, can detect diseases like cancer with remarkable precision. Students trained on such tools develop expertise in AI-driven healthcare, equipping them to improve patient outcomes in efficient, technology-enhanced healthcare systems. This hands-on approach prepares future professionals for tackling complex medical challenges, contributing to global health improvements.

AI IN ENVIRONMENTAL SCIENCE AND SUSTAINABILITY: Environmental science programs are increasingly using AI to address challenges such as climate change, resource management, and conservation. Institutions like ETH Zurich and Australia's University of Queensland incorporate AI into environmental studies, where students learn to monitor ecosystems, forecast weather patterns, and create renewable energy solutions. For example, students may use AI models to analyse satellite images for tracking deforestation, helping governments and NGOs respond to environmental degradation in real-time. This training enables graduates to apply data-driven insights toward sustainable environmental solutions, making meaningful contributions to global issues.

AI IN ENGINEERING AND MANUFACTURING: In fields like manufacturing, civil infrastructure, and robotics, AI is driving innovation and efficiency. Institutions such as MIT and the Technical University of Munich incorporate AI into their engineering programs, allowing students to simulate processes, optimize structures, and develop automated systems. In manufacturing, AI applications include predictive maintenance and logistics optimization, enhancing production efficiency and quality control. Robotics students, for example, design AI-powered robots for complex tasks like assembly or hazardous material handling. These real-world AI applications provide students with the practical skills needed to lead advancements in automation and smart manufacturing.

AI IN SOCIAL SCIENCES AND HUMANITIES: AI is also making an impact in social sciences and humanities, fields that were traditionally data-limited but are now rich with insights through AI analytics. Universities like Oxford and the University of Toronto use AI tools in sociology, psychology, and political science to study social patterns, behavioural trends, and policy impacts. Political science students can use predictive analytics to model election outcomes or assess public opinion shifts, while psychology students may apply AI-driven sentiment analysis to evaluate mental health trends on social media, identifying community needs. This interdisciplinary approach allows students to ground theoretical knowledge in empirical data, fostering a nuanced understanding of societal issues and enabling evidence-based solutions.

AI IN LAW AND PUBLIC POLICY: AI is also transforming law and public policy, fields that were traditionally considered resistant to technology. Institutions like the National University of Singapore and Stanford Law School now offer courses on AI applications in legal research, contract analysis, and case prediction. Students learn how AI can automate document reviews, identify precedents, and predict case outcomes based on past rulings. In public policy, AI enables the analysis of healthcare access, crime rates, and education quality, providing empirical evidence for data-driven policy decisions. This integration equips future professionals to address the ethical and regulatory challenges of AI, preparing them to create informed legislation that balances innovation with responsible AI use.

By embracing AI across disciplines, universities worldwide are preparing a generation of graduates ready to lead in an AI-powered world.

AI IN ARTS AND DESIGN: AI introduces an exciting new dimension to arts and design, opening frontiers for creativity and innovation. Programs at institutions like the Royal College of Art in the UK and Parsons School of Design in the US are incorporating AI into courses that explore generative design, digital art, and music composition. Through AI, students create immersive virtual reality experiences, develop algorithmic art, and even compose music using machine learning models. In media arts, AI enhances special effects, personalizes viewer experiences, and optimizes content recommendations. By merging AI with creative disciplines, universities empower students to push the boundaries of traditional art forms, creating a unique blend of technology and artistic expression that redefines the possibilities in arts.

AI IN JOURNALISM AND MASS COMMUNICATIONS: AI is redefining journalism and mass communications, reshaping how news is gathered, produced, and delivered in an age of rapid digital transformation. Top media organisations and journalism schools globally are adopting AI-powered tools to create a more agile, responsive, and insightful media landscape. AI tools analyze vast datasets, automate content generation, personalize news delivery, and detect misinformation, enabling journalists to produce content more quickly, accurately, and with greater impact. From automating routine news to enhancing fact-checking, AI allows newsrooms to cover stories efficiently, freeing journalists to delve into complex investigative work. Journalism students, in turn, learn to use AI technologies to meet emerging challenges and capitalise on AI-driven opportunities in media.

In practice, AI advances news automation, personalised content, investigative data analysis, and multimedia storytelling. Organisations like The Associated Press and Reuters employ AI algorithms to automate routine reports on financials, sports, and weather, allowing journalists to focus on in-depth stories. AI personalises content to readers' preferences, as seen with The New York Times and BBC, which recommend articles based on reader interests, enhancing engagement. AI-powered visual tools also streamline video editing, create immersive AR and VR content, and enable social listening on platforms like Crimson Hexagon, tracking public sentiment on key issues. Predictive analytics helps outlets like The Guardian forecast trends, while translation tools like Google Translate broaden reach globally. Journalism programs now integrate training in these applications, preparing students for a field where tech savvy complements traditional reporting skills. As students consider the ethical aspects of AI, addressing privacy, bias, and responsible use, they are positioned to lead with integrity in a fast-evolving digital world.
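
The routine-report automation described above often begins with something far simpler than a large language model: filling structured data into a narrative template. The sketch below is a minimal, plain-Python illustration of that idea; the fixture data and wording are invented and do not reflect how AP or Reuters actually generate copy.

# Minimal sketch: template-based sports report generation from structured data.
# The match data below is invented for illustration.
match = {
    "home": "Abahani", "away": "Mohammedan",
    "home_goals": 2, "away_goals": 1,
    "venue": "Bangabandhu National Stadium",
}

# Choose a headline pattern based on the result, then fill in the details.
if match["home_goals"] == match["away_goals"]:
    headline = (f"{match['home']} and {match['away']} draw "
                f"{match['home_goals']}-{match['away_goals']}")
else:
    winner, loser = ((match["home"], match["away"])
                     if match["home_goals"] > match["away_goals"]
                     else (match["away"], match["home"]))
    headline = (f"{winner} beat {loser} "
                f"{max(match['home_goals'], match['away_goals'])}-"
                f"{min(match['home_goals'], match['away_goals'])}")

report = f"{headline} at {match['venue']}."
print(report)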

TRANSFORMATIVE IMPACT: By embedding AI across disciplines, universities create a space where students experience AI's transformative potential in varied contexts. This cross-functional approach produces graduates who are not only experts in their fields but also proficient in AI, ready to work in interdisciplinary teams and adapt to fast-changing, tech-driven environments. Real-world projects, internships, and AI-driven lab work help students to bridge academic knowledge with industry practice, equipping them to contribute meaningfully from day one in the workforce. This blend of theory and practice prepares students to drive change and innovation across industries.

BUILDING A RESILIENT WORKFORCE FOR AN AI-DRIVEN WORLD: In a global economy that values digital proficiency and technological expertise, universities that embrace AI across disciplines position their graduates for lasting success. Integrating AI into diverse fields ensures that students gain foundational knowledge while also learning to apply AI in industry-relevant ways. This mix of theory and hands-on experience fosters agile, innovative graduates equipped to navigate the complexities of an AI-powered future. As AI advances, this approach cultivates a resilient workforce capable of adapting and harnessing AI responsibly to address real-world challenges and contribute to societal progress.

Dr Serajul I Bhuiyan is a professor and former chair of the Department of Journalism and Mass Communications at Savannah State University, Savannah, Georgia, USA, and a Georgia Governor's AI Teaching Fellow at the Louise McBee Institute of Higher Education, University of Georgia, Athens, USA.​
 

Is AI’s meteoric rise beginning to slow?

1731976074334.png

OpenAI CEO Sam Altman (L) shakes hands with Microsoft Chief Technology Officer and Executive VP of Artificial Intelligence Kevin Scott during an event in Seattle. OpenAI recently raised $6.6 billion to fund further advances. Photo: AFP

A quietly growing belief in Silicon Valley could have immense implications: the breakthroughs from large AI models, the ones expected to bring human-level artificial intelligence in the near future, may be slowing down.

Since the frenzied launch of ChatGPT two years ago, AI believers have maintained that improvements in generative AI would accelerate exponentially as tech giants kept adding fuel to the fire in the form of data for training and computing muscle.

The reasoning was that delivering on the technology's promise was simply a matter of resources: pour in enough computing power and data, and artificial general intelligence (AGI) would emerge, capable of matching or exceeding human-level performance.
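
That "more resources, better models" reasoning is usually expressed as an empirical power-law scaling curve, in which each additional order of magnitude of compute buys a smaller absolute improvement. The sketch below fits such a curve, of the assumed form loss = a * compute^(-b) + c, to invented numbers using SciPy, simply to show how the curve flattens toward a floor; the data points are illustrative, not real benchmark results.

# Minimal sketch: fitting a power-law-plus-floor scaling curve to illustrative numbers.
# Assumed form: loss(x) = a * x**(-b) + c, where x is training compute in relative units.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(x, a, b, c):
    return a * x ** (-b) + c

compute = np.array([1.0, 10.0, 100.0, 1e3, 1e4])   # hypothetical compute budgets
loss = np.array([3.10, 2.51, 2.14, 1.90, 1.75])    # hypothetical evaluation losses

(a, b, c), _ = curve_fit(scaling_law, compute, loss, p0=[1.0, 0.3, 1.0])

# Extrapolating: each further 10x in compute buys less and less, approaching the floor c.
for x in (1e5, 1e6):
    print(f"compute={x:.0e}  predicted loss={scaling_law(x, a, b, c):.2f}  (floor = {c:.2f})")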

Progress was advancing at such a rapid pace that leading industry figures, including Elon Musk, called for a moratorium on AI research.

Yet the major tech companies, including Musk's own, pressed forward, spending tens of billions of dollars to avoid falling behind.

OpenAI, ChatGPT's Microsoft-backed creator, recently raised $6.6 billion to fund further advances.

xAI, Musk's AI company, is in the process of raising $6 billion, according to CNBC, to buy 100,000 Nvidia chips, the cutting-edge electronic components that power the big models.

However, there appear to be problems on the road to AGI.

Industry insiders are beginning to acknowledge that large language models (LLMs) aren't scaling endlessly higher at breakneck speed when pumped with more power and data.

Despite the massive investments, performance improvements are showing signs of plateauing.

"Sky-high valuations of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence," said AI expert and frequent critic Gary Marcus. "As I have always warned, that's just a fantasy."

One fundamental challenge is the finite amount of language-based data available for AI training.

According to Scott Stevenson, CEO of AI legal tasks firm Spellbook, who works with OpenAI and other providers, relying on language data alone for scaling is destined to hit a wall.

"Some of the labs out there were way too focused on just feeding in more language, thinking it's just going to keep getting smarter," Stevenson explained.

Sasha Luccioni, researcher and AI lead at startup Hugging Face, argues a stall in progress was predictable given companies' focus on size rather than purpose in model development.

"The pursuit of AGI has always been unrealistic, and the 'bigger is better' approach to AI was bound to hit a limit eventually -- and I think this is what we're seeing here," she told AFP.

The AI industry contests these interpretations, maintaining that progress toward human-level AI is unpredictable.

"There is no wall," OpenAI CEO Sam Altman posted Thursday on X, without elaboration.

Anthropic's CEO Dario Amodei, whose company develops the Claude chatbot in partnership with Amazon, remains bullish: "If you just eyeball the rate at which these capabilities are increasing, it does make you think that we'll get there by 2026 or 2027."

Nevertheless, OpenAI has delayed the release of the awaited successor to GPT-4, the model that powers ChatGPT, because its increase in capability is below expectations, according to sources quoted by The Information.

Now, the company is focusing on using its existing capabilities more efficiently.

This shift in strategy is reflected in their recent o1 model, designed to provide more accurate answers through improved reasoning rather than increased training data.

Stevenson said an OpenAI shift to teaching its model to "spend more time thinking rather than responding" has led to "radical improvements".

He likened the advent of AI to the discovery of fire. Rather than tossing on more fuel in the form of data and computing power, it is time to harness the breakthrough for specific tasks.

Stanford University professor Walter De Brouwer likens advanced LLMs to students transitioning from high school to university: "The AI baby was a chatbot which did a lot of improv" and was prone to mistakes, he noted.

"The homo sapiens approach of thinking before leaping is coming," he added.​
 
