AI Accountability: Who Is Responsible When Artificial Intelligence Goes Wrong?

Artificial intelligence (AI) has emerged as one of the most transformative technologies of the 21st century, rapidly permeating diverse sectors such as healthcare, finance, transportation, and law enforcement. Its ability to process vast volumes of data with unprecedented speed and accuracy offers the potential to revolutionise industries, streamline processes, and improve decision-making. From diagnosing medical conditions with remarkable precision to enabling autonomous vehicles to navigate complex urban environments, AI promises far-reaching benefits.

However, as AI systems become increasingly integrated into critical decision-making processes, the question of accountability for erroneous or harmful outcomes becomes both urgent and complex. High-profile incidents, ranging from self-driving car accidents to biased hiring algorithms and AI-driven medical misdiagnoses, have underscored the real-world consequences of AI errors. These incidents raise a pressing question: when AI causes harm, who should bear responsibility? The developer who designed the system, the organisation that deployed it, or the AI itself?

The challenge is compounded by AI’s “black box” nature, where the reasoning behind decisions is often opaque, and by the global nature of AI deployment, which complicates jurisdiction and legal harmonisation. Existing liability frameworks, designed for human decision-makers and conventional products, often struggle to accommodate AI’s autonomous and evolving characteristics. Without robust legal and ethical standards, the use of AI risks perpetuating bias, infringing privacy, and undermining public trust.

Defining AI accountability

AI accountability refers to the systems, policies, and practices that ensure every stakeholder, from developers and deployers to end users, is held responsible for the outcomes of AI-driven decisions.

The rapid growth of artificial intelligence has been driven by several converging factors. The explosion of data generated by social media platforms, sensors, connected devices, and other digital sources has provided AI systems with the vast datasets needed to improve their performance. At the same time, advances in computing technologies, such as high-performance graphics processing units (GPUs) and scalable cloud infrastructure, have made it possible to train and deploy complex AI models that were unimaginable just a few years ago. These developments have accelerated AI adoption across sectors, enabling automation of time-intensive tasks, optimisation of decision-making processes, and the delivery of unprecedented efficiencies.

As AI becomes embedded in critical functions, the question of accountability takes on central importance. It extends beyond assigning blame, encompassing ethical obligations to design, develop, and deploy AI in ways that are fair, transparent, and aligned with societal values. Without clear accountability, trust in AI erodes, and the risks of bias, privacy violations, and harmful errors increase.

The challenge lies in the complexity and opacity of many AI systems. Machine learning models can evolve over time, making their decision-making processes difficult to trace, while “black box” architectures limit even their creators’ ability to explain specific outputs. Moreover, the involvement of multiple actors, ranging from engineers and product managers to executives and regulators, blurs the lines of responsibility. Different jurisdictions are beginning to respond with regulatory frameworks such as the EU AI Act and its accompanying liability proposals, which combine strict and fault-based models, but harmonising such standards globally remains a work in progress.
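To make the transparency problem concrete, the sketch below applies one common post-hoc explanation technique, permutation feature importance, to an otherwise opaque model: each input feature is shuffled in turn, and the drop in held-out accuracy indicates how much the model relies on it. This is a minimal illustration rather than a method prescribed by any regulation; the synthetic data, feature names, and scikit-learn model are assumptions made for the example.

```python
# Minimal sketch: post-hoc explanation of an opaque model via permutation
# feature importance. The dataset and feature names are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                           random_state=0)
feature_names = ["income", "age", "tenure", "utilisation", "inquiries", "region"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean,
                           result.importances_std):
    print(f"{name:12s} importance = {mean:.3f} +/- {std:.3f}")
```

Reports of this kind do not open the black box entirely, but they give developers and deployers something concrete to document when explaining why a system behaved as it did.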

Challenges in assigning accountability

Challenges in assigning accountability for AI-related harm arise from several interlinked factors. Existing legal frameworks were not designed with AI in mind, making it difficult to determine whether responsibility should fall on developers, deployers, or end users in cases such as autonomous vehicle accidents. The complexity of AI systems, which often involve multiple stakeholders, from developers and data providers to organisations and end users, further blurs the lines of responsibility. This difficulty is compounded by AI’s capacity to make decisions autonomously, without human intervention, which complicates the attribution of liability. Additionally, bias or errors in training data can shift accountability toward data providers rather than developers, as illustrated by the widely reported case in which a major U.S. healthcare risk-prediction algorithm was found to discriminate against Black patients because of biased training data. Finally, the “black box” nature of many AI models limits transparency, making it challenging to trace the source of errors and assign responsibility effectively.
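As a concrete illustration of how such data-driven bias might be caught before it causes harm, the sketch below compares favourable-outcome rates across demographic groups and flags a low ratio for human review. It is a minimal, hypothetical example: the group labels, simulated predictions, and the 0.8 rule-of-thumb threshold are assumptions, not the method used in the healthcare case mentioned above.

```python
# Minimal sketch: checking model outcomes for disparate impact across groups.
# The data, group labels, and 0.8 threshold are illustrative assumptions.
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Share of favourable (positive) predictions per demographic group."""
    return {g: predictions[groups == g].mean() for g in np.unique(groups)}

def disparate_impact_ratio(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Lowest group selection rate divided by the highest (1.0 = parity)."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

rng = np.random.default_rng(0)
groups = rng.choice(["group_a", "group_b"], size=5000)
# Simulated model outputs that favour group_a, standing in for a real model.
predictions = (rng.random(5000) < np.where(groups == "group_a", 0.45, 0.30)).astype(int)

ratio = disparate_impact_ratio(predictions, groups)
print(f"selection rates: {selection_rates(predictions, groups)}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb threshold, not a legal standard
    print("Potential disparate impact: flag for human review and remediation.")
```

Checks of this kind do not resolve who is ultimately liable, but they create evidence about where in the pipeline, data or model, the problem originated.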

Who is responsible for AI?


Determining who bears responsibility when an AI system makes an error, causes harm, or produces misleading information is a complex question with no single, universal answer. The outcome often depends on the specific context in which the AI is deployed, the nature of the system itself, and the applicable jurisdiction’s legal framework. Nevertheless, several key stakeholders consistently emerge as central to AI accountability.

  • Developers: Developers are responsible for designing, training, and testing AI systems to ensure accuracy, reliability, and fairness. This includes addressing bias in training data, anticipating potential misuse, and incorporating safeguards to prevent harmful outcomes. If an AI’s failure can be traced to flaws in its design or training, developers may be held liable. For example, if a facial recognition tool disproportionately misidentifies individuals from certain demographic groups due to inadequate dataset diversity, the responsibility lies with those who created and trained the system.
  • Organisations: Organisations that deploy AI systems bear significant responsibility for ensuring they are used ethically, legally, and as intended. This includes continuously monitoring system performance, detecting signs of bias or error, and promptly addressing any identified issues; a minimal decision-logging sketch of the record-keeping this implies appears after this list. Companies must also ensure compliance with relevant legal and ethical standards. If a business knowingly deploys a flawed AI tool or ignores foreseeable risks, it may be held accountable for resulting harm.
  • Data Providers: The quality, fairness, and representativeness of the data used to train AI systems directly affect performance. Data providers play a crucial role in ensuring datasets are free from systemic biases and inaccuracies. Poor-quality data can lead to flawed decision-making, making data providers a key link in the accountability chain.
  • End Users: While AI is often autonomous, human oversight remains essential. Users who intentionally misuse AI or operate it outside its intended purpose may be held responsible for resulting harm. For instance, if an individual knowingly uses an AI system to spread misinformation or commit fraud, liability rests with the user.
  • Regulators: Regulatory bodies are instrumental in setting standards, creating oversight mechanisms, and enforcing accountability in AI deployment. They establish frameworks that prioritise safety, transparency, and fairness, such as the European Union’s AI Act and its accompanying liability proposals, which combine strict and fault-based regimes. Regulators also monitor compliance, mandate reporting of AI-related incidents, and ensure organisations adhere to established ethical and legal guidelines.
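As noted in the organisations item above, accountability in practice depends on being able to reconstruct, after the fact, which model, version, and deployment owner produced a given decision. The sketch below shows a minimal, hypothetical audit-trail record a deploying organisation might keep for each automated decision; the field names, hashing choice, and file-based storage are illustrative assumptions, not a mandated format.

```python
# Minimal sketch: an audit-trail record for each automated decision, so a
# deploying organisation can later trace which model version, input, and
# operator were involved. Field names and storage are illustrative only.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_name: str
    model_version: str
    input_digest: str      # hash of the input, so raw personal data is not stored
    output: str
    deployed_by: str       # organisational owner of the deployment
    timestamp: str

def log_decision(model_name: str, model_version: str, raw_input: dict,
                 output: str, deployed_by: str,
                 log_path: str = "decisions.log") -> DecisionRecord:
    record = DecisionRecord(
        model_name=model_name,
        model_version=model_version,
        input_digest=hashlib.sha256(
            json.dumps(raw_input, sort_keys=True).encode()).hexdigest(),
        output=output,
        deployed_by=deployed_by,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open(log_path, "a") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")
    return record

# Example: record a hypothetical loan decision made by a deployed model.
log_decision("credit_scorer", "2.3.1",
             {"applicant_id": "A-1001", "income": 52000},
             output="declined", deployed_by="Retail Lending Ops")
```

A log of this kind gives developers, deployers, and regulators a shared factual basis when responsibility for a specific outcome is later contested.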

Examples of AI Accountability

Real-world cases of AI accountability highlight how different industries are addressing the ethical, legal, and operational challenges of artificial intelligence while striving for transparency and fairness. These examples demonstrate both the potential benefits of AI and the need for robust oversight.

  • Healthcare: In healthcare, AI diagnostic tools are improving disease detection accuracy and reducing bias. For instance, the TREWS AI system developed at Johns Hopkins identifies early signs of sepsis, detecting 82% of cases, with its alerts proving accurate nearly 40% of the time. Patients monitored by TREWS are about 20% less likely to die because clinicians can intervene earlier, underscoring AI’s life-saving potential when implemented responsibly. AI is also being used to create personalised cancer treatments tailored to an individual’s genetic profile, further enhancing precision medicine.
  • Finance: In the financial sector, AI automates loan approvals by analysing borrower data to determine eligibility and loan amounts. These systems aim to improve fairness by assessing creditworthiness without human bias, promoting financial inclusivity. AI also plays a critical role in detecting fraudulent activities during credit assessments, helping secure transactions and reduce risk. Beyond lending, AI powers robo-advisors, algorithmic trading platforms, and advanced risk management tools.
  • Transportation: AI is revolutionising transportation through self-driving cars and trucks, reinforcement learning algorithms that teach vehicles to navigate safely, and systems that improve traffic flow or optimise public transit. Reinforcement learning rewards safe manoeuvres and penalises mistakes, allowing autonomous vehicles to improve over time while adhering to safety standards; a toy sketch of this reward-and-penalty loop appears after this list.
  • Education: In education, AI enables personalised learning platforms, automated grading, and real-time feedback for students. These systems adapt content to each learner’s needs, helping educators identify gaps and improve instruction quality. AI-powered essay grading and feedback tools also provide consistent evaluations and help streamline teacher workloads.
  • Customer Service: AI-driven chatbots and virtual assistants now provide 24/7 customer support, answering queries, resolving issues, and analysing customer data to identify trends. This enables companies to respond more quickly to customer needs while improving service quality.
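As referenced in the transportation item above, the sketch below is a toy tabular Q-learning loop for a simplified, one-dimensional lane-keeping task: the agent earns a reward for staying centred and a penalty for drifting to the lane edge. The states, actions, and reward values are invented for illustration and do not represent any real vehicle system.

```python
# Toy sketch of the reward-and-penalty idea behind reinforcement learning for
# driving: the agent is rewarded for staying centred in a 1-D "lane" and
# penalised for drifting toward its edges. States, actions, and rewards are invented.
import numpy as np

N_POSITIONS = 5          # discrete lane positions; index 2 is the centre
ACTIONS = [-1, 0, 1]     # steer left, hold, steer right
rng = np.random.default_rng(0)
q_table = np.zeros((N_POSITIONS, len(ACTIONS)))

alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(position: int, action: int) -> tuple[int, float]:
    """Apply the steering action; reward centre-keeping, penalise leaving the lane."""
    new_pos = int(np.clip(position + action, 0, N_POSITIONS - 1))
    if new_pos == 2:
        reward = 1.0          # safe: centred in the lane
    elif new_pos in (0, N_POSITIONS - 1):
        reward = -1.0         # unsafe: at the lane edge
    else:
        reward = -0.1         # drifting: mild penalty
    return new_pos, reward

for _ in range(500):          # training episodes
    pos = int(rng.integers(N_POSITIONS))
    for _ in range(20):       # steps per episode
        a_idx = (int(rng.integers(len(ACTIONS))) if rng.random() < epsilon
                 else int(np.argmax(q_table[pos])))
        new_pos, reward = step(pos, ACTIONS[a_idx])
        # Standard Q-learning update.
        q_table[pos, a_idx] += alpha * (
            reward + gamma * q_table[new_pos].max() - q_table[pos, a_idx])
        pos = new_pos

print("Learned steering choice per lane position:",
      [ACTIONS[int(i)] for i in q_table.argmax(axis=1)])
```

After training, the learned policy steers back toward the centre from either side, mirroring how repeated reward and penalty shape safer behaviour over time.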

Conclusion

Accountability in AI is a cornerstone for safe, ethical, and trustworthy technology deployment. It requires clear definitions of responsibility, from developers who design the algorithms to organizations that implement them, regulators who set and enforce standards, and users who engage with these systems. By embracing transparency, rigorous testing, and global best practices, stakeholders can collectively mitigate risks and ensure AI benefits society. As AI capabilities grow, maintaining accountability will be essential to fostering public trust and preventing harm, ensuring that these powerful systems serve humanity with fairness, responsibility, and integrity.
