Courses

The New Geography 

Overview

This course focuses on socio-technical problems caused by humanity blindly stumbling its way into the Information Age. Our new world has new rules: intellectual property looks different, cyber-crime looms large, cold cyber-warfare persists at a nation-state level, planet-scale surveillance is commonplace, we’re all about to lose our jobs to robots, and the list goes on.

We shall study the rise of fake news and nation-state propaganda, the nature of sensitive information and the importance of privacy, and the deeper structural issues (such as the nature of the internet, the laws of scale, and the direction of technological progress, especially in AI) that underlie many of our problems.

While this course shall be interesting for (computer science) experts and non-experts alike, we’ll hold extra sessions for non-expert students: they should be willing to get their hands dirty! Students will also be expected to do some background reading on the history of the internet, cyber-crime, etc.

The syllabus is broken into five theses:

  • Resources: Humans, Money, and Material (includes data)
  • Control: Propaganda, Surveillance, Crimes, War
  • Flow: Structure of the Internet and Sensitive Information
  • Delusion: The internet’s effects on cultural heritage and identity
  • Robots: How large-scale data analytics and ML affect all of the above

Learning Outcomes

You should come away with a broader, deeper, and more nuanced understanding of the many facets of the interplay between computing and human society, from the economics and supply chains involved, to elections and propaganda, to individual-level behavioral changes.

Requirements (Reading List and other materials)

  • Bernays; Propaganda
  • Shambaugh; China’s Propaganda System: Institutions, Processes and Efficacy
  • Zuboff; The Age of Surveillance Capitalism
  • Levine; Surveillance Valley
  • Graeber; Bullshit Jobs
  • Gray; Ghost Work
  • Work; US War Doctrine in the Robotic Age
  • OpenAI Team; GPT-3 Documentation
  • Johns; Piracy
  • Shaxson; Treasure Islands

An Overview of Usable Privacy and Security

Overview

The scale and scope of digitisation and use of AI in public life in India are unmatched in the world, especially in large public service applications.

While the potential of such digitisation and AI applications for public good is unquestionable, they do raise serious social, ethical and legal questions [Zuboff 2018, Eubanks 2018, Harari 2020] whose mitigation, and compliance with regulation, pose new challenges for computer science. In fact, such systems have been difficult to operationalise anywhere in the world. Many attempts at building large public services like national identity systems [The LSE report 2005], health registries [Temperton 2016, Charette 2018, Shrikanth and Parkin 2019, Hecht 2019], national population and voter registries [Zetter 2018, Tripathi 2019, Pal 2018], public credit registries [epic.org 2019, Chugh and Raghavan 2019], income [Uutiset 2018] and tax registries [Houser and Sanders 2017], etc. have often been questioned on fairness, privacy and other ethical grounds. The concerns have invariably related to the need for protective safeguards when large data integration projects are contemplated, and to acknowledgment of the bias, exclusion, discrimination, privacy and security problems that these could create. In some situations such projects have even had to be abandoned altogether because they were unable to deal with these risks [Temperton 2016, Charette 2018, GOV.UK 2011]. In fact, there are very few examples of successful systems free of controversy. In India too, the use of our national digital identity programme was restricted to welfare disbursal by the majority judgement on Aadhaar, and the minority judgement found it to be unconstitutional in its entirety [Puttaswamy 2018]. Very similar considerations have led to the judicial halting of biometric-based national identity programmes in Jamaica [Kini 2019] and Kenya [England 2020].

The concerns around digitisation and privacy [Bansal 2021, Puttaswamy 2017, Puttaswamy 2018] not only require new computer science techniques for analysis, design and implementation, but also require perspectives beyond computer science [National Academies US 2022]. In fact, though there have been ongoing debates around the data protection bill for several years now [Planning Commission 2011, Srikrishna et al. 2017], its scope and structure are still open questions, and it has yet again been withdrawn from Parliament, essentially for lack of agreement on its final shape.

In this course we will first try to understand the nuances of privacy and other ethical concerns related to such applications from legal and ethical perspectives. We will then consider issues in developing precise and actionable operational models for these concerns. Finally, we will discuss possible methods for addressing and mitigating these concerns, and review the effectiveness of standard computer science tools used to address privacy and related issues, including encryption, anonymisation, zero-knowledge proofs, secure multi-party computation, hardware enclaves and trusted execution environments, differential privacy, fairness definitions in AI, and doctrines of discrimination.
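To give a flavour of one of these tools, the following is a minimal, illustrative sketch of the Laplace mechanism for differential privacy, written in Python. It is not course material; the dataset, query and epsilon value are purely hypothetical.

    import numpy as np

    def dp_count(values, predicate, epsilon=1.0):
        # A counting query has sensitivity 1: adding or removing one record
        # changes the count by at most 1, so Laplace noise with scale
        # 1/epsilon gives epsilon-differential privacy for this query.
        true_count = sum(1 for v in values if predicate(v))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Hypothetical query: how many people in a registry are over 60?
    ages = [23, 67, 45, 71, 34, 62, 58]
    print(dp_count(ages, lambda a: a > 60, epsilon=0.5))

Smaller values of epsilon add more noise and therefore give stronger privacy guarantees at the cost of query accuracy, a trade-off the course will examine.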

The course evaluations will be based on discussions, reading and presentations.

Learning Outcomes

An introduction to the issues in the digitisation of large public-service applications.

Requirements (Reading List and other materials)

References:


Digitalization and Privacy

Overview

India is arguably the biggest deployer of Digital Public Goods (DPGs: digital systems in public life), with large public service applications (in use or contemplated) such as national identity, phone-based payment systems, electronic voting, a national-level health registry, national population and voter registries, a public credit registry, income and other tax registries, face-recognition-based access control at airports and other facilities, Bluetooth-based contact tracing, and a national intelligence grid. It is undeniable that these DPGs have had a huge impact on public life over the last decade.

However, these systems also carry risks of exclusion, increased transaction costs, and privacy violations, especially for a population in which digital literacy is low. The privacy judgement of the Supreme Court of India read all such risks into Articles 14, 19 and 21 of the Indian Constitution and broadly classified them as ‘privacy’. However, the technical and operational standards for such privacy protection are not yet well developed. This has led to constant tension between the state on the one hand and civil society and privacy activists on the other, resulting in several constitutional cases in the Supreme Court and various High Courts. The possibility of inferential privacy and other human rights violations through modern machine learning, whether deliberate or inadvertent, and of unfair and discriminatory processing of data, compounds the problem.

In this course we will unpack the privacy and other human rights requirements in such applications from both legal and technical points of view. We will investigate the possibilities of early alignment of the two and examine whether it is possible to outline necessary and sufficient conditions for privacy protection, as envisaged by the privacy judgement of the Supreme Court of India. We will review the privacy-enhancing techniques in computer science, including encryption and applied cryptography, electronic voting, database and network security, trusted execution environments, blockchains, and anonymisation and other data minimisation techniques, and evaluate their suitability and efficacy for privacy protection. In the final part of the course we will investigate the architectural possibilities for privacy protection, from both legal and technical perspectives, that may help not only in design but also in assessing vulnerabilities and omissions.
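As a small illustration of the data minimisation techniques mentioned above, here is a minimal, illustrative Python sketch (not course material) of pseudonymisation: a direct identifier is replaced with a salted one-way hash before a record is shared. The field names and salt handling are hypothetical, and pseudonymisation on its own does not prevent linkage or inference attacks, which is precisely the kind of limitation the course will examine.

    import hashlib
    import secrets

    SALT = secrets.token_bytes(16)  # in practice, the salt must be managed and rotated securely

    def pseudonymise(identifier: str) -> str:
        # Replace a direct identifier with a salted, one-way token.
        return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

    # Hypothetical record from a registry; only minimised fields are shared onward.
    record = {"id": "1234-5678-9012", "district": "Sonipat", "age_band": "30-39"}
    minimised = {
        "token": pseudonymise(record["id"]),
        "district": record["district"],
        "age_band": record["age_band"],
    }
    print(minimised)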

Evaluation in this course will be based on scribing, reading and presentations, small implementations, and a project-cum-term paper.

Learning Outcomes

You should gain an understanding of the privacy and other human rights requirements in large public-service applications from both legal and technical points of view, familiarity with the main privacy-enhancing techniques in computer science and their suitability and efficacy for privacy protection, and an appreciation of the architectural possibilities for privacy protection that may help not only in design but also in assessing vulnerabilities and omissions.

Requirements (Reading List and other materials)

The reading material for this course will be based on handouts, articles and research papers, Supreme Court judgements, and basic cryptography and system security texts.

CS training is not explicitly required for this course. However, familiarity with computational thinking will be very helpful.
