Byte-Sized Economics: Navigating My Nerdy Niche at CEDA

Armed with insights and backed by the support of professors and the CEDA team, I delved deep into optimising my approach, employing parallel processing, efficient data handling, better compression techniques and picking up cloud computing on the go, writes Tanish Bafna

When I first arrived at Ashoka, I was often struck by the passion and clarity with which my peers explained the brilliant intersectionality of their chosen disciplines, while I, wide-eyed and curious, scrambled to figure out the perfect partner for economics. To my disappointment, even when a spontaneous tryst with computer science brought out the tech-savvy side of me, the separation between the two fields seemed palpable. It was not until I joined CEDA that my interests collided and a synergy emerged, opening up a world of interdisciplinary opportunities.

Looking back a bit further, it all began with a rather mundane annoyance: the lack of notifications for course grades on AMS. I found myself incessantly refreshing the university’s portal, caught in a cycle of anticipation and frustration. This led to my first experiment with web scraping: crafting a tool to notify me of grade updates. Little did I know, this small venture was setting the stage for my rendezvous with CEDA. As opportunities arose, I noticed many of my peers gravitating towards the more conventional economics research roles, while the technical post at CEDA beckoned me. The potential of scraping invaluable but inaccessible data and pioneering new datasets for research resonated with both the economist and the coder in me. The task was challenging, but the overarching mission was compelling enough: simplify complex data to remove barriers to innovative research.
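In spirit, that first grade-notification tool was little more than a polling loop. Here is a minimal sketch of the idea, with the portal URL, polling interval, and notification step all as illustrative placeholders rather than the actual AMS setup:

```python
import hashlib
import time

import requests

# Hypothetical stand-ins: the real AMS portal needs a login session
# and has its own markup, none of which is shown here.
GRADES_URL = "https://ams.example.edu/grades"
POLL_SECONDS = 600  # check every ten minutes

def fingerprint(html: str) -> str:
    """Hash the page so any change in its contents is detectable."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def watch_grades() -> None:
    last_seen = None
    while True:
        html = requests.get(GRADES_URL, timeout=30).text
        current = fingerprint(html)
        if last_seen is not None and current != last_seen:
            # Any notifier works here: email, a Telegram bot, a desktop alert.
            print("Grades page changed -- go check the portal!")
        last_seen = current
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    watch_grades()
```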

The road, however, was far from smooth. My early elation at a five-day success scraping the Department of Consumer Affairs (DoCA) website was soon met with the behemoth challenge of the Agmarknet website. Pairing an archaic interface with golden data, this mammoth task required more than just traditional methods. Challenges arose faster than solutions: my personal computer’s limitations, the vastness of the data, and the need for optimisation across time, storage, and network alike. Every hurdle was a lesson, a notch in the long journey of becoming a better problem-solver. Luckily, in this ecosystem, the learning curve was not a solo expedition. CEDA believed in guidance without hand-holding. While the team boasted seasoned coders familiar with many of the challenges I encountered, they only nudged me towards possible solutions rather than providing direct answers. Armed with insights and backed by the support of professors and the CEDA team, I delved deep into optimising my approach: employing parallel processing, efficient data handling and better compression techniques, and picking up cloud computing on the go.
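To give a flavour of the first of those optimisations, here is a hedged sketch of parallelising page downloads with a thread pool; the URLs below are placeholders, since the real Agmarknet site involves form posts and session state that I am glossing over:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

# Placeholder URLs standing in for the thousands of Agmarknet report pages.
urls = [f"https://agmarknet.example.gov.in/report?page={i}" for i in range(100)]

def fetch(url: str) -> tuple[str, int]:
    """Download one page; parsing and storage would follow in the real scraper."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return url, len(response.content)

# Scraping is network-bound, so threads keep many requests in flight
# at once instead of waiting on each page serially.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(fetch, url) for url in urls]
    for future in as_completed(futures):
        try:
            url, size = future.result()
            print(f"fetched {url} ({size} bytes)")
        except requests.RequestException as err:
            print(f"fetch failed: {err}")  # retry/backoff logic would hook in here
```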

Automation, however, was my final frontier. Even as my scrapers became more sophisticated, they still clung to human intervention, because with scraping came a myriad of potential errors, each demanding its own customised correction. Moreover, if the open-source vision of a consistently updated data tool was to be achieved, the daily and monthly ritual of setting up a coding environment and pressing “run” had to evolve.

Refactoring my code to be self-sufficient, while daunting at first, metamorphosed into an exercise in innovative problem-solving. I realised that the past few months of slowly chipping away at these websites’ antiquated architecture had given me a blueprint of their fault lines. Knowing these, my refactoring introduced the necessary failsafes to minimise oversight. By embracing Docker and AWS pipelines, I tried to automate the scrapers further, to ensure my work continued to deliver on its research promises rather than bugging out and burdening future developers. This was when I realised that CEDA had instilled in me a philosophy I would carry with me henceforth: strive not just for a solution that works, but for one that works best.
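To make the failsafe idea concrete, here is a small sketch (not CEDA’s actual code) of the kind of entrypoint a containerised scraper might use: it retries transient failures, logs each attempt, and exits non-zero so a scheduled Docker or AWS job can flag the run as failed rather than dying silently:

```python
import logging
import sys
import time

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("scraper")

MAX_ATTEMPTS = 3

def run_scrape() -> None:
    """Stand-in for the real job; the fetch, parse, and upload steps go here."""
    log.info("scraping...")

def main() -> int:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            run_scrape()
            log.info("succeeded on attempt %d", attempt)
            return 0
        except Exception:
            log.exception("attempt %d failed", attempt)
            time.sleep(60 * attempt)  # back off before retrying
    # A non-zero exit code lets the scheduler surface the failure
    # instead of silently burdening a future developer.
    return 1

if __name__ == "__main__":
    sys.exit(main())
```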

Now, while the tech challenges were exhilarating, my economist side yearned for more. Under Professor Kanika Mahajan’s guidance, I ventured into constructing inflation indices that could offer an invaluable gauge of food prices. These weren’t just month-to-month snapshots but a daily deep dive, enabling real-time insights for market interventions. However, my true moment of relief and satisfaction didn’t come with the tool’s launch. Instead, it arrived months later, when a colleague inquired about accessing the index for their course’s final paper!
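For intuition on what such an index involves, a fixed-weight (Laspeyres-style) calculation over daily prices might look like the sketch below; the commodities, weights, and prices are invented, and the actual methodology we used is not spelled out in this piece:

```python
# Invented base-period weights and prices (Rs/kg); purely illustrative.
weights = {"rice": 0.40, "wheat": 0.35, "onion": 0.25}
base_prices = {"rice": 40.0, "wheat": 30.0, "onion": 25.0}
daily_prices = {
    "2021-06-01": {"rice": 41.0, "wheat": 30.5, "onion": 27.0},
    "2021-06-02": {"rice": 41.5, "wheat": 30.0, "onion": 31.0},
}

def index_value(prices: dict[str, float]) -> float:
    """Fixed-weight index: weighted average of price relatives, base = 100."""
    return 100 * sum(
        weights[item] * prices[item] / base_prices[item] for item in weights
    )

for day, prices in sorted(daily_prices.items()):
    print(f"{day}: {index_value(prices):.1f}")
```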

Working at CEDA also reshaped my approach to collaboration. The backend experts preached optimisation. The frontend team explained the nuances of visualisation. The professors and the economists? They embedded the ethos of making data truly useful. I wasn’t a full-stack developer, nor did I have any experience with an end-to-end economics project, but within CEDA’s machinery I was a small cog, ensuring the smooth flow from raw data to insightful analyses.

Today, as I reflect on my time at CEDA, I see it as more than just an internship. As corny as it sounds, it really was a transformative journey through which I fused my passion for economics and technology. I left there with skills that permanently altered my perspective, making me see untapped potential in every dataset. Whether it’s unstructured data or inconsistent values, I now approach each with a keen eye, seeking narratives and correlations across every data point. In a way, CEDA has taught me to be both an economist and a coder, without compromising on either.


(Tanish Bafna, ASP’23, was an intern with the Centre for Economic Data and Analysis (CEDA) in Summer 2021.)
