
Open Data: Looking Beyond the Apps

The open data movement has gathered momentum across the world. Taking a cue from the United States, which launched its data.gov open data portal in May 2009, many national governments have taken similar initiatives to create their own open data sites. These include developed countries such as Australia, Canada, France, Germany, Italy, the Netherlands, and the UK, as well as emerging nations such as Brazil, India, Indonesia, and Russia. Most of these sites were launched between 2010 and 2012.

India launched its open data site, data.gov.in, in September 2012. After a slow start, it has picked up momentum and today offers more than 2,500 datasets. A dataset is a table of data on a particular area. It could be as large as crop-wise, district-wise production figures for the entire country over the last 30 years, or as narrow as exports of a particular item to different global regions in a single year.

The US opened up government data as part of President Obama’s open governance promise, while the first Federal CIO, Vivek Kundra, the person behind implementing the initiative, called upon individuals, groups, and commercial companies to use open data to build innovative apps that would solve citizens’ problems. Kundra consistently championed building apps and even prophesied that in the coming years there would be an “explosion of apps” based on open data.

Since then, these two attributes—transparency and citizen apps—have become the de facto objectives of government open data initiatives across the world. While the developed world has taken to both these objectives, the emerging countries have focused more on the citizen app side, for obvious reasons. Transparency is a very lofty objective to achieve in these countries just by releasing some datasets, when other governance frameworks are not ready.

While both these are worthy expectations to have from government open data initiatives, what is a little worrying is that these objectives have come to define open data priorities and policies in many countries.

Take the apps expectation, for example. Globally, the role of app creation from open data has been so overemphasized that many governments try to measure the effectiveness of their open data programs by the number of apps developed on the data made available. That is a misplaced expectation for two reasons. One, data can improve citizens’ lives in many ways beyond apps. Two, it is difficult for governments to track all the apps created. Look at the US data.gov site: though there are more than 75,000 datasets, only around 350 citizen-developed apps are shared on the site.

Apart from misplaced expectations (and disappointments because of not meeting those expectations), the apps expectation has also resulted in misplaced priorities and policies governing open data.

Here are some of the skewed policies governments have followed because of the overemphasis on the apps part of open data.

Not measuring the efficiency accrued to the economy. Open data initiatives put important government information in the public domain, easily accessible to all. Very often, similar information is collected separately by various others (academic researchers, commercial organizations, other government bodies and agencies) for their own requirements, duplicating the effort. In other words, it is an inefficient use of time and resources.

Open data, by eliminating—or at least minimizing—the need to duplicate that effort, makes the whole economy far more efficient. This is difficult to measure in the short run, but it can be measured over time. I have never heard any open data evangelist talk about this anywhere.

Further, if governments realize this, they could cooperate with other stakeholders, and data collection and processing could be optimized to meet the requirements of more of them. In the future, the cost could even be shared. This can lead to far more efficient collection and processing of basic information and can even enhance data quality.

Limited outreach. The overemphasis on the apps aspect has created a misplaced priority in terms of outreach. In most countries, governments’ outreach programs are directed at the tech/app-builder community, with some tech-savvy NGOs/advocacy groups joining in. The entire open data discussion is restricted to these three communities: government, developers, and NGOs/advocacy groups. Major stakeholders such as the media, market researchers, and academic researchers, who could play an important role in revealing the latent value in open data, are left out. Even if they do show an interest, they are often scared away by the technical lingo that dominates these discussions. That is a loss for the cause of open data.

In an online conversation hosted by The World Bank on Open Data for Poverty Alleviation, I raised this point. Tim Davies of Practical Participation agreed, and had this to say:

I think there is often a failure in open data capacity building to think about the consultants, analysts, researchers and so-on who might be engaged as users of data, and who will provide bespoke value added services on top of it (hopefully realizing social as well as economic value).

Restrictive data formats. Many government agencies implementing open data focus all their attention on obtaining/creating datasets in machine-readable formats—a direct result of working backwards from apps. While a lot of time and energy is wasted on conversion and cleaning, many good, structured datasets that are not in machine-readable formats never make it to the list of published datasets. That is a big loss.

True, machine-readable formats do make life easier for everyone, but ignoring human-readable formats is the other extreme. Open data is not defined by any format. Perhaps the implementers of data portals should take a middle path that encourages machine-readable formats but does not leave out human-readable formats such as PDF completely.

Too much emphasis on datasets in consumer-interest areas. The overemphasis on citizen apps puts undue pressure on the managers of data portals to obtain more and more datasets that are of direct interest to end consumers, and hence good data to build apps on. So, while a hospital list or a crime dataset is cheered, crop production or export data is often dismissed as “useless information dumped by government.” While it is true that data of consumer interest can be used to create apps instantly, data on agriculture and meteorology, analyzed by experts using the right tools, can have a far broader and longer-term impact on the lives of millions of citizens. Such analyses could help maximize agricultural production, avoid big disasters, or impart the right skills to unemployed youth, even if they never become sleek apps.

Slowly but surely, the constraints of associating open data too closely with apps and pre-designed visualizations are being realized. Mike Gurstein, a leading voice on open data, argued this in his blog:

But why shouldn’t we think of “open data” as a “service” where the open data rather than being characterized by its “thingness” or its unchangeable quality as a “product”, can be understood as an on-going interactive and iterative process of co-creation between the data supplier and the end-user; where the outcome is as much determined by the needs and interests of the user as by the resources and pre-existing expectations of the data provider?

Though Gurstein’s explicit question is about the rationality of deciding outcomes by the pre-existing expectations of the data provider, the logic can be extended to ask why outcomes should be based on the pre-existing expectations of the app providers. In most cases, app providers do not have much extra insight into end users’ needs.

In the end, it must be pointed out that open data is about making information work for the betterment of society—making citizens’ lives convenient, creating the basis for decisions at a macroeconomic level, making the economy and business ecosystem more efficient, and, yes, minimizing risk. It is not about technology; technology is a very handy tool, though.


Filed under Open Data, Policy & Regulation, Technology & Society

The Realities of Twitter Democracy!

In September 2011, as the then editor of Dataquest, I wrote an editorial, The Opportunities and Threats of Facebook Democracy. Dataquest was one of the first publications to do a cover story (in April that year) on how social media was effectively used in the fight against corruption, and had celebrated the new power social media had given to the common people. In the editorial, however, I warned leaders and policy makers against attaching too much importance to the voice emanating from social media. My reason, of course, was the low penetration of social media. In a large, diverse democracy, I argued, jumping to conclusions based on what a small section of people belonging to a particular socio-economic class says is a potentially dangerous, even suicidal, thing to do.

The reason I called it Facebook Democracy was that a lot of the campaign by India Against Corruption was actually carried out on Facebook. It was the main mobilization platform.

Since then, Twitter has been used very effectively by politicians to drive their messages. Many politicians and political parties have taken professional help for the purpose. All of us know the power of the #pappu and #feku campaigns. While the penetration of Twitter is still minuscule compared to the size of the Indian electorate, some politicians have managed very large followings, going up to more than a million. And there are at least ten Indian politicians on Twitter with more than a lakh followers. Considering that not more than 100 million Indians are online, those numbers are not unimpressive.

Unimpressive they may not be. But as it turns out, most of these followers are fake.

Status People, a social media management platform maker, provides a way to check your (and others’) fake followers. I checked the fake followers of the top ten Indian politicians on Twitter, ranked by number of followers.

And can you imagine what the average looks like?

It is 59%.

That is, as much as 59% of the followers of these politicians on Twitter are fake. And typically, the bigger the number of followers, the bigger the percentage of fake followers, though there are exceptions.

Here is the table.

Politician Twitter Handle Total Followers Fake Followers (%)
Shashi Tharoor @shashitharoor 1756468 62
Narendra Modi @narendramodi 1560092 65
Dr Manmohan Singh @pmoindia 538323 55
Sushma Swaraj @sushmaswarajbjp 447766 52
Arvind Kejriwal @arvindkejriwal 314614 54
Omar Abdullah @abdullah_omar 274937 54
Subramanian Swamy @swamy39 165408 42
Ajay Maken @ajaymaken 151118 55
Derek O’Brien @quizderek 149448 38
Varun Gandhi @varungandhi80 118728 52

The numbers are as on 1st May 2013.
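One way to arrive at the 59% figure from the table above is a follower-weighted average (a plain average of the ten percentages comes to about 53%); here is a quick sketch, using only the numbers in the table:

```python
# Follower counts and fake-follower percentages, as in the table above.
politicians = {
    "@shashitharoor": (1756468, 62),
    "@narendramodi": (1560092, 65),
    "@pmoindia": (538323, 55),
    "@sushmaswarajbjp": (447766, 52),
    "@arvindkejriwal": (314614, 54),
    "@abdullah_omar": (274937, 54),
    "@swamy39": (165408, 42),
    "@ajaymaken": (151118, 55),
    "@quizderek": (149448, 38),
    "@varungandhi80": (118728, 52),
}

# Plain average of the ten percentages.
simple_avg = sum(pct for _, pct in politicians.values()) / len(politicians)

# Average weighted by each politician's follower count.
total_followers = sum(f for f, _ in politicians.values())
total_fake = sum(f * pct / 100 for f, pct in politicians.values())
weighted_avg = 100 * total_fake / total_followers

print(round(simple_avg, 1))  # 52.9
print(round(weighted_avg))   # 59
```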

And here are some realities.

  • Narendra Modi, the potential PM candidate of the BJP, heads the list in terms of percentage of fake followers, with 65% of his followers being fake.
  • As many as 8 of the 10 in this list have more fake followers than genuine ones. Derek O’Brien and Subramanian Swamy have the lowest percentages of fake followers in this list.

What Does This Mean?

This, of course, does not suggest that politicians are deliberately creating fake profiles/followers. And since there is not much to choose between the different parties, this is not a political statement. In fact, many politicians themselves would be shocked to know this.

For that matter, there is not much difference between politicians and other celebrities when it comes to the percentage of fake followers. I checked a couple of them. In the case of Amitabh Bachchan, 73% of his Twitter followers are fake. For Shah Rukh Khan, the number is 70%. But for celebrities, it is about reaching out to fans, so it does not matter how many of those followers are real.

For politicians too, it is a great platform to get their message across and engage with the media and at least a certain section of people who use the medium. The problem begins when their PR managers try to make us believe that they are great leaders because of their large followings. That is when we get it completely wrong.

In fact, fake followers are just one part. The same platform, Status People, also measures how many followers are inactive. For each Twitter profile, it divides the followers into three parts: fake, inactive, and good. When you take just the followers it terms good (real and active), the total drops drastically. Here is the same list of politicians with their “good” followers.

Politician Twitter Handle Good % Good Followers
Shashi Tharoor @shashitharoor 10 175647
Narendra Modi @narendramodi 10 156009
Dr Manmohan Singh @pmoindia 16 86132
Sushma Swaraj @sushmaswarajbjp 16 71643
Arvind Kejriwal @arvindkejriwal 14 44046
Omar Abdullah @abdullah_omar 13 35742
Subramanian Swamy @swamy39 21 34736
Ajay Maken @ajaymaken 12 18134
Derek O’Brien @quizderek 22 32879
Varun Gandhi @varungandhi80 13 15435

The numbers are as on 1st May 2013.
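The “good follower” counts in this second table appear to be simply the good percentage applied to the total follower counts from the first table; a quick check for three of the handles:

```python
# Total followers from the first table; "good" percentages from the second.
totals = {"@shashitharoor": 1756468, "@narendramodi": 1560092, "@varungandhi80": 118728}
good_pct = {"@shashitharoor": 10, "@narendramodi": 10, "@varungandhi80": 13}

# Good followers = good % of total followers, rounded to the nearest whole number.
good = {h: round(totals[h] * good_pct[h] / 100) for h in totals}
print(good)
# {'@shashitharoor': 175647, '@narendramodi': 156009, '@varungandhi80': 15435}
```

The results match the table, which is what turns Tharoor’s 1.75 million followers into roughly 1.75 lakh active ones.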

So, in effect, Shashi Tharoor’s active followers number just 1.75 lakh, not 1.75 million. The prime ministerial candidate Narendra Modi has just 1.5 lakh, not 1.5 million. Varun Gandhi has just 15,000-odd active followers.

In short, these numbers denote their actual sphere of influence. Except for Tharoor and Modi, the numbers are in the thousands, in a country of a billion. And when you combine this with the fact that Twitter reaches only a certain class of people, it follows quite logically that extrapolating the influence/opinion of Twitter to the real world is not a great idea. Not yet.


Filed under Digital Economy, New Governance, Policy & Regulation, Social Media, Technology & Society

How Realistic is Chidambaram’s ATM Promise?

The Union Budget for 2013-14, presented by India’s finance minister, P Chidambaram, has been thoroughly analyzed by analysts, the media, and economists. Many have pointed out the fine print, and there is plenty of analysis on what it would do to the Indian economy, different sectors, and different sections of our demographics.

But in all these discussions, which I have eagerly followed, I am yet to come across any comment on one of his promises: that every public sector bank branch would have an ATM by March 2014. This is what the FM said in his budget speech (see section 86):

Financial inclusion has made rapid strides. All scheduled commercial banks and all RRBs are on core banking solution (CBS) and on the electronic payment systems (NEFT and RTGS). We are working with RBI and NABARD to bring all other banks, including some cooperative banks, on CBS and e-payment systems by 31.12.2013. Public sector banks have assured me that all their branches will have an ATM in place by 31.3.2014

I know it is neither as serious a matter for economists as the current account deficit, nor as interesting for everyone as an all-women’s bank. It does not impact as many people directly as the tax slabs; neither does it have enough controversy in it to deserve comments from politicians.

Yet this part of the speech caught my attention as I listened to it live on TV. Being a little familiar with the current numbers—thanks to my twin interests, payment systems and data journalism (many of my tweets are around these numbers)—the target seemed a little too ambitious to me.

So I extracted some numbers and did a quick analysis. Here is what the FM’s promise translates into.

By the end of March 2012 (that is, the end of FY12), India had 67,466 PSU bank branches. That may not be such a huge number in the context of India’s population. But the number of ATMs attached to these branches (called onsite ATMs in Indian banking parlance) was much smaller: all PSU banks together had only 34,012 onsite ATMs. That number, of course, had increased to 36,767 by December 2012.

The public sector banks have, on average, added a little more than 3,500 branches per year in the five years leading up to FY12. So even by a conservative estimate, the PSU banks are likely to have no fewer than 72,000 branches by the end of March 2014—the FM’s reference date for all branches having an ATM.

So, going by the current numbers, 35,233 onsite ATMs need to be added between 31 December 2012 and 31 March 2014 (15 months) for all PSU branches to have an ATM. That is almost a doubling (96% growth, to be precise) of the onsite ATM base in PSU banks.

Do you think that is realistic? Especially when you consider that between March 2007 and March 2012, they added 23,723 onsite ATMs. And there is no sign of major acceleration: in the nine months after that, between March 2012 and December 2012, they added only 2,755 onsite ATMs.

So there are only three possibilities. One, I am terribly wrong somewhere. Two, something is happening inside that we don’t know about. Three, the FM simply got carried away without caring too much about being realistic. After all, it is an election budget.

The first possibility is inconsequential. The second possibility calls for a celebration.

The third possibility is a dangerous proposition. I had thought that whether the Budget is good or bad in a given year, at least the basic arithmetic gets done.

There is one more possibility. Maybe the FM was wrong, but only technically. Maybe he meant that for every PSU bank branch there would be an ATM, that is, the number of PSU branches and the number of PSU ATMs would be the same, irrespective of where those ATMs are located. If we go by that number (total ATMs, onsite and offsite put together), the PSU banks have 63,739 ATMs. That means in the next 15 months, going by the same estimated number of branches (72,000), they need to add 8,261 ATMs: slightly aggressive going by the last five years’ numbers, but not exactly unrealistic.
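The arithmetic behind both readings of the promise can be checked in a few lines; note that the 72,000-branch figure is the conservative projection made above, not an official number:

```python
# Figures cited in the text above.
onsite_dec12 = 36767           # onsite ATMs of PSU banks, December 2012
total_atms_dec12 = 63739       # onsite + offsite PSU ATMs, December 2012
branches_mar14_est = 72000     # projected branches by March 2014 (~3,500 added/yr)

# Reading 1: an ATM physically in every branch (onsite).
onsite_gap = branches_mar14_est - onsite_dec12
growth_pct = 100 * onsite_gap / onsite_dec12
print(onsite_gap, round(growth_pct))  # 35233 ATMs in 15 months, ~96% growth

# Reading 2: as many ATMs (onsite + offsite) as branches.
total_gap = branches_mar14_est - total_atms_dec12
print(total_gap)                      # 8261 ATMs
```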

So, the FM’s speech should have read:

Public sector banks have assured me that for each of the branches that they have, they will have one ATM in place by 31.3.2014

And that is no less laudable a goal. Since the FM talked about ATMs in the context of financial inclusion, how does it matter whether the ATM is “in the branch” or anywhere else?


Filed under Banking, Digital Economy, Inclusive India, Indian Economy, Technology & Society

Can e-Literacy and Illiteracy Co-exist?

In the midst of a slew of big-bang reforms announced by a recharged government in the last few days, a small but crucial cabinet decision has escaped everyone’s attention: the cabinet has approved the National Policy on Information Technology 2012.

And those who have covered it have highlighted either the big numbers—targets of a three-fold increase in the IT industry’s size, from $100 billion to $300 billion by 2020, and the creation of a 10-million-strong additional ICT manpower pool—or the more ideological stances, such as the commitment to accessibility, open standards, and open technologies.

One point that has gone largely unnoticed is the goal of having at least one e-literate individual in every household. On the face of it, it is very well-intentioned. Unlike IT industry size and open standards, this is something that, when achieved, would benefit the common people directly. As more and more government services become available electronically, greater comfort in accessing those services directly, without the help of any middleman, will not just be more convenient for common people; it will give them a greater sense of power.

But there are many questions that need to be answered. Unlike on a lot of other points, the policy document does not go into any more detail on this.

So, what is e-literacy? How do you define it? How do you measure it? It is a laudable idea, but is it practical as a goal, especially in a country with such a high illiteracy rate? What are the broad possible paths toward such a goal, even if we do not have exact answers to all these questions right at the beginning?

(To set expectations right: I am not really trying to answer these questions, but raising them to set a broad agenda for discussion.)

For one thing, it is good that the policy has used the phrase e-literacy and not the dated term computer literacy. We have gone past the era of the computer; “e” is no longer synonymous with computers.

But that very fact also means that we have to start with a basic definition. The definition of e-literacy is still vague. In fact, the term more commonly used in international forums is “digital literacy”, which, I believe, by and large represents the same idea, as opposed to somewhat restrictive terms like “computer literacy”, “media literacy”, or “internet literacy”.

The simplest definition of digital literacy is, I believe, the Wikipedia one—the ability to locate, organize, understand, evaluate, and analyze information using digital technology. It involves a working knowledge of current high technology and an understanding of how it can be used.

The question is how to create measurables and action plans, and how to monitor progress. Going by international practice, the approach has mostly been either to embed digital literacy in traditional education or to run small, integrated programs. Both could be effective, but the first approach is restrictive, as it excludes a large part of the population (though not impractical, considering the goal is one e-literate individual per family). Small, integrated programs are not scalable in a country like India, and their progress is difficult to measure.

The challenge before India is that one out of every four people is illiterate. Going by the latest Census (2011) figures, the average household size in India is between 4 and 5. In pure arithmetic terms, this means we have to make roughly one-fourth of the population e-literate. However, since comfort with digital technologies and the Internet is already fairly high among a section of people in urban areas, the task of making at least one person per household e-literate is far more challenging than the raw number suggests.

I believe RBI’s National Strategy for Financial Education can be a good reference to start with: it addresses the question holistically, some of the challenges are similar, and the plan takes Indian realities into account. In fact, it is not a bad idea to find the synergy between the two plans, because at the core of both lies a desire to achieve inclusion.

While today no social inclusion is possible without financial inclusion, tomorrow the same can be said about digital inclusion. And without digital literacy, there cannot be digital inclusion.

If we are starting now, we must take a holistic approach that takes into account the socio-economic factors while formulating any plan of action for e-literacy.

I am happy that the government has considered this to be important enough to include it as an objective in the National Policy on IT.


Filed under Digital Economy, Inclusive India, Indian Economy, Technology & Society

Sibal’s Supercomputer Dream: Putting It in Perspective

So, Kapil Sibal has decided on his new obsession for the next few months. This time, it is not a still-cheaper tablet or, for that matter, a new mobile operating system to challenge Apple or Android. The lawyer-turned-minister has set his sights on nothing less than building the fastest supercomputer on earth. Sibal has reportedly written to the Prime Minister detailing a plan, prepared by the state-owned research and development outfit Centre for Development of Advanced Computing (C-DAC), to achieve this feat by 2017, at a cost of some Rs 4,700 crore.

Critics may link it to speculation about his possible removal from the HRD ministry in the imminent cabinet reshuffle. But to be fair to Sibal, reshuffle or no reshuffle, he is never short of big ideas.

Kapil Sibal is a dreamer. That is a good thing. Few politicians of his age are. And we surely need a few dreamers.

But that is also his problem. He still has the hangover of his extremely successful past as a lawyer, and often has excessive confidence in his own ideas and abilities. So even when his basic intention is laudable, it is seen as maverickism. While he has the ideas, the dreams, the passion, and a rare sincerity of approach, he lacks the vision to realize those dreams. All his dreams, from the Right to Education to the low-cost laptop, are low on vision. They lack a practical approach (that is, they do not take the ground realities into consideration), but more importantly, they are not aligned with the shared vision of the government. So, depending on who is talking, these ideas get dubbed anything from wishful thinking to megalomania, and a lot of things in between.

Take the Aakash tablet. Using affordable technology to enhance education quality is a great dream. The government stepping in to help the private sector in whatever way possible to make that happen is also a good approach. But why should the government align itself with a single brand? A single project? No one could explain this to Mr Sibal.

Now comes the supercomputer dream. While we don’t know the exact details of the “blueprint”, based on whatever the media has reported, it already sounds flawed. Here is why:

1. The focus is purely on speed. It is a petaflops-scale supercomputer that the minister and C-DAC want to build; the application is secondary. While performance is not a bad objective to have, spending Rs 4,700 crore just to be on top of the table sounds a little too much. I am still refraining from dubbing it megalomania, but I will not quarrel with those who do.

2. Why do the vision and the nuts-and-bolts have to come together? Why should it be assumed that C-DAC will build it? The same question was asked in the case of Aakash. This is no bias against C-DAC; they have great capability, though for those interested in facts, C-DAC’s Param has not featured among the top 500 supercomputers in the last two years. But wouldn’t a thrust on high performance computing through policy initiatives be a better way to encourage the building of such supercomputers than adopting a single project?

3. The focus is entirely on the speed of one supercomputer. What India needs is many such supercomputers across all aspects of our economy: from oceanography to identity verification, from drug research to weather forecasting. Just for the record, in the last six-monthly list (June 2012) of the world’s 500 fastest supercomputers, India had just 5, up from 2 in the November 2011 list. In contrast, China had 68. And we thought China only scores on physical infrastructure while India is the IT superpower!

In 2007, China was just slightly ahead of us. In November 2007, China had 10 of the world’s fastest supercomputers, while India had 9. In June that year, China had 13, while India had 8.

See how we compare now.

At one time, China was just a little ahead of India. Now, it has overtaken Japan to take the No 2 position.

China comparisons apart, India’s supercomputing journey has not been particularly laudable. Based on the Top 500 data, India’s share of the fastest supercomputers in the list has not really increased. The average number of Indian supercomputers in the Top 500 between 2003 and 2007 was 6.8 per list, with as many as 11 featuring in June 2006. Between 2008 and 2012, that average came down to 4.4 per list, with the highest being 8, in November 2008.

INDIA’S PERFORMANCE

MONTH NO OF SYSTEMS IN THE LIST RMAX (TFLOPS) TOP RANK ORGANIZATION
Jun 2012 5 303.9 58 CSIR
Nov 2011 2 132.8 85 Tata Sons
Jun 2011 2 132.8 58 Tata Sons
Nov 2010 4 132.8 47 Tata Sons
Jun 2010 5 132.8 33 Tata Sons
Nov 2009 3 132.8 26 Tata Sons
Jun 2009 6 132.8 18 Tata Sons
Nov 2008 8 132.8 14 Tata Sons
Jun 2008 6 132.8 9 Tata Sons
Nov 2007 9 117.9 4 Tata Sons

Even the top performance has not seen any great improvement. In November 2007, the world’s fastest supercomputer was about four times faster than India’s fastest. In June 2012, that ratio had increased to 54. Yes, the fastest supercomputer on earth was 54 times faster than the fastest in India.
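The ratios can be checked against the table above. India’s top Rmax figures come from the table; the world No 1 figures (BlueGene/L at about 478.2 TFLOPS in November 2007, Sequoia at about 16,324.8 TFLOPS in June 2012) are taken from the Top500 lists, not from this article:

```python
# India's top Rmax from the table above (TFLOPS); world No 1 Rmax values
# are assumed figures from the corresponding Top500 lists.
india_nov07, world_nov07 = 117.9, 478.2
india_jun12, world_jun12 = 303.9, 16324.8

print(round(world_nov07 / india_nov07, 1))  # 4.1 -- about four times faster
print(round(world_jun12 / india_jun12))     # 54
```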

Indian computing has not been able to keep pace with the world.

All this is not to suggest that Indians do not have the capability to build a fast supercomputer, or that the dream of building one is misplaced.
Far from it: India needs a lot of effort in high performance computing so that Indian companies and organizations can build many such supercomputers for application in all areas. The country badly needs that. And if the fastest one happens to be Indian, that would be the icing on the cake. The icing, however, should not be confused with the cake!


Filed under Digital Economy, Technology & Society, Uncategorized

Do We Need a Wikipal?

Politically speaking, the decision by Anna Hazare to disband Team Anna and take up the fight through electoral politics is both good news and bad. Good, because it upholds the supremacy of electoral democracy—which was being pooh-poohed by the likes of Arvind Kejriwal just a few months back. But what I feel bad about—and I am no supporter of M/s Bedi and Kejriwal—is that an experiment to take an alternate route has failed. Yes, despite the power of Facebook and Twitter to support it this time around, as many of us never forgot to add. But as I have pointed out many times, including in this blog (Digital divide is now political…), the reach of social media—or, for that matter, the Internet—is too limited to fight a successful battle against the government and the system.

So, do I mean to say that we have little hope—that as long as we have a democracy, and as long as we, as a people, are not completely honest, we will have to tolerate this large-scale, systematized corruption?

Not necessarily. But if we are really to find a solution to the problem of corruption, it has to be by taking a different approach, one that fundamentally changes certain key parameters, not by taking the same path again and again.

The big proposed entity called the Lokpal—thought by Anna and his supporters to be the panacea for all ills—is nothing but yet another costly addition to an already overburdened system. If the Legislature, the Judiciary, the Executive, and the Press could not do it, by what logic do we expect yet another new body based in Delhi to eradicate corruption? The demand has been for more power for it, but no one has explained how more power by itself will translate into more effectiveness in checking corruption. After all, its members would be people from amongst us. Why should one believe they would be more honest than you and me—or our politicians and bureaucrats?

The problem is that we are seeking a solution in the old, centralized model, with a set of people having absolute power over everyone. Just that, instead of being called MPs or ministers or secretaries or editors, they would be called Lokpals.

What if we took a fundamentally new approach? Instead of trying to check corruption by instilling the fear of punishment after the act is done, what if we ensured that corruption is minimized by making it harder to do, i.e. by instilling the fear of getting caught while doing it? That can be done by bringing in transparency.

Such transparency is possible only when there is easier access to information for a wider section of society, ideally the general public.

Two fundamental principles are cornerstones of this approach: one, instead of centralized systems, we go for decentralization; and two, instead of giving real power to people, we give it to a computer system.

Decentralization does not necessarily mean chaos. Wikipedia—despite whatever limitations it may have—has shown us how the collective power of people can be credible and dynamic at the same time. But to ensure that we prevent mobocracy and chaos, there have to be defined rules and processes (as in Wikipedia) and a massive information infrastructure to store, forward, and process information. That requires a powerful (ideally distributed) computer system.

As everything becomes available to a wide set of people, the system will ensure that few dare to violate the rules. A person may not fear another person; but everyone fears the public.

We have seen this happen in cricket. The third umpire—though there is a person whose name is associated with the role—is actually a computer. The replay is shown on a huge screen for the world to see. And technology ensures that there is no intentionally wrong decision. The same principle will work here.

In a technology-enabled system, the information itself will have the power to make everyone exercise restraint. A huge computing platform—let's say a supercomputer—can, on a continuous basis, monitor for exceptions. There can also be ways and means for whistle-blowers to lodge anonymous complaints. Initially, people may misuse it to trouble opponents, but soon, the system will take care of itself: if the processes and technology are good, a false complaint will quickly be exposed as a bluff. By moving from an investigation mode to a prevention mode, the system itself will become less corrupt. There will be experts, advisors, information analysts—from any walk of life. But the power will not lie with them; it will lie with the computer in particular and the whole system in general. The system would be fault tolerant and designed to learn from experience.

Of course, any such system can be effective only when there is a lot of information generated electronically. That means a lot of government processes need to be automated. Thankfully, that is increasingly happening. That will supplement the reactive mechanism of the Wiki model by a proactive check on processes and exceptions. In such a scenario, RTI would be seamless and would be like a Google search.

Instead of instilling fear of punishment after the corrupt act is done, it would instill fear of exposure during the act itself. So, not only would one get caught, one would get caught before deriving any benefit from it.

Such a system will ensure the following:

  • Any exception is caught and reported, almost in real time
  • All information is stored in multiple locations, so anything that cannot be brought to public notice in real time because of the sensitivity of the issue can still be exposed in the future by the system. Remember WikiLeaks?
  • Speed and efficiency are ensured, in addition to transparency
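The exception-catching idea above can be illustrated with a small sketch. This is purely hypothetical: the record fields (`id`, `amount`, `approver`, `beneficiary`), the sanction limit, and the rules themselves are illustrative assumptions, not any real government schema—the point is only that publicly defined rules, applied by a machine over every record, can flag violations without any person holding discretionary power.

```python
# Hypothetical sketch: rule-based exception monitoring over payment records.
# Fields, threshold, and rules are illustrative assumptions, not a real schema.

SANCTION_LIMIT = 100_000  # illustrative per-transaction sanction ceiling

def find_exceptions(records):
    """Flag records that violate simple, publicly defined rules."""
    exceptions = []
    for rec in records:
        if rec["amount"] > SANCTION_LIMIT:
            exceptions.append((rec["id"], "amount exceeds sanction limit"))
        if rec["approver"] == rec["beneficiary"]:
            exceptions.append((rec["id"], "approver is also the beneficiary"))
    return exceptions

records = [
    {"id": "T1", "amount": 50_000,  "approver": "A", "beneficiary": "B"},
    {"id": "T2", "amount": 250_000, "approver": "A", "beneficiary": "C"},
    {"id": "T3", "amount": 10_000,  "approver": "D", "beneficiary": "D"},
]

for rec_id, reason in find_exceptions(records):
    print(rec_id, reason)
```

Because the rules are code, not a person's judgment, every record is checked the same way and the flagged list can itself be published—which is the transparency argument in miniature.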

By making all citizens participate, we would give everyone collective responsibility, while rigorous processes, with technology underneath, would ensure that this does not lead to chaos. This will still not eliminate corruption, but it will make corruption far more difficult, thereby significantly reducing it.

What do we call such a system? Did you say Wikipal?

2 Comments

Filed under Digital Economy, New Governance, Policy & Regulation, Technology & Society

Will Aadhaar be the Same with PC as FM?

Well, if media reports are to be believed, P Chidambaram, the Union home minister, is all set to return as the finance minister. It does not sound too surprising, considering PC has been one of the best finance ministers that India has had in the recent past. What is more, his track record in the home ministry has been anything but spectacular. Not only has he failed to achieve much, his tenure has seen continuous friction between his ministry and the states. In short, his transfer from the finance ministry to the home ministry has been good neither for the economy of the country nor for its politics. So his return should be good news for most.

Except those strongly backing UID/Aadhaar.

His dislike of the project—or rather the way it is being rolled out—is well-known. Not only has he disagreed with UIDAI's way of collecting data, he has written to the prime minister multiple times complaining about it. It is at his insistence that the cabinet discussed, in January, the possible security loopholes in the way UID was collecting data and decided that while NPR and UIDAI would use the biometric data collected by each other, in case of discrepancies between UIDAI and NPR data, NPR would prevail.

Again, as recently as last month, he wrote to the PM that UIDAI was not cooperating with the Registrar General of India (RGI), which was working on the NPR. This is what Mint had reported, quoting from the letter.

“The decision of the cabinet is crystal clear and I am unable to comprehend the reluctance of UIDAI to allow the NPR camps and to accept the NPR data. I had taken these issues with Nandan Nilekani, chairman, UIDAI, dated 14.05.12. The home secretary (R.K. Singh) has also discussed the issue at length with the UIDAI director general and mission director. However, despite our best efforts, issues remain unresolved,” he said.

It is difficult to believe that once he takes charge of the finance ministry, his opinion about the Aadhaar project will change drastically.

The question is: will it impact the effectiveness of UIDAI?

While it is true that UIDAI is part of the Planning Commission, the reason it became the government's flagship program so soon is the strong support it received from the former finance minister Pranab Mukherjee. Not only did Mukherjee generously provide for the funding of the project in three of his budgets, he made it the basis (aadhaar) of most of the government programs. There were nine references to Aadhaar in Mukherjee's budget speech this year. Whether it is subsidy being credited directly to beneficiaries' bank accounts, a more efficient public distribution regime built on a PDS network, or disbursement of government payouts—such as MG-NREGA payments, pensions and scholarships—the finance minister seemed confident that Aadhaar could be leveraged as a platform to deliver. The National Payments Corporation of India (NPCI) even created the Aadhaar Payments Bridge System.

In short, while the UIDAI chairman Nandan Nilekani created a new-generation platform in the form of Aadhaar, it is Mukherjee who was instrumental in making it the flagship platform for all developmental activities in India. So much was Mukherjee's liking for Nilekani that he made him head some half a dozen task forces, groups, and committees, entrusting him with most of the related reforms. I wrote about it in an earlier post in this blog, The Importance of Being Shri Nandan Nilekani. Mukherjee had even gone to the extent of openly backing Nilekani on PDS reforms when the food ministry was ignoring the recommendations of a committee headed by him.

From there, it would be quite a change for Aadhaar/Nilekani if Mukherjee is succeeded by someone who so recently complained so strongly about the project to the prime minister, naming its chairman.

Things would probably have been a little different had the UIDAI been an independent statutory body. A proposal to make it one was rejected by a Parliamentary Standing Committee headed by Yashwant Sinha a few months back. Interestingly, in its report, the Committee had extensively quoted news reports about the home ministry's objections to, and criticism of, Aadhaar to justify its decision.

Both Chidambaram and Nilekani have proven track records. The country will benefit if they work in tandem. Another conflict in the government is the last thing we want in this time of apparent policy paralysis. Not only would it make another fresh and fairly successful experiment in government go astray, any drastic change in the path would send very wrong signals to the international community. After the 2G decision and GAAR, the last thing the country would like to see is going back on UID plans.

1 Comment

Filed under Digital Economy, Inclusive India, Indian Economy, New Governance, Policy & Regulation, Technology & Society