Why did we change the indicators?

Times change and the FSI needs to adapt to those changes. TJN undertook an open consultation on the FSI methodology with its stakeholders between January and March 2016. The survey was answered by 136 people, including experts, users and officials of ranked jurisdictions from 49 different countries, among them developing countries, OECD member states and small island financial centres. They answered questions on which indicators to keep (including those that do not feed into the secrecy score, e.g. whether tax evasion is a predicate offence), which to add and which to modify. We assessed all the responses, while also considering whether we had the resources to undertake the new analyses and whether reliable sources existed. For example, while we sought to include an indicator on whistleblower protection, a lack of comparative data kept us from adding it this time round as a key financial secrecy indicator (KFSI), meaning that in the 2018 FSI it has no influence on a country's secrecy score.

There are many examples of why indicators changed and why new ones were added (you can see the new methodology here). Take beneficial ownership (BO) registration of companies. While this indicator had always been assessed, real progress only began in 2015. By 2018 it was no longer enough to check whether beneficial ownership of companies was registered or not: the advances of some countries, especially in the European Union, allowed us to differentiate, among those that do have BO registration, whether they use a low or a high threshold in the definition of “beneficial owner”, and whether they have a special clause for senior managers in cases where no beneficial owner is identified. Something similar happened with foundations, so that indicator became more detailed too. Likewise, more countries started to offer ownership information in open data format, and this had to be reflected in our “online” indicators, since open data (free, machine-readable, and so on) is better than information that is merely “free” (for instance, a free but non-machine-readable photo of a document). Another area where a lot has changed is automatic exchange of information. Back in 2015 we merely knew which countries had committed to implementing the Common Reporting Standard. Now in 2018 we know which countries have signed the Multilateral Competent Authority Agreement (MCAA), whether they chose voluntary secrecy (being listed under Annex A of the MCAA), whether they imposed extra conditions, and how many activated exchange relationships they have in place.
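To make the idea of a more granular indicator concrete, here is a minimal, purely illustrative sketch of how a graded assessment of BO registration could be turned into a partial secrecy score. The categories, thresholds and numeric values below are hypothetical and are not the FSI's actual scoring rules; they only show how a binary check (registered or not) can become a graded one.

```python
# Purely illustrative: hypothetical grading of a beneficial ownership (BO)
# registration indicator. The values below are NOT the actual FSI scoring
# rules; they only show how a binary check can become a graded assessment.

def bo_registration_score(registers_bo: bool,
                          threshold_pct: float | None,
                          senior_manager_fallback: bool) -> float:
    """Return a secrecy score between 0 (transparent) and 100 (fully secretive)."""
    if not registers_bo:
        return 100.0                  # no BO registration at all
    score = 0.0
    if threshold_pct is not None and threshold_pct > 10:
        score += 50.0                 # high ownership threshold weakens the register
    if senior_manager_fallback:
        score += 25.0                 # senior managers recorded instead of real owners
    return min(score, 100.0)

# Example: a register with a 25% threshold and a senior-manager fallback clause
print(bo_registration_score(True, 25.0, True))   # 75.0
```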

As for new indicators, we realised that the secrecy score wasn't covering all transparency risks. Partnerships, for example, while assessed in 2015, were not counted towards the secrecy score, even though they can be used to hide an individual's ownership just as companies can. Hidden ownership of real estate and of assets stored in freeports are further important aspects of financial secrecy which the indicators now cover.

Did we choose new indicators to manipulate the results?

Even if we wanted to, it would be technically impossible. The FSI is an enormous research project, the outcome of which is unknown before it is completed. Most decisions to change indicators were taken in 2016, and we publicly announced many new indicators at the beginning of 2017. To have known all the results beforehand, we would have needed to know how each of the 112 jurisdictions would be rated, even though a lot of the data wasn't available until later in 2017 (e.g. some EU countries took longer than June to publish their new BO laws, and the Global Forum published new peer reviews, one of the main sources for some of our indicators, as late as October 2017).

To give you some idea of the work involved in assessing even part of one indicator, take banking secrecy: we have to read at least 20-30 pages of a peer review report for each jurisdiction, and sometimes additional reports. We need to read legal texts and, if we have doubts, discuss among ourselves, check local laws directly and even consult local experts - for each of the 112 jurisdictions! It takes time. We only find out what the final ranking will look like at most three months before the FSI is published, and this time around only about eight weeks before (it takes us far more than a month to prepare the layout of reports, do cross-checks, write media material, etc.).

In order to manipulate the FSI ranking by adding or removing indicators, we would also need to account for the other component of the formula: the Global Scale Weight. We would have needed to know back in 2016 how countries' market shares of financial services for non-residents would change, data that the IMF only published in the course of 2017.
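For readers unfamiliar with how the two components combine: the published FSI methodology describes the index value as the cube of the secrecy score multiplied by the cube root of the global scale weight. The sketch below assumes that published formula; the input numbers are made up for illustration only.

```python
# Minimal sketch of how the two FSI components combine, assuming the published
# formula: FSI value = (secrecy score)^3 * (global scale weight)^(1/3).
# The inputs below are made-up numbers, not real FSI data.

def fsi_value(secrecy_score: float, global_scale_weight: float) -> float:
    """secrecy_score on a 0-100 scale; global_scale_weight as a market share (0-1)."""
    return secrecy_score ** 3 * global_scale_weight ** (1 / 3)

# A highly secretive but tiny jurisdiction vs. a moderately secretive large one:
print(fsi_value(secrecy_score=80, global_scale_weight=0.001))  # ~51,200
print(fsi_value(secrecy_score=60, global_scale_weight=0.200))  # ~126,300
```

The second example scores higher overall despite a lower secrecy score, which is exactly why knowing the indicators alone would not be enough to predict the ranking: without the IMF data behind the global scale weight, published only in 2017, the final positions could not have been anticipated.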