GSIM

This page contains an evolving chronicle of the latest desperate attempt of the international metadata circuit to develop a common reference model. This attempt is called “GSIM”, and has been around for some time now.

The HLG-BAS strategy group has suddenly realised that GSIM is a key project for the entire circuit. Since too little has happened since the project began, they are now attempting a “sprint”. That is desperation for you. The first sprint takes place in Slovenia, on 20 February 2012.

Version 0.1 of the GSIM was released in June 2011. It was a pathetic copy of a seven-year-old common model (from the COSMOS project), which was never intended as a common model for anything other than the COSMOS trials.

The first meeting of the GSIM project was in June 2010. In other words, after 1.5 years of not getting anywhere they have to use a “sprint” to try to get somewhere. This, despite claims about urgency, priority and “pace and passion” from the very start of the GSIM initiative.

What is the first and most important methodological failure of this GSIM sprint?

Answer: They have failed to evaluate and address the reasons why similar projects have failed before – and such projects have failed more than once; five, ten, or fifteen times, if we include all similar attempts by individual NSOs over the last ten years or so.

What they really should do is call a conference with the main experts from earlier attempts. This would include Bo Sundgren and Karlis Zeila; experts from Metanet, COSMOS, Neuchatel, the METIS framework, and SDMX; and experts from at least two NSO projects, for example the UK (2003) and Denmark (recent).

There is some new revealing information. The HLG-BAS has now given a reason for the earlier failures of GSIM. It can be found in the material provided for the first GSIM Sprint. They now admit that there has not been “enough power in the groups to make change really happen”.

“Power”? That is a strange word to use. Do they mean competence? Probably. Do they also mean real will to achieve results? Most likely!

My God, they should know that they will never achieve anything, since the main purpose of the international metadata circuit has always been not to deliver. Actual results would be a catastrophe for everyone involved. I mean, imagine if the gravy train suddenly disappeared. We cannot have that, can we?

And where does the new power supposedly come from? At the first GSIM sprint we find Klas Blomqvist from Statistics Sweden. Now, that is what I call being underpowered. And Jenny Linnerud? She has been at this since Metanet. She has had plenty of time to solve these issues. Has she delivered anything? No, she is the quintessential secretary, unable to be creative or take initiatives of her own.

In other words, the HLG-BAS has, already in its choice of “experts”, made certain that there will be no progress from the first GSIM sprint.

We can note another piece of revealing honesty. In the very same analysis, the HLG-BAS also admit that they may have exaggerated the degree of change in official statistics.

Now it is suddenly an open question if there is an evolution or a revolution. They even admit that the “product set” is stable. This was my immediate reaction to the Dutch talk about revolution and industrialization. My guess is that this blog has made an impact.

So, now the sprint has begun! (And it is described in a day-to-day blog).

After the first week, failure is staring them in the face, as we can learn from their own blog. They have realized that they started modelling without knowing what they were doing. By the way, this is what they were supposed to clarify by day 3. Instead, the only output after three days was a list of “success factors” that could be applicable to almost any large-scale systems development or standardisation exercise. Going nowhere fast…

The way that this failure has been expressed on the day-to-day blog is that the diagram that explains the HLG-BAS vision is insufficient. This is the one with “industrialization of statistics” in the middle and then four corners: “GSBPM”, “GSIM”, “Methods” and “Technology”. The GSIM sprint now claims that what they need is an additional high-level “model”. Now, that is confusion for you. Two things:

1. The HLG-BAS diagram is incompetent. You cannot mix a general concept with a product name. Instead of GSBPM there should be a concept similar to “Methods” or “Technology”, for example “Process model” or “Reference model”. (Not necessarily those concepts; it is a basic principle of conceptual work that I am trying to illustrate.)

2. If you have a “GSBPM” and are trying to develop a “GSIM”, and you feel that you are lacking something, what is it that you lack? This was discovered already in 2003 and is the very basis of this blog. It is the system, stupid! You do not need a more high-level model, you need a system description, i.e. a “GSIS” (Generic statistical information system).

By the way, this is exactly what the First Framework published on this blog is all about. That is where the term GSIS was launched for the first time (and has since been used by the Koreans, who, by the way, have sent representatives to the first GSIM sprint).

Put differently, to the detriment of the taxpayers, the UN and NSOs have blatantly disregarded the message from this blog, and also experiences going back at least nine years; and now they have to pay the price.

No, I take that back. You and I, as taxpayers, have to pay the price. For buffet dinners, hotel nights, weekend tourism, and happy back-slapping.

And then it happened! After I wrote the sentences above, the GSIM sprint admitted that the missing link was a system. Hallelujah!

In their final document they chose to call this a generic statistical production system. Fine, but not fine. GSIS is better, because the system in question is not necessarily just a production system. The term needs clarification, and any such clarification leads to the insight that GSIS is the better term.

Here is the irony. This was added towards the end of the sprint, and there was never any elaboration or concrete description of this system. Yet they happily spent two weeks modelling – yes, what? This production system? Maybe, but how could they do that if they did not have a more elaborated, concrete or specific system definition during the actual sprint?

One answer to this is that they have not been modelling at all. They have been shuffling around terminology from earlier models, and have ended up grouping it in a set of high-level boxes. These should really be high-level objects or object groups, but are nothing of the sort. Instead, they are aspects of the grouped concepts, i.e. not their essence as objects (conceptual, structural, etc. are not objects or object groups, but aspects of objects).

If we compare the new GSIM with the GSBPM it is also immediately clear that the sprint has failed to produce a high-level model that is compatible with the GSBPM.

In fact, it is quite simple to use the GSBPM to derive a set of high-level objects or object groups. This, the GSIM sprint has completely failed to do. (On the other hand, such an exercise reveals that the GSBPM has a strange and rather detached and lofty use of terminology.)

This is all very interesting, and this blog will actually soon provide a competing set of high-level GSIM objects (or object groups). A “Shadow GSIM”!

For now, let's reflect on the fact, often mentioned on this blog, that a generic statistical production system has existed for several years, and that at least Latvia and Cyprus have been running such a system in full operation. Two questions:

1. Why was there no mention of the Latvian system at the GSIM sprint?

2. How did the Latvians manage without a GSBPM, GSIM or sprint?

First, let me use a metaphor!

To solve the issues at hand you need to do two things: first get out of the forest, and then cross the river. A model such as GSIM, should it be successful, will still not solve the most important issues. It is too abstract and one-dimensional for that. To cross the river requires something more. However, GSIM could help the circuit find its way out of the forest. Has this happened?

No, what has happened is that the first GSIM sprint has collected everyone running around in the forest in one place, all grouped together. At the same time, they lack a map. All they have is a small clue, which they are still uncertain about. This clue is the term “generic statistical production system”.

So, lo and behold, these are the real results of the first sprint. Unification, in terms of flocking together, and a vague clue.

Of course, none of this is new. It has been mulled over and over again on this blog for some time now. It is the system, stupid!

Do you think these people will find the way out of the forest, to the river bank? Do you think they will be able to cross the river where everyone has floundered? (Including, by and large, the Latvians, precisely because they developed a generic statistical production system and not a GSIS.)

Stay tuned to this page, as it develops…

They have now started planning Sprint 2. This sprint will be in Korea.

Let us, meanwhile, do the work that they failed to do during Sprint 1: provide a high-level model that is compatible with the GSBPM.

If we try this we immediately realize that the GSBPM level 1 is not really a model for statistics. All the high-level terms are so general that they can be applied to almost anything that is collected, analysed and then archived – for example, stamps or dried flowers. Hence, it is very difficult – or very simple – to produce a high-level GSIM to match it.

Specified needs, a design, a build, a collection, processed collection, finalized outputs, disseminated outputs, archived outputs, evaluation and action plan.

There it is, the high-level GSIM model that matches the GSBPM! Worthless? Yes, pretty much, because of the level of generality. If nothing else, the use of the term “output” exemplifies how the GSBPM has failed to produce a high level specific to statistics, or indeed specific to anything. Output is a term used in the modelling language for process modelling, and therefore not a term that should appear in any one concrete process model.

Let us be a bit more kind. The next level is better terminologically. If we use it to derive a more specific high-level GSIM, what would we get?

High-level objectives, concepts, business case; survey design; survey build; collected data; finalized data files; finalized outputs; disseminated outputs; archived outputs; evaluation; action plan.

Does this make us any happier? No, because too much of this is of little importance to the concrete modelling of objects.

Another example of the lofty use of terminology in the GSBPM is sub-process 1.3, “Establish output objectives”. This process potentially contains the most important of all statistical objects, yet there is nothing in the process description that tells us what that object is.

Also, if we dive deeper into the GSBPM process descriptions we find a confused and incomplete use of terms to cover basic objects. For example, we can find the term microdata but not final observation register, and we find “statistics”, even though this term is never used in the names of the level 2 processes.

Clearly, the GSBPM and the GSIM are still substandard compared with several of the early models produced by Bo Sundgren. Models which, by the way, were applied by the Latvians when they developed the first generic NSO statistical production system.

Put differently, the GSBPM-GSIM so far is completely unnecessary. These issues have already been addressed and solved, a long time ago – although, nota bene, without solving the issue of finding truly generic models. The Latvian system is a generic production system, but its models are not sufficiently generic, complete and standards-compliant.

The second GSIM sprint has recently been concluded (two weeks in Korea, ending on 27 April). The “as it happens” blog turned out to be completely devoid of any substantial information. Apparently, the leaders are now attempting to protect the process from external criticism. I am not surprised!

Thus, there is little to say before they publish their “results”. Have they been able to find a way out of the forest, towards the river? Nobody knows! However, we were provided ample information about their buffet dinners and tourist excursions. (The real objective of the GSIM sprint.)

One point on their blog is worth mentioning. Apparently, they had problems deciding if the model should be for “communication” or on a par with the level of detail in SDMX and DDI. Of course, a real reference model must be on the level of those two, stupid!

This remains unclear because they have failed to define the system, and therefore also the objectives of their model, before modelling.

They have also published a slide about “industrialization” of statistics. This time the concept only means sharing software based on a common model. The initial use of this concept seems to have been dropped.

Do I have to say: “I told you so”?

A final question! If SDMX and DDI provide a basis for shared software, how can a model that is less detailed provide a similar basis?

The impression is that the whole GSIM process is becoming more and more unprofessional and glib. These people are now embarrassed by what they are doing (or, rather, not doing) and are trying to gloss over their mounting feeling of shame and inadequacy.

Lo and behold! Now we know what has been going on. The GSIM project has published the feedback on sprint 1. It was devastating, to say the least. With the exception of one or two self-interested sycophants, the general message was: we do not have a clue what you have been doing, and thus we cannot even evaluate it.

Take this example:

“What do you mean when you say “GSIM provides the information object framework for the complete GSBPM”?  That’s ‘motherhood and apple pie’: it sounds great and gives everyone a warm and fuzzy feeling, but at the end of the day it doesn’t really answer anything.”

Or this:

“Frankly, the picture contains some partial artefacts (randomly) selected from the statistical methodology and statistical process with different levels of detail (why just those?). The scheme mixes statistical service with a possible future classification of objects… Frankly said the scheme has no information value.”

And so it goes, on, and on, and on, in a large Excel spreadsheet.

In other words, sprint 2 was busy swallowing and trying to do something about this massive critique.

I feel like Obi-Wan. I have trained Luke Skywalker well. The community seems to be warming up to the same type of criticisms that I have presented on this blog for a long time. Business case! Clarity! Relevance! Compare with existing models! All this can be found in the feedback on sprint 1!

Of course, this blog is also impartial and fair. There are some positive things to be said about the results of sprint 2. The top level of the GSIM model is still a catastrophe. To have a high-level object called “information” must be an all-time low, even for the international metadata circuit. But! The next level is pretty OK. There is somebody involved here with a sense of elegance and economy, in fact even relevance. Unfortunately, the influence from destructive second-generation models is still there, which means that this is going nowhere.

Just as the top-level boxes were aspects rather than object groups, the division between “conceptual” and “structural” when describing data does not consist of objects or true concepts; these are theoretically misinformed architectural solutions to the issue of normalization. And, if you know what you are doing, they are also completely unnecessary in a real-life model.

The international metadata circuit needs to shake off the destructive influence of ISO/IEC 11179 and Dan Gillman.

In the same destructive vein, we can note that the “production system” approach seems to have been abandoned during sprint 2. This is logical, since it was the only real result of sprint 1.

More important than all this is something else. It now seems clear that the GSIM will not attempt to solve any real hard core modelling issues. So, it will be another worthless model that advances nothing compared to existing models. As before, the only real new development is the organizational focus and critical mass. That is interesting – for a very important reason:

If the GSIM fails, then it is final. One can no longer blame resources, lack of co-operation, organizational fragmentation, etc. This is it! One strike and you are out! The metadata circuit will not be able to bounce back from this one.

Let me conclude by updating the metaphor. The group has been assembled in the forest. However, those in charge have decided that the group is not going to leave the forest. They have also decided that the only clue about how to get out of the forest should be buried and forgotten. At the same time, they have been busy drawing a pretty picture of the trees in the forest.

There is a new SAB newsletter, for May 2012. The tone has changed considerably, as has the level of the contributions (for the worse). Another statistical metadata activity grinding to a halt. Why? It seems that there has been considerable criticism about the lack of a clear business case for the sharing of statistical software. It seems that the message from this blog is finally hitting home. The high-flying metadata frauds are starting to take some serious Triple A.

At the same time, things seem to be back to normal in the GSIM project. It turns out that the new and improved expertise suggested for the future of the GSIM project consists of Dan Gillman for “conceptual” work and Jenny Linnerud. Why not ask these two what happened to CRM and Neuchatel for variables?

We also find Chris Nelson suggested as a consultant for the elaboration of a technically correct final GSIM document. Go figure! He is of course more than capable of that, but there are limits to what he can do when the material in itself is shoddy. And he does not have the best of records with regard to understandable documentation for the average user, viz. SDMX.

I have always wondered. If they were so satisfied with SDMX, why did they not just pay Chris to also do the GSIM?

A small aside.

These quotes are from an article by a famous legal anthropologist. She discusses the culture of international co-operation in the area of negotiation.

“What is so powerful about professional cultures is their built-in protection against participating professionals examining the underlying assumptions of their trade … They write more like ‘true believers’, avoiding controversy even at the cost of self-reflection.”

In fact, one of the most important present-day scourges is international bureaucracies of this kind, without any real accountability or respect for taxpayers' money.

(See Nader, Laura. “Civilization and its negotiations”. In: Falk Moore, Sally (ed.), Law and Anthropology – A Reader. Oxford: Blackwell Publishing, 2005, pp. 330–342. The quotes are from page 341.)

We are now in September 2012, and the GSIM project seems to be happily steaming on. Version 0.4 quickly became version 0.8, which is now out for review.

So, all is fine?

Well, I would like to know why they have not published the feedback on version 0.4. The feedback on version 0.3 was something of a catastrophe. In any other environment the project leaders would have been sacked and the project would have been terminated.

Of course, in the international metadata circuit the opposite happens. The more you fail, the more new opportunities you get.

If everything really is fine, then why have they stopped publishing the feedback on their model versions?

Is not transparency a good thing, anymore?

Without any real information about the process any longer, we have to be satisfied with looking at the results. Is GSIM version 0.8 worth the paper it is written on?

Well, let us start higher up, with the HLG-BAS. They have recently had a conference (7–8 November 2012). In the imagery produced by this group we are now told that the next step is practice. Exactly how this step is going to be undertaken is, as usual, not clear, although we find all the usual buzzwords. So, GSIM was not a solution to practical problems, we must conclude.

In fact, this is now admitted in no uncertain terms. The GSIM model is for “communication”. If it is to be used at the “systems level” it will require further extension and mapping to real standards, or at least to what today is touted as such standards (DDI and SDMX).

By the way, I may have been wrong about one thing. They did not abandon the only clue they had during sprint 2. That information must have been false, or they have now changed their minds due to the criticism from this blog. Now the “generalized statistical production system” seems to be very much in play again in the imagery produced by the HLG-BAS. This is now described as the way of bringing everything together.

That this production system has still not been defined does not seem to be a problem. They have actually managed to invert the entire process: they began with the model and then added the system.

Does the GSIM lend itself to practical implementation?

The documentation has now become so thick and verbose that it is hardly accessible to anyone. Even I get bored after reading for a while. Good luck with that! In fact, it seems that they are aware of this problem – which does not mean that they will be able to solve it.

It is time for another catch-phrase: “Keep it simple, stupid!”

Somebody with superior knowledge can of course also quickly detect fallacies (or suspected fallacies) in the model. For example, a failure to separate grouping levels and instances, thus producing self-contradictions. Or the continued confusion regarding conceptual and representation aspects, which now results in abstruse if not ludicrous language such as “objects that play the role of concepts” – whatever that means. The confusion about the variable also remains. We now have the latest addition, which is “variable, represented variable, and instance variable”. Were the terms used in ISO/IEC 11179 or Neuchatel for variables not good anymore? If so, why not? What happened to the “data element”?

I do not think that you can expect normal people to understand this, for the simple reason that the GSIM people themselves do not understand what they are doing.

The practical issues to be solved have very much to do with the modelling of different levels, conceptual versus representation, versioning, etc., so as long as this confusion remains there will be no next step with solutions to practical problems.

In other words, going nowhere slowly.

What are NSIs supposed to do with all this? Well, it seems that they are expected to read through the massive documentation and be inspired to improve their own current models.

But that is not primarily what they have been crying out for. They want real solutions to real problems. Imagine defining the GSIM process and determining not just its more high-flying and even fictional proposed outcomes, but instead a real customer in the real world with a real demand. Who would that customer be? What would their demand be? GSIM version 0.8? Hardly!

1. They want to be able to evaluate the HLG-BAS and GSIM projects, and for that they first need a real description of the generic production system, instead of just a box in an image (and a stifling document with a verbose list of objects without any real organizing principle). It is the system description, stupid! (And here we mean a real working IT system for daily use in statistical offices.)

2. They want solutions to their practical problems with existing systems. For example, they want to build a better classification database, they want a questionnaire system that they can rely on to be truly generic, they want a simpler way to document variables, and they want to integrate their existing models and systems. We find no mention of these real user needs, anywhere.

Why? Because the international statistical metadata community does not have the means to satisfy them. In fact, they do not even want to, because it would mean the end of the gravy train and the high-flying, no reality-check nonsense talk.

This blog now produces country statistics. Who reads this blog? Between 25 February 2012 and 24 November 2012, we have the following statistics.

Country Views
Switzerland 362
Australia 198
United States 161
India 155
Netherlands 149
Sweden 140
United Kingdom 102

Can Switzerland be explained by UNECE staff? Or other UN staff?

Kudos to the Dutch! They are the only continental, non-Anglo-Saxon country with a strong presence.

By the way, can anyone explain India?

There have been calls for comments on version 1.0 of the GSIM. As we all know, it was published at the end of 2012. Four months have now passed.

In that time, all the international resources working with these issues could have produced a pilot system based on a real model that really works.

What has been happening instead? I think the place to be is the GSIM version 1.0 discussion forum – if we can call six posts in four months with few or no replies a discussion.

What can we learn from these six posts? Here is an example:

“Netherlands and Norway operate with at least 4 different steady states for data e.g. raw data, clean microdata, macrodata/statistical data, published data.

How is this modelled in GSIM? There is Unit Data and Dimensional Data, but how do we model the amount of processing? Attributes?”

Here is another:

“When mapping GSIM to our information model we got a bit confused about the difference between Code – CodeItem and Category – CategoryItem.

A CodeItem combines the meaing of the Category with a representation, like in “F-female”, where female is the Category. What then is the code? “F”?

We have the same confusion for Category, above the Category is female, CategoryItem is defined as an element of a category set, would that not also be “female?

So my question is what is the difference between Category and CategoryItem and between Code and CodeItem?”

In a reply to this question about CodeItem, etc., major revisions were suggested.

Here is a third example:

“Issue raised by GSIM / DDI mapping work:

What is a non structured data set? Why do we need it?”

What does this tell us?

1. They have still not covered the basics properly. There is even major confusion about basic issues. At the same time, these issues have already been discussed and even solved by others, but they could not care less. Classifications are an example of this.

2. There are major holes in the model. The states of data are an example of that.

3. This, in turn, suggests that they have made haste to meet deadlines without really delivering. The designation 1.0 is a fraud.

4. The individual agencies are probably hard of hearing. The GSIM project announced that the model is not a real model, only a guideline, yet they seem to be happily trying to model away based on that guideline.

5. They have not properly co-ordinated their efforts with existing models.

6. Yet again, we recognize the tail wagging the dog and the “Frankenstein” syndrome. For example, the mess about CodeItems reflects input from existing systems, especially SDMX (a sketch of one possible reading follows this list). This is another reason to label 1.0 a fraud.

The whole purpose of GSIM was to start afresh and create a framework. The model was also supposed to be more comprehensible and user-friendly, yet not even the experts themselves understand the objects in the model. As we all know, and as we now also plainly can see from the “discussion” about version 1.0, they have done no such thing.

It is the system, stupid!

It is the user-requirements, stupid!

This whole mess could be over in under three months, with professional input and combined resources, yet it carries on and carries on over the decades. What you see here, in March 2013, has been going on since 1973 in individual NSOs. The concerted international effort is now more than ten years old.

I ask myself, in this time of financial crisis, how is it defensible that taxpayers should continue to finance this type of international fraud and incompetence?

The GSIM discussion forum is definitely the place to be!

The IMF have some harsh words to say about GSIM 1.0. They first, sarcastically, quote the GSIM document about ease of understanding, and then they proceed to severely criticize it on that very point!

Then someone cuts in and explains that the IMF's proposed solution is not the way to go, either.

The whole thing ends with Alistair Hamilton speaking in tongues about necessary “modernisation” and Stephen Vale trying to smooth things over.

In other words, they are still bickering about basics.

What does this confirm?

1. It confirms that the label 1.0 was a fraud. Even the whole sprint thing has been a fraud. They never managed to define what they were doing – and how can they achieve something when they do not know what it is? Instead, the whole thing has been about window dressing: proving that they can achieve results, but doing it by cheating.

I told you so!

Take this quote, for example:

“At METIS, and the informal workshop on CSPA that followed it, a number of people articulated that none of the existing mid-level diagrams substantially meet the audience needs articulated above. There seemed to be fairly broad agreement that the four object groups are not particularly helpful, and it would be useful to look for more useful presentations of the structure at the high and middle levels.”

The four object groups “are not particularly helpful”!

Yet, that structure is what the GSIM project proudly communicates, in one colored graphic after another! These groups are also a fundamental structure that defines the entire project, at least nominally.

I have already commented on the nonsensical and substandard conceptual work that is manifest in these upper levels of GSIM (and partly also the GSBPM). These people are not sufficiently competent, nor are the methods they use.

2. Why was the critical majority view, as above, not heard or catered to during the project, or at least before publication? What this suggests is that a hard-core group of entrenched countries and “experts” has been bullying the majority. That is UN democracy in action, for you!

By the way, have you read the latest best-seller about decision making? By the Nobel Prize laureate Daniel Kahneman?

In his view, the worst thing one can do is to put everyone together in the same room, the way that the GSIM sprint (and the discussion forum) have done. This only produces confirmation of early failures and silences the best individual initiatives.

What is the right method, according to Kahneman?

Someone, who is neutral and objective, should interview each one of the dedicated experts, individually.

This is something that I have already proposed on this blog. The former leading experts of earlier similar projects should be interviewed – not by UNECE, as I suggested before, we are way beyond that now – but instead by an external consultant.

The result of those interviews should form the basis of a completely new strategy.

The entire current leadership should also resign. These people have no place in a responsible democratic system. (Oh, sorry, I forgot, this is the UN. Forget the part about a responsible democratic system!)

What is the clearest sign of failure, in the statistical metadata circuit?

Everyone scurrying from an old abbreviation to a new one. The time for this has now come. The latest addition to the alphabet soup is “CSPA”.

What is CSPA? Here are its goals:

  • “facilitate the process of modernization
  • provide guidance for operating change within statistical organizations 
  • provide statisticians with flexible information systems to accomplish their mission and to respond to new challenges and opportunities
  • reduce costs of production through the reuse / sharing of solutions and services and the standardization of processes
  • provide guidance for building reliable and high quality services to be shared and reused in a distributed environment (within and across statistical organizations)
  • enable international collaboration initiatives for building common infrastructures and services
  • foster alignment with existing industry standards such as the Generic Statistical Business Process Model (GSBPM) and the Generic Statistical Information Model (GSIM), and
  • encourage interoperability of systems and processes”

This is a carbon copy of the mission statement of most of the key “statistical metadata” projects since the 1970s, as already mentioned on this blog. It is clearly also a carbon copy of the two latest such projects, the now abandoned “METIS framework” and the GSBPM/GSIM.

Is an “architecture” the right way to go?

No, the architecture needs a framework. And then the same old modelling issues remain. That is still the core of the problem. It is the system, stupid! It is the truly standard model, stupid!

Such a system and such a model have existed for some ten years now, but the statistical metadata circuit could not care less. Instead, they are jumping from one acronym to another.

A deeper dive into the archives reveals the full story of the IMF critique of GSIM!

Something is terribly right with that critique, and something is terribly wrong. Let's see what they had to say in their special report, which is also a recent METIS paper from May 2013.

The IMF used GSIM to evaluate one of their “processes”. They accepted the propaganda at face value, but at the same time proceeded to test whether the claims were true. The result?

“While we succeeded in identifying GSIM objects for each information object in our process, it left us wondering if 1) a less suited group would have succeeded, 2) the exercise was worth the time taken and 3) GSIM is scalable. We had negative impressions on all three counts.”

In other words, GSIM did not do what it claimed to do. This is the terribly right part, i.e. testing a model to see if it works. The thing that is terribly wrong is that the IMF had people involved in the GSIM project. Why did they not demand that the model be tested during the project and before publication?

Here is another observation, from the IMF report:

“GSIM provides a language for users of official statistics information objects. Those developing GSIM are lexicographers of its dictionary. We need to reflect the language of official statistics, and draw out consensus to form a common reference point for statisticians and related professionals.”

This is also both terribly right and terribly wrong. On the one hand, modelling should stick with simple, real and recognizable concepts and terminology. On the other hand, the very crux of the statistical metadata conundrum is that the types of objects needed in a model do not correspond to any real-life objects.

This is perhaps the single most important reason that statistical metadata projects fail. Real-life terminology and understanding of objects are not sufficient to solve the core modelling issues. That is why regular systems experts are unable to solve these issues. The frequent use of obscure terminology is a reflection of this. There is a vague realisation that there is something terribly complex in the middle of all this, but there is no real understanding of what it is and how it can be solved.

The trick is to combine simplicity with the full complexity needed to achieve a truly generic model.

Meanwhile, the IMF's honest and methodologically sound trial of the latest product of the statistical metadata circuit has confirmed both the extent of incompetence and charlatanism among the GSIM “experts” and responsible managers, and the bullying that seems to take place within the projects.

As mentioned above, the right thing to do is to assign a truth commission. Somebody needs to interrogate those responsible and demand an explanation for the recurring failures. All funding should be stopped until there are plausible explanations and a revised strategy.

Do you think that this will happen? No, of course not. Instead, in ten years' time, the same “experts” will be doing the same thing yet again. Promises, failures, and wasting taxpayers' money.


4 thoughts on “GSIM”

1. I just love your honesty! Also, your blog posts provide hope for those who, for some reason, happen to get familiar with the people in the international metadata circuit, and are also able to see what is truly going on there. They are not alone. I am not alone. You are not alone!
