So, all the cards were on the table already in 1974: both the idea of standardizing the production of statistics and the realization that models, standards, systems, etc., would be required. In the ARKSY project, "reuse" was the key concept.
Then what happened?
In 1991 Bo Sundgren and Bengt Rosén published a model for documentation for reuse of micro data (SCBDOK). Go figure!
Should that not have been solved long ago, as a natural part of the ARKSY project? Sixteen years later there is suddenly something in writing about how to document micro data for reuse! What happened during all those years?
Well, let bygones be bygones. Better late than never. Right? Not really. Did they solve the problem? No, they did not. SCBDOK is a model in the abstract that describes what a survey is. It avoids all the tough issues of exactly what, how and how much information is needed to really reuse data. In fact, the concept of reuse is not even explored on a practical level. It is therefore fair to claim that SCBDOK is not really about documentation for reuse. It is much more like the GSBPM than an analysis of what reuse is and what documentation it requires in practice.
What we see is an example of a structural flaw in the way Bo approaches a problem. He abstracts and theorizes but does not care about implementation and realities on the ground. Much like his OPR(t) model. In 1974 he believed that a general theory of databases was more interesting than Codd's relational model, a model that he described as limited in its practical use.
We also have to repeat our question. Why did Rosén and Sundgren not publish their SCBDOK model in an academic journal?
Well, in the decade to come Bo would produce three more classics:
This seems to say and solve it all. Architecture! Systems! Data and metadata! Modelling! Right?
Not really. Look at the titles! “Guidelines”. Hm, “guidelines”? Who needs guidelines?
Where are the systems, stupid!
Where are the models, stupid!
And why was none of this published in an academic journal?
There may be a partial answer to the last question. Remember the old adage, that besides Ovid he only quoted himself? I can recommend that you click on the links and then look through these documents for references.
Who do you think appears in the bulk of those references? Some 10 out of 16 are Sundgren himself. And the others? Well, they are also from the statistical metadata circuit. It even seems that several are from the same project or conference.
In these lists we also find a partial answer to what happened between 1974 and 1991. There are a few publications by Sundgren, and by Malmborg and Sundgren, from 1980, 1984 and 1989. We also find an article published in an academic journal, from 1985! However, it is not by Sundgren, and it appeared in Statistics Sweden’s own journal (JOS).
Where are the articles in real academic journals, stupid!
We also find that the OPR(t) and alpha-beta-tau models have resurfaced. Why do these models appear already in 1974 and then remain in oblivion until the mid-1990s, when they are suddenly published by the UN? Without references? Surely, by then there was plenty of new research and theory about, for example, dimensional modelling that could have been referenced?
Who has heard of a professor that publishes his results only via government institutions and conferences? What kind of a professor is that? What has he achieved? Why is he afraid of real peer reviewing? Why has he not published anything in academic journals? (There it is, again, that obnoxious question).
There is a bigger picture here. As before, we can give Bo the benefit of the doubt. Maybe statistical computing is very different in the NSO/ISO sector, and he has truly been a pioneer in this area. Maybe that is why he does not have any real articles or references?
However, this does not fly. Already in the 1974 ARKSY report he claimed that the project was of interest outside of the NSO community. It is also clear that he has been intellectually lazy in referencing others and that he has avoided the necessary peer review of his OPR(t) and related models.
The very unfortunate result of this is that it has become customary within the statistical metadata circuit to not build bridges and validate their work in the real world. You quote Sundgren and Sundgren quotes himself.
However, worse than that is what this has done to problem solving. If NSO/ISO requirements are very different, then exploring and understanding those differences should be at the heart of all statistical metadata concerns. Is it? No, they seem happy with their Neuchatel, and their DDI, and their SDMX, and their GSBPM, without referencing, comparing or exploring what has been done in the private sector.
Bo Sundgren’s intellectual biography tells us something about how statistical metadata research developed into a closed shop.
It gets worse. There is in fact reason to suspect that Bo, some time between 1974 and 1991, realized that either the goals were impossible to meet or he was not capable of finding a solution. So he salvaged what he could, and created a closed shop for himself where he could take on the role of pioneer and guru, and no one would hold him accountable.
And it worked.
Today, the same thing is being done, but now on a level and a scale that no one could dream about in the 1990s. That includes the level and scale of charlatanism.
Footnote: The SCBDOK model as described in 1991 can be compared with the current GSBPM and especially the GSIM. It is still better than the latter. In the realm of the blind, the one-eyed man is king!