Twenty years ago I faced a really big challenge: proposing the information technology strategy of an insurance company, the leader of the insurance industry in Slovakia. One of the things I proposed was to develop a strategy of forgetting information. The reason was, and still is, the same. For many reasons, this has not been done to this day. And not only at that company: most companies, governmental agencies, and non-profit bodies do not develop anything resembling a strategy of forgetting.
You cannot keep all the data forever. There are several limits imposed by many factors. Computers need a finite amount of time to reach the required results. You do not always need the answer immediately; there are applications that let us wait a reasonable time for results. However, you would not wait 10,000 years for a result, given the natural limit set by average life expectancy. This constraint led to new techniques for computing on several machines at the same time, but paradoxically it did not lead to a strategy of forgetting. Simply put, people find it easier to invent new ways to compute than to use their wits and clearly divide information into two kinds: information we will keep for a long but defined time, and data that may be discarded very quickly. Very quickly, whatever that turns out to mean.
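To make that division concrete, here is a minimal sketch in Python of what retention classes could look like. The category names and periods are hypothetical, chosen only for illustration; any real strategy of forgetting would define its own.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical retention classes; the periods are illustrative, not prescriptive.
RETENTION_PERIODS = {
    "contract": timedelta(days=365 * 10),  # kept for a long but defined time
    "session_log": timedelta(days=30),     # may be forgotten very quickly
    "marketing_event": timedelta(days=90),
}

@dataclass
class Record:
    category: str
    created: date

def is_expired(record: Record, today: date) -> bool:
    """A record whose category has no defined period is kept forever,
    which is exactly the habit a strategy of forgetting should break."""
    period = RETENTION_PERIODS.get(record.category)
    return period is not None and today - record.created > period

# Usage: decide which records may be forgotten today.
records = [
    Record("contract", date(2010, 1, 1)),
    Record("session_log", date(2024, 1, 1)),
]
to_forget = [r for r in records if is_expired(r, date.today())]
```

The point is not the code itself but the principle behind it: every category of data carries a lifetime agreed in advance, instead of an implicit "keep forever".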
The idea of distributed computing does not solve the problem. It solves many problems and lets us perform computations we could never do without these techniques. But it is no substitute for a strategy of forgetting information that is already obsolete. Why? Imagine that we put together the necessary number of machines and, using parallel processing, simply get the answers we require. Then some crazy person asks additional questions, questions that are important to ask and that demand answers. Et cetera, et cetera. One early morning you wake up and simply find that the available resources are no longer enough, so you have to extend the number of computers involved, and of course the power consumption grows as well. So the concept of distributed computing has its limitations too, and in fact it does not remove the necessity of building a strategy of forgetting.
There is another approach at the hardware level: increasing the processing power of processors. Processor manufacturers are doing their best to bring ever more power to computers, and market pressure in this field is very strong. Let us thank that market pressure for the trend, but again, let us remember that there are limits here as well. For sure, processors will become ever more powerful, with ever bigger instruction sets capable of computing ever larger amounts of data at the same time, but with limits!
I have been very surprised over the last two decades that this simple idea, a “strategy of forgetting”, has not become reality. I would have expected a growing level of interest in this field from consultancy agencies, from developers, and first of all from methodologists. In fact, this has slowly been changing over the last 2-3 years.
Just as twenty years ago, most people naively believe that “it will somehow get done later on”… But this naive approach has its limits as well. One morning we will face the necessity of forgetting all our former practices and quickly putting together what to do.
I believe that within the next 10 years a “strategy of forgetting” will be developed. It will lead to large savings on the user side and will open up large business opportunities on the other side, opportunities that do not exist at the moment.
As far as I know, there are some isolated islands of enthusiasm. Some of them can be found by searching Google for “strategy of forgetting” and “obsolete data”. If you have additional information, please do not hesitate to contact me through LinkedIn.