New Intelligent Biometric Security Program Can Adapt to Human Behavior
Homeland Security News Wire recently reported that researchers at the Biometric Technologies Laboratory at the University of Calgary have improved upon current commercially available biometric identification technologies to the point of creating a form of artificial intelligence capable of making decisions about biometric information received from a variety of sources.
The new biometric security program works by simulating the “learning patterns and cognitive processes of the brain.” The system was developed by the research and application of “neural network-based models for information fusion.”
Professor Maria Gavrilova, the head of the lab that conducted much of the research for this project at the University of Calgary, stated:
Our goal is to improve accuracy and as a result improve the recognition process. We looked at it not just as a mathematical algorithm, but as an intelligent decision making process and the way a person will make a decision.
This learning ability allows the system to combine information from more than one set of data measurements and subsequently to “combine features from multiple sources of information, prioritize them by identifying more important/prevalent features to learn and adapt the decision-making to changing conditions such as bad quality data samples, sensor errors or an absence of one of the biometrics.”
The various “data sets,” of course, include information such as fingerprints, voice, gait, facial features, iris prints, etc. This new system not only categorizes, stores, and recognizes such data but also has the ability to make actual decisions about which piece of information is the most important or effective in a given situation, and to adapt to different environments and incidents where the features are subject to change.
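To make the idea of prioritized, fault-tolerant fusion concrete, here is a minimal sketch of weighted score-level fusion across modalities. All names, weights, and the fusion rule itself are illustrative assumptions for this article, not the Calgary system's actual design; its published approach uses neural network-based models rather than fixed weights.

```python
# Illustrative sketch: fuse match scores from several biometric
# modalities, weighting the more important ones and degrading
# gracefully when a modality is missing (sensor error, bad sample).
# Weights and modality names are hypothetical.

def fuse_scores(scores, weights):
    """Combine per-modality match scores (0.0-1.0) into one decision score.

    scores  -- dict of modality name -> match score, or None when that
               biometric is unavailable
    weights -- dict of modality name -> its priority (importance)
    """
    available = {m: s for m, s in scores.items() if s is not None}
    if not available:
        raise ValueError("no usable biometric samples")
    # Renormalize weights over the modalities actually present, so a
    # failed iris scan or absent fingerprint does not sink the decision.
    total = sum(weights[m] for m in available)
    return sum(weights[m] * s for m, s in available.items()) / total

weights = {"fingerprint": 0.5, "iris": 0.3, "voice": 0.2}
scores = {"fingerprint": 0.92, "iris": None, "voice": 0.70}  # iris sensor failed
print(round(fuse_scores(scores, weights), 3))  # prints 0.857
```

The renormalization step is what lets the sketch mirror the article's claim of adapting to "an absence of one of the biometrics": the decision still gets made, just from whatever evidence remains.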
Yet, although the new program announced by the University of Calgary represents a giant step forward for biometric security systems, the truth is that these programs are not coming in the future; they are already here. Take, for instance, the Bio-AI program created by the company M2SYS. Created under the guise of improving the biometric employee time and attendance systems currently using fingerprints, Bio-AI gathers more and more information about an individual’s fingerprint each time the employee’s finger is scanned.
However, much like Gavrilova’s program, Bio-AI does more than simply store basic data – it actually learns about the data and the environment in which it exists, building on its knowledge each time the subject is scanned.
Because the type of technology in the public view has been increasing in sophistication at a rapid pace, it is relatively easy to understand how programs like Bio-AI and those being researched by Professor Gavrilova will eventually be implemented in a variety of applications within mainstream society. What makes the situation even more concerning, however, is the fact that the technology itself is becoming smarter.
This, in and of itself, is nothing to fear. Certainly, progress is nothing to oppose. However, the fact is that those who are presenting this technology to the world’s population and subsequently implementing it within the social framework clearly do not have the best interest of the world’s people at heart, nor do they care about increasing living standards through technological development. Instead, every piece of technology is used as one more block in the wall of the police state control grid.
Unfortunately, if the general public does not soon wake up to the prison bars being built all around them with devices and developments sold under the guise of convenience and security, then the debate over whether artificial intelligence is superior to the human brain may be settled long before it truly begins.