Sunday, December 15, 2013

Scientific Computing: Scientific Research as a Precision Tool

The creation of a complex system that uses real-world data to solve problems often demands a high degree of precision, and that level of precision is rarely achievable without scientific research. As someone with experience working with artificial intelligence and assembly-line automation, I can confirm that prior scientific research is essential for analytical systems in a factory setting, especially those that handle organic objects.

For example, if a food company wants to design a system that checks the quality of potato chips, data must be collected on the chips themselves before a solution can be developed. The first step is to define what quality of product is acceptable for shipment. What are the properties of such a chip? What is its shape, size, color, pH level, trans fat content, etc.? Once acceptable quality is defined, the properties of an unacceptable chip must be characterized as well. These quality levels can be determined through scientific tests and data analysis that identify which properties make the product tasty, unhealthy, visually appetizing, and so on. From this data, thresholds can be created that define exactly what makes a "good" chip and what makes a "bad" one. From there, it is possible to write algorithms that analyze each chip with a variety of sensors and discard the defective ones.
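The thresholding step above can be sketched in a few lines of code. This is a minimal illustration only: the property names, ranges, and measurements below are all hypothetical stand-ins, not real food-industry specifications.

```python
# A minimal sketch of threshold-based quality grading.
# Every property name and threshold value here is hypothetical,
# chosen only to illustrate the idea of per-property acceptance ranges.

ACCEPTABLE = {
    "diameter_mm": (30.0, 70.0),   # (min, max) allowed diameter
    "green_ratio": (0.0, 0.05),    # fraction of green pixels in the image
    "trans_fat_g": (0.0, 0.5),     # grams of trans fat per serving
}

def is_good_chip(measurements):
    """Return True only if every measured property falls within its range."""
    for prop, (lo, hi) in ACCEPTABLE.items():
        if not (lo <= measurements[prop] <= hi):
            return False
    return True

good = {"diameter_mm": 45.0, "green_ratio": 0.01, "trans_fat_g": 0.2}
bad  = {"diameter_mm": 45.0, "green_ratio": 0.20, "trans_fat_g": 0.2}
print(is_good_chip(good))  # True
print(is_good_chip(bad))   # False: too many green pixels
```

In a real sorting line, the interesting work is in measuring those properties from sensor data; the scientific research described above is what tells you where to place each threshold.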

http://mentalfloss.com/sites/default/files/styles/article_640x430/public/green-chip_5.jpg
Green potato chips are sometimes considered undesirable by consumers
During my internship, I assisted my mentor in applying scientific research and analysis to flower bulbs (see my previous post on A.I.). In organic materials, the properties of objects vary greatly depending on each object's genetic makeup. It is very important to prepare properly for this diversity when using computers to solve such problems!

Sunday, December 8, 2013

Computer Graphics: How a Teapot "Shaped" an Industry

In the 1970s, 3D computer-generated imagery was a new concept under active research. At the time, very few computers had enough power to generate 3D images, and even those that did required special hardware (e.g. a graphics terminal) to view the resulting rendering. During this period, the University of Utah was a big name in computer graphics research. One of the university's best-known researchers in the field was Martin Newell. In 1975, he needed a recognizable shape to use as a benchmark for the 3D rendering system he was developing. One day, Newell was drinking tea with his wife and noticed that the teapot's shape had a very diverse range of mathematical properties, making it the perfect object to model his benchmark after.
Martin Newell's Teapot Sketch.

Modern Rendering


Newell then sketched the teapot onto graph paper and entered the coordinates into a computer. The result was an endearing image of a teapot that left a big impact on computer graphics. For decades afterward, the Utah Teapot continued to be used as a benchmark for graphics capabilities. These days, rendering the teapot in films and other works is an inside joke among graphics artists. The original teapot is on exhibit at the Computer History Museum in Mountain View. I was privileged enough to see it there recently!
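Those coordinates became the teapot's famous dataset: a set of bicubic Bézier patches, where each patch is a 4x4 grid of 3D control points. Here is a small sketch of how a renderer evaluates one surface point of such a patch using cubic Bernstein polynomials; the flat control grid at the bottom is made-up test data, not Newell's actual teapot coordinates.

```python
from math import comb

def bernstein3(i, t):
    """Cubic Bernstein basis polynomial B_{i,3}(t), for i in 0..3."""
    return comb(3, i) * t**i * (1 - t)**(3 - i)

def patch_point(ctrl, u, v):
    """Evaluate a bicubic Bezier patch at parameters (u, v) in [0,1]^2.

    ctrl is a 4x4 grid of (x, y, z) control points; the surface point
    is the Bernstein-weighted sum of all 16 control points.
    """
    x = y = z = 0.0
    for i in range(4):
        for j in range(4):
            w = bernstein3(i, u) * bernstein3(j, v)
            px, py, pz = ctrl[i][j]
            x += w * px
            y += w * py
            z += w * pz
    return (x, y, z)

# A flat unit patch in the z = 0 plane (hypothetical test data).
flat = [[(i / 3.0, j / 3.0, 0.0) for j in range(4)] for i in range(4)]
print(patch_point(flat, 0.5, 0.5))  # the patch center: (0.5, 0.5, 0.0)
```

A renderer tessellates each patch by evaluating a grid of (u, v) samples like this and connecting them into triangles, which is why the teapot made such a good benchmark for curved-surface rendering.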
The teapot makes a cameo in Toy Story
The Utah Teapot on display

Sunday, December 1, 2013

Communications and Security: Are Your Tools Working Against You?

Thanks to Edward Snowden, Julian Assange, the NSA, and others, information security has become a critical concern for many people across the globe. Computer users everywhere worry that they may be under surveillance by some kind of government entity. Many experts say that encrypting data and using strong passwords helps a lot, but is it enough? Today, I'd like to talk about the frightening possibility that a threat to your security may lurk in a place unreachable to most users: compilers.

In 1984, Ken Thompson, the co-creator of Unix, gave a very interesting lecture on information security as part of his Turing Award acceptance. The speech, titled Reflections on Trusting Trust, gave an extremely alarming insight into just how difficult it is to keep data secure. In this talk, Thompson described how he was able to modify the source code of his C compiler to “deliberately miscompile source whenever a particular pattern is matched”. The pattern he used as a proof of concept was the login command for Unix. This command is absolutely critical to any Unix system, as it protects users' private data from other users via a password. Thompson was able to generate a C compiler that compiled login in a special way, giving the command a bug that would grant anyone superuser privileges if they typed a very specific password that he came up with. This C compiler would compile any other code normally. Thompson went on to discuss the all-too-possible situation in which a person with enough knowledge and experience could plant exploits like this into assembly and machine code, making them almost undetectable. “You can't trust code that you did not totally create yourself,” Thompson concluded.
Proof of concept code by Thompson
Ken Thompson
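To make the first stage of the attack concrete, here is a toy illustration in Python rather than C. Everything in it is invented for illustration: the matched pattern, the backdoor password, and the pretend "compile" step. (Thompson's real attack went a step further: a second pattern matched the compiler's own source, so the backdoor survived recompilation and vanished from the source code entirely.)

```python
# A toy sketch of a compiler that mis-compiles source whenever a
# particular pattern is matched. The pattern, password, and compile
# step are all hypothetical; this only illustrates the concept.

BACKDOOR = '\n    if password == "kt-secret": return True  # injected\n'

def evil_compile(source):
    """Pretend to compile by returning the source, with a twist:
    if the source looks like the login check, inject a backdoor."""
    if "def check_login(" in source:          # the matched pattern
        # Splice the backdoor in right after the function header.
        header_end = source.index(":") + 1
        return source[:header_end] + BACKDOOR + source[header_end:]
    return source  # every other program compiles normally

login_src = (
    "def check_login(user, password):\n"
    "    return password == USERS.get(user)\n"
)

namespace = {"USERS": {"alice": "hunter2"}}
exec(evil_compile(login_src), namespace)

print(namespace["check_login"]("alice", "kt-secret"))  # True: backdoor
print(namespace["check_login"]("alice", "wrong"))      # False, as normal
```

Note that inspecting login_src reveals nothing suspicious; the hole exists only in the "compiled" output, which is exactly what made the real attack so hard to detect.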

With this horrifying revelation, it is very clear that it is not possible to be completely secure. We can make our passwords as strong as possible, but are those passwords effective if we can't even trust our own tools? I will still do my best to follow conventional security practices, but I acknowledge that they cannot protect me against a skilled enough attacker.

I leave you with a quote:

"The only system which is truly secure is one which is switched off and unplugged locked in a titanium lined safe, buried in a concrete bunker, and is surrounded by nerve gas and very highly paid armed guards. Even then, I wouldn't stake my life on it."
-- Gene Spafford, Director, Computer Operations, Audit, and Security Technology (COAST) Project, Purdue University