Friday, January 27, 2017

Accidental Engineering Inventions --- Micro-electromechanical Systems and Microwave Ovens




Serendipity in Engineering

Many inventions have come about accidentally, in engineering as well as in other areas of research.  Famous cases of serendipity include the discoveries of penicillin, sticky notes, Play-Doh, and super glue.  Engineering has its own share of serendipitous inventions, and two will be discussed here:  smart dust, or MEMS (micro-electromechanical systems), and microwave ovens.


Smart Dust or MEMS (Micro-electromechanical Systems).

Smart dust devices are small silicon-based devices about one cubic millimeter in size.  Smart dust devices, also called MEMS (micro-electromechanical systems), function as sensors, robots, or other micro-electromechanical devices that detect physical properties such as light, temperature, chemicals, magnetism, or vibration (Hsu, Kahn, & Pister, 1998).  Hence, the devices are useful for analyzing physical, chemical, or biological systems; that is, they interface between the electronic world and the physical world.  The "electro" part is the electronic sensing and processing of the sensed data, and the mechanical part is the sensor's interface to the physical property.  The devices wirelessly communicate the data about the physical attribute they are sensing to a computer, where the sensed information is further analyzed (Kahn, Katz, & Pister, 1999).  Sensing is often done through radio frequency identification.  Berkeley is working on a one-cubic-millimeter MEMS device that includes a programmable microprocessor, a bidirectional optical communications system, a sensor, a power supply, and analog-to-digital conversion (Pister, Kahn, & Boser, 1999).
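As a rough, hypothetical sketch of the data path such a mote implements (sense, digitize, transmit), here is a minimal simulation in Python.  The class and method names are invented for illustration and do not correspond to any real MEMS hardware or API.

```python
import random

class SmartDustMote:
    """Toy model of a smart dust mote: sense -> digitize -> transmit."""

    def __init__(self, mote_id, adc_bits=8, v_ref=1.0):
        self.mote_id = mote_id
        self.adc_levels = 2 ** adc_bits      # e.g. 256 levels for an 8-bit ADC
        self.v_ref = v_ref                   # full-scale sensor voltage

    def sense(self):
        # Stand-in for the physical transducer (light, vibration, temperature, ...)
        return random.uniform(0.0, self.v_ref)

    def digitize(self, analog_value):
        # Analog-to-digital conversion: map the voltage to an integer code
        return int(analog_value / self.v_ref * (self.adc_levels - 1))

    def transmit(self, code):
        # Stand-in for the optical or RF link back to a base-station computer
        return {"mote": self.mote_id, "reading": code}

mote = SmartDustMote("mote-01")
packet = mote.transmit(mote.digitize(mote.sense()))
print(packet)   # e.g. {'mote': 'mote-01', 'reading': 142}
```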

Potential applications of smart dust systems include the following.  A smart dust camera is possible with a lens about 120 millionths of a meter across that takes high-resolution photographs; this is due to the light-sensing capability of some MEMS devices.  3D printing of a circuit for such a small camera is possible, and the lenses are composed of strands of optical fiber.  Such small cameras could do medical imaging inside the body, even in the brain, and could also serve as spy cameras or security cameras.  MEMS that sense chemicals can be used to detect tumors or biological agents.

The Accidental Discovery

The devices were discovered by accident by Jamie Link, a graduate student at the University of California, San Diego.  Link accidentally shattered a silicon chip she was working on and, upon further investigation, noticed that some of the resulting fragments still functioned as sensors.  Researchers at U.C. Berkeley, with support from the Defense Advanced Research Projects Agency (DARPA), worked on the devices to develop them into usable products.

Forces Driving the MEMS Technology

There are several drivers of this technology.  First of all, the medical community is in dire need of non-invasive, non-destructive ways to do medical imaging within the body.  Applications such as tumor detection, detection of plaque build-up in the arteries, and brain imaging for cancers and other neural damage are all critical.  The micro-sensors could also have applications in meteorology, with the ability to sense differences in pressure, temperature, and light.  Various defense-related applications are driving the technology, such as battlefield sensing, particularly the ability to sense chemical or biological agents; hence the interest of DARPA and the Department of Defense in this technology.  Another application driving MEMS is package tracking in inventory control and shipping: these small communications devices could be embedded in packages and could automatically report their whereabouts over the internet without any scanning involved.  Another commercial use would be product quality monitoring, including monitoring the physical attributes of food (to ensure freshness and edibility) or electronics products (to ensure they were not exposed to destructive temperatures or moisture).  Medical devices could become practically invisible, for example a micro hearing aid.

Microwave Ovens

Microwaves and Microwave Technology.

A microwave is an electromagnetic wave with a frequency of roughly 10^8 to 10^12 cycles per second (Hz).  Every form of electromagnetic energy has a frequency associated with it, measured in Hertz, or cycles per second; the rise and fall of the energy follows the form of a sine wave, and the distance from the crest of one wave to the next is one cycle.  Microwave radiation is invisible.  Its frequency is lower than that of infrared light, which is also invisible.  Visible electromagnetic radiation (the colors) ranges from about 3.8 * 10^14 to 7.5 * 10^14 Hz.  Radio waves have even lower frequencies than microwaves, at 10^4 to 10^8 Hz.  We all know that radios broadcast audible signals.  Early radar used radio waves, but many radar systems, including those Percy Spencer worked on, use higher-frequency waves such as microwaves.  There is a whole engineering science behind microwave communications.  Microwave communications systems usually work with electromagnetic carrier waves between 1 GigaHertz (1 * 10^9 Hz) and 300 GigaHertz (3 * 10^11 Hz).  Microwave communications now carry wireless computer data, television signals, and telephone signals over medium to long distances through point-to-point connections or satellites.  It was while investigating microwave radar that Percy Spencer noticed that microwave systems generated heat.
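Since all of these waves travel at the speed of light, frequency and wavelength are tied together by the relation wavelength = c / frequency.  The small Python check below shows how different the wavelengths are across the spectrum; the specific frequencies are just representative values picked from the ranges above.

```python
C = 3.0e8  # speed of light in meters per second

def wavelength(freq_hz):
    """Wavelength in meters for an electromagnetic wave of the given frequency."""
    return C / freq_hz

for name, freq in [("radio (1 MHz)", 1e6),
                   ("microwave (2.45 GHz)", 2.45e9),
                   ("infrared (300 THz)", 3e14),
                   ("visible green light (~560 THz)", 5.6e14)]:
    print(f"{name}: {wavelength(freq):.2e} m")
```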

The Accidental Discovery

In 1945, Percy Spencer, a self-taught engineer regarded by many as an engineering genius, was working on microwave radar systems for Raytheon.  While standing near an operating radar set, Spencer noticed that his pants were getting hot and that a chocolate bar he had in his pocket had melted.  Quick to realize the potential of this side effect of microwave systems, he convinced Raytheon to patent a microwave oven.  Although the patent was filed in 1945, it was not until 1967 that the real microwave oven revolution took off.

Microwave Ovens and Microwave Oven Technology

Microwave ovens warm food by passing microwaves through it at a frequency of 2.45 gigahertz (2.45 billion cycles per second) in residential kitchens; in industrial installations, a 915 megahertz (915 million cycles per second) frequency is typically used.  These frequencies are different from those used by microwave communication devices so as not to interfere with those signals.  Food in a microwave oven is warmed by dielectric heating: the microwaves pass through the food and excite its water and other polar molecules.  The microwaves penetrate a few centimeters into the food and heat that whole depth at once, whereas traditional cooking heats from the outside surface inward.  A consumer microwave typically draws about 1100 watts of electricity to create roughly 700 watts of microwave energy, the rest being dissipated as heat (Risman, 2009).  A microwave oven uses a Faraday cage to prevent microwaves from escaping the oven.  Ovens also include controls that adjust the amount of microwave energy produced and hence the cooking time.
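As a back-of-the-envelope illustration of those power figures, the sketch below computes the oven's rough efficiency and estimates how long 700 watts of microwave power would take to heat a cup of water, ignoring all losses.  The 250-gram cup and the temperature values are assumptions made up for the example.

```python
# Rough numbers from the text: ~1100 W drawn, ~700 W delivered as microwaves
input_power_w = 1100.0
microwave_power_w = 700.0
efficiency = microwave_power_w / input_power_w
print(f"Efficiency: {efficiency:.0%}")                        # about 64%

# Time to heat a 250 g cup of water from 20 C to 95 C (ideal, no losses)
mass_kg = 0.25
specific_heat = 4186.0          # J per kg per degree C for water
delta_t = 95.0 - 20.0
energy_j = mass_kg * specific_heat * delta_t
print(f"Time: {energy_j / microwave_power_w:.0f} seconds")    # roughly 2 minutes
```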

Forces Driving the Microwave Oven Revolution

Home Cooking

The invention and patenting of the microwave oven by Raytheon in 1945 long preceded the microwave oven revolution that occurred in the 1960s; the technology first had to be miniaturized and productized.  The systems available in 1955 were too big to be marketable.  In 1967, Amana produced a compact countertop microwave oven that could go in everyone's kitchen.

In the 1950s, when television was all the rage, TV dinners became popular, but these had to be cooked in conventional ovens.  The desire for a quick hot meal or snack was already there, and it became one of the main drivers for microwave ovens.

When residential microwave ovens became practical in 1967, a revolution followed almost overnight; everyone was buying a microwave oven, because the convenience could not be matched.  Food products soon drove sales further: frozen, microwave-ready meals and snacks.

Restaurant Cooking

Microwaves are not used extensively in commercial cooking, as chefs discourage their use in favor of other heat sources such as a gas flame.  However, almost every commercial kitchen does have a microwave or two.

Industrial Applications

There are various industrial uses for microwave ovens.

Cooking and Heating in Space.  

Microwave ovens are used extensively in space.  You cannot create a flame in the absence of oxygen, and even in the oxygen-rich environment of a space capsule or station, you would not want an open flame.  Hence, heating is typically done by microwave.

Conclusion

Two examples of serendipitous engineering inventions have been discussed here:  smart dust, or MEMS, and microwave ovens.  The accidental discovery of each was covered, along with the technology behind both types of products.

References


Hsu, V., Kahn, J. M., and Pister, K. S. J. (1998). "Wireless Communications for Smart Dust", Electronics Research Laboratory Technical Memorandum Number M98/2, February.

Kahn, J. M., Katz, R., and Pister, K. S. J. (1999). "Mobile Networking for Smart Dust", ACM/IEEE Intl. Conf. on Mobile Computing and Networking (MobiCom 99), Seattle, WA, August.

Pister, K. S. J., Kahn, J. M., and Boser, B. E. (1999). "Smart Dust: Wireless Networks of Millimeter-Scale Sensor Nodes", Highlight Article in 1999 Electronics Research Laboratory Research Summary.

Risman, P. (2009). "Advanced topics in microwave heating uniformity", pp. 76-77, in M. W. Lorence and P. S. Pesheck (eds.), Development of Packaging and Products for Use in Microwave Ovens, Elsevier.

Tuesday, January 24, 2017

Technological Think Tanks



Think Tanks for Innovation

A think tank is not a new idea.  Many think tanks exist to find solutions to geopolitical and public policy problems, such as the Hoover Institution, but think tanks can also be used for, and are very appropriate for, brainstorming about technological problems, solutions, and futures.

Some guidelines for think tanks are the following.  It may be wise to have a diversity of participants in order to yield the most diverse ideas; different viewpoints, experiences, and values are encouraged.  This is clearly important in geopolitical think tanks, but even in technological think tanks, where mostly experts are utilized, a diversity of experience and perspectives is valuable.  Transparency of funding may be important to show that the think tank is independent and not tied to a specific group or interest (McGann, 2015).  This is especially true for geopolitical think tanks, but also for business think tanks that are not tied to a certain company and for technological think tanks that are industry specific but not company specific.  Having standard policies and procedures for think tank activities may be important to keep the output consistent.  Collaboration should be encouraged.  Often the best think tanks include a mix of people with industry experience and academicians; this is especially true for technological think tanks, where some of the best research occurs both in industry and at universities.  In modern think tanks there should be both personal contact and the use of digital technology for communication.  Finally, think tanks should be somewhat lean, that is, not big groups but smaller, effective groups with qualified participants.  This is especially true of technological think tanks, where the members may largely be experts related to the technology.

Different Think Tank Methods

One Roof.  The One Roof method involves gathering a set of thinkers in a common place.  Through interactive discussion, a concept or problem may be expanded and moved toward a solution.  The one roof may be literal (a set of thinkers in a room) or electronic, as a set of thinkers interacting through a collaborative audio-video link.  Sitting in a circle may be effective, since participants can look directly at one another as ideas are proposed, considered, and challenged.  This think tank method may include a facilitator for the discussion.

Forced Connections.  Thinkers consider two disparate things and then look for links between them.  In identifying the links, many new words and concepts emerge; this is the language of the concepts that link the two original concepts.  The thinkers then think through the problem they are addressing (the problem identified by the two original concepts) using the new link concepts.  One drawback is that if everyone starts with the same two initial concepts, there may be a limit to out-of-the-box thinking.

Mind Mapping.  Mind mapping is a graphical technique for performing think tank work.  The thinkers take a main idea (a problem or concept) and determine its subtopics, which radiate as branches from the main topic.  The branches continue to branch out into more subtopics and form a topology like a tree (in the computer science sense of an inverted tree with the root at the top) or, more generally, a graph (a set of nodes, or vertices, interconnected by a set of links, or edges).  The subtopics help ground the idea and connect it to specific, practical ideas that help the innovation process.  Mind mapping is particularly good when confronting a complex idea or difficult problem: breaking the problem or idea down into its components and then linking the relationships between those components can move the group toward a solution.  A mind map could also be produced as a UML (Unified Modeling Language) diagram.
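Because a mind map is essentially a rooted tree (or, more generally, a graph) of topics and subtopics, it can be represented directly with a simple data structure.  The sketch below is a minimal, hypothetical Python example; the topic names are made up purely for illustration.

```python
class Topic:
    """A node in a mind map: a label plus its subtopic branches."""

    def __init__(self, label):
        self.label = label
        self.subtopics = []

    def branch(self, label):
        child = Topic(label)
        self.subtopics.append(child)
        return child

    def show(self, depth=0):
        # Print the tree with indentation showing the branch structure
        print("  " * depth + self.label)
        for sub in self.subtopics:
            sub.show(depth + 1)

# Build a small mind map with the main idea at the root
root = Topic("New product idea")
market = root.branch("Market")
market.branch("Target customers")
market.branch("Competitors")
tech = root.branch("Technology")
tech.branch("Prototype")
tech.branch("Manufacturing")

root.show()
```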

References

McGann, T. (2015).  The 2015 Global Think Tank Innovations Summit Report.  Located at:  http://repository.upenn.edu/cgi/viewcontent.cgi?article=1015&context=ttcsp_summitreports

What is Mind Mapping?  Located at:  https://litemind.com/what-is-mind-mapping/

Monday, January 23, 2017

Molecular Computers



Unit 1 Discussion 2
Eric W. Wasiolek
Futuring and Innovation
Dr. Rhonda Johnson

Describe an innovation idea that is not possible today but will be available in the next 15 to 20 years.

Molecular Computers.

We are nearing the end of Moore's law with current chip technology.  Moore's law states that the number of transistors that can be placed on an integrated circuit (chip) doubles approximately every two years.  This trend has roughly held since Gordon Moore's 1965 observation and has become part of the basis for the continued miniaturization of digital devices.  However, Moore's law is about to run into the physical limitations of the current photolithography process, which prints ever more transistors per square inch onto current semiconductor materials (silicon, gallium arsenide), and those limitations constrain further miniaturization.  Fundamentally new technologies at the molecular and atomic level will lead to the ultimate miniaturization; that is, you cannot build a transistor smaller than a small set of atoms unless you attempt to build a computer at the quantum level.
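As a quick illustration of what that doubling rate implies, the sketch below projects transistor counts forward under an assumed two-year doubling period, starting from an arbitrary baseline.  The numbers are illustrative only, not actual industry figures.

```python
def moores_law(initial_count, years, doubling_period_years=2):
    """Project a transistor count forward, assuming it doubles every doubling period."""
    return initial_count * 2 ** (years / doubling_period_years)

baseline = 1_000_000_000   # assume a 1-billion-transistor chip today (made-up baseline)
for years in (2, 10, 20):
    print(f"In {years} years: about {moores_law(baseline, years):,.0f} transistors")
```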

I am a subscriber to Scientific American and have read some articles in that magazine on molecular, bio-molecular, and quantum computing.  These technologies are clearly very much in their infancy and would not see any maturation for 15 to 20 years.  Nano-computers, as some call them (nano referring to the billionth-of-a-meter scale), operate at the molecular level.

I would like to state a little more strongly how different molecular computers are from traditional computers.
A traditional computer is just a voltage processor.  A one is represented as a high voltage and a zero as a low voltage.  In a logic gate, for example an AND gate, if two high voltages come in, then a high voltage comes out; otherwise a low voltage comes out.  In an OR gate, if two low voltages go in, a low voltage goes out; otherwise a high voltage goes out.

In a molecular computer there are no voltages; it is not a voltage processor.  There are no lines and gates etched on a chemical surface to create a circuit.  There are only molecules, or assemblages of molecules.  A one is represented as one state (or configuration) of the molecule, and a zero is represented as another state of the molecule.  This is not an electronic machine; it is a chemical machine.  Yes, there is some electronics involved in chemistry, since it is through electron sharing that atoms combine.  The next generation of computers will not be built by computer scientists or electrical engineers; it will be built by chemists and physicists.  Computer science will still apply, since the logic of the molecules will be like the logic gates of a traditional computer, and the ones and zeros that make up the computer's instructions and data will, at an abstract level, be the same.  So programming will be similar.  At the physical level, the two computers are completely different.  Not only is this a breakthrough technology, it is so completely different that breakthrough is not a strong enough term.
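A small sketch can make this point concrete: the logic layer stays the same even when the physical encoding changes.  Below, the same AND and OR truth tables are evaluated once, and each result is labeled with two different physical encodings of the bit, one as voltage levels and one as two hypothetical molecular states.  The "molecular" encoding is purely illustrative, not a real chemistry.

```python
# The logic layer: pure 0/1 operations, independent of how bits are physically stored
def AND(a, b): return a & b
def OR(a, b):  return a | b

# Two hypothetical physical encodings of the same bit values
voltage_encoding   = {0: "low voltage",       1: "high voltage"}
molecular_encoding = {0: "molecular state A", 1: "molecular state B"}

for a in (0, 1):
    for b in (0, 1):
        for name, gate in (("AND", AND), ("OR", OR)):
            out = gate(a, b)
            print(f"{name}({a},{b}) = {out}: "
                  f"{voltage_encoding[out]} / {molecular_encoding[out]}")
```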


What is not possible today, and what does not exist today, is a molecular computer.  By computer is meant an entire working computer that can process information and perform tasks.  This requires a molecular CPU and a large molecular memory, both of which do not yet exist.  Current research has created some basic logic gates out of molecular components.  These logic gates need to be connected into circuits, and those circuits need to form basic building blocks such as adders and complementers in order to create CPU components like an arithmetic logic unit (ALU).
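To make concrete what "connecting gates into circuits" means, here is a minimal half-adder built only from abstract gates.  A real molecular implementation would need a physical way to feed one gate's output into another gate's input, which is exactly the part that does not yet exist; the code is just the logical wiring.

```python
def AND(a, b): return a & b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    """Add two one-bit numbers: returns (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} = sum {s}, carry {carry}")
```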

Identify and discuss two of the forces that define it and that may facilitate or reduce its likelihood of success.

A couple of forces that will define molecular computer technology and facilitate or reduce its likelihood of success are the setting up of labs at universities to work on molecular computers, and the funding of labs at universities and in industry to continue and improve the work.

The work in universities would be done primarily in physics and chemistry departments and their associated labs.  Although this may appear to be computer science or electrical engineering, the ability to hold bits in memory with a molecule, and to process those bits (which are states of the molecule) through molecular logic gates, involves primarily physical and chemical research.  It assumes a basic understanding, drawn from electrical engineering and computer science, of the logic involved in creating gates, logic components, ALUs, and entire computers.  So to some extent this may involve cross-pollination among disciplines (chemistry, physics, computer science, and electrical engineering), but the work would primarily be done in physics and chemistry, assuming knowledge of how computer hardware is built.

Funding of this research in industry would also be helpful.  This could be done in research units of companies or in incubator labs funded by some startup source.  Any practical application of this technology awaits much more development and wouldn't be available for some 20 years.

Government programs to pursue this research at national labs would be helpful as well.  This would require a funding source, which might need to come from a cabinet department or from Congress.

Friday, January 20, 2017

Technological Forecasting and Group Decision Methods



This particular blog post is more managerial than technical.  It discusses group decision-making techniques which may be applied to technical decisions as well as to forecasting technological futures.
The Delphi Group Decision Making Method

In the Delphi group decision-making method, a panel of experts on a topic is assembled and answers questions from a questionnaire in two or more rounds.  There is a facilitator for the group.  The facilitator takes the input from the experts each round and provides an anonymous summary of their forecasts or strategies.  The experts review the summary and revise their own answers each round.  The successive inputs and revisions allow the group to converge toward a single or similar strategy.  The experts may come from inside or outside the organization, but they are all expected to be knowledgeable in their field and position and on the topic of the group decision.  The process remains anonymous; even who made, or had the most influence on, the final decision is kept anonymous.  The facilitator sends out the questionnaires, collects the responses, and summarizes them.

The Delphi method is often used in technological forecasting.  The experts give their opinions on a technology and on when it is expected to mature and be on the market; the group is essentially attempting to forecast the technological future.  This group decision-making method is used for other types of forecasting and decision making as well.
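As a toy illustration of how repeated rounds with a shared anonymous summary can pull expert forecasts together, the sketch below nudges each expert's estimated "year of maturity" toward the group median each round.  The numbers and the adjustment rule are invented for illustration; they are not part of the formal Delphi method.

```python
import statistics

# Each expert's initial (anonymous) forecast of the year a technology matures
forecasts = [2030, 2042, 2035, 2050, 2038]

for round_number in range(1, 4):
    summary = statistics.median(forecasts)          # facilitator's anonymous summary
    # Each expert revises, moving partway toward the group summary
    forecasts = [round(f + 0.5 * (summary - f)) for f in forecasts]
    print(f"Round {round_number}: summary {summary}, revised forecasts {forecasts}")
```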


Two Other Group Decision Making Methods

Nominal Group Technique

In the nominal group technique, members present their solutions to a problem, each with a short explanation.  Duplicate solutions are eliminated from the set of possible solutions.  The solutions are then ranked by the group, and the solution with the highest rank is the decision.  Ranking is often done, but not always; an evaluation of the best solutions may instead be done more subjectively.  Finally, a vote is taken on the solutions, and the solution with the greatest number of votes wins.

Many and diverse inputs and opinions are encouraged so the initial set of solutions is heterogeneous.  Participation of all members and a plurality of ideas are encouraged.  In technical decisions the members may be drawn from across an engineering department and may possibly include members from other departments for an outside perspective.  There is a facilitator, who explains the procedures of the nominal group technique.  Ideas are first generated on paper, silently, so as not to discourage less vocal members.  The ideas are then shared, and the group discusses the various proposals.  Finally, the ideas are ranked and voted on.
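Here is a minimal sketch of the final ranking-and-voting step, assuming each member submits a ranked ballot of the proposals and the ranks are combined with a Borda-style point count.  The proposal names, ballots, and the scoring rule are all assumptions made for the example; the technique itself does not prescribe one particular counting method.

```python
from collections import Counter

# Each member ranks the proposals, best first (hypothetical ballots)
ballots = [
    ["Proposal B", "Proposal A", "Proposal C"],
    ["Proposal B", "Proposal C", "Proposal A"],
    ["Proposal A", "Proposal B", "Proposal C"],
]

# Borda-style count: the higher a proposal is ranked, the more points it earns
scores = Counter()
for ballot in ballots:
    for points, proposal in enumerate(reversed(ballot), start=1):
        scores[proposal] += points

winner, top_score = scores.most_common(1)[0]
print(scores)
print(f"Group decision: {winner} ({top_score} points)")
```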

SWOT Analysis

SWOT analysis is often used to identify the strengths, weaknesses, opportunities, and threats surrounding a business.  It can also be used to identify the strengths, weaknesses, opportunities, and threats around a proposed technology or future product.  The group may examine various proposed solutions according to a SWOT analysis.  There may be strengths (for example, features that the proposed product or technology will have that the competition does not), weaknesses (for example, features that the competition has that the proposed product or technology does not or will not have), opportunities (for example, the ability to release a new product, technology, or feature needed by the market for which no other company has a solution), or threats (for example, technologies or products in the market that may make the proposed solution obsolete).  The group discusses the SWOT of the various solutions, narrows down the possible solutions based upon which have the best SWOT analysis, and finally selects the solution that is most likely to succeed based upon that analysis (for example, a technology or product that has clear strengths, few weaknesses, represents a great market opportunity, and does not seem to face serious threats on the horizon).
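A SWOT analysis for a set of candidate solutions can be captured in a very simple data structure and compared.  Below is a minimal, hypothetical sketch in which each solution is scored as strengths plus opportunities minus weaknesses minus threats; the scoring rule and the entries are assumptions for illustration, not a standard part of SWOT.

```python
# Each candidate solution gets four lists of findings from the group discussion
swot = {
    "Solution 1": {
        "strengths":     ["unique feature the competition lacks"],
        "weaknesses":    [],
        "opportunities": ["unserved market need"],
        "threats":       ["possible substitute technology"],
    },
    "Solution 2": {
        "strengths":     ["lower cost"],
        "weaknesses":    ["missing a feature competitors have"],
        "opportunities": [],
        "threats":       ["market may shrink"],
    },
}

def score(analysis):
    """Naive score: count of positives minus count of negatives."""
    return (len(analysis["strengths"]) + len(analysis["opportunities"])
            - len(analysis["weaknesses"]) - len(analysis["threats"]))

best = max(swot, key=lambda name: score(swot[name]))
for name, analysis in swot.items():
    print(name, "score:", score(analysis))
print("Most promising under this scoring:", best)
```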

Compare and Contrast the Similarities and Differences Between Methods

Delphi and the Nominal Group Technique

One notable difference between the Delphi approach and the Nominal Group Technique is that the Delphi approach is highly anonymous, whereas the Nominal Group Technique (NGT) involves an open discussion of the various proposals and open ranking and voting on them.  The NGT approach has the potential of embarrassing participants; this is avoided in the anonymous Delphi approach, in which it is not even known whose solution was accepted.

Delphi, NGT, and SWOT.

The SWOT discussion is open and is not anonymous, as the Delphi approach is, so it could have the same repercussions as the NGT approach.  However, SWOT is highly structured and is not a random presentation of different solutions: the solutions are specifically evaluated around strengths, weaknesses, opportunities, and threats, which has proven to be a very effective, and by now classic, business analysis technique.