Effective human-computer interaction through cognitive biometrics

Abstract
Use of a screen, and of a system, is affected by many factors, including: how much information is presented on a screen, how the screen is organized, the language used on the screen, and the distinctiveness of the screen's components. In this paper I present some principles of screen design and a way of making human-computer interaction more effective through cognitive biometrics and neural technology.
1. Introduction
Human-computer interaction has been playing a very crucial role in industry and in the common man's life as well. It deals with how a human being interacts with and controls his system, and with the methods and techniques required for making human-computer interaction more comfortable and effective so that it gives the desired results. Human and computer interact through only one medium, the interface or screen, so to make interaction effective a screen should be designed in such a way that the user can comfortably complete his task and leave. A well-designed screen reflects the capabilities, needs, and tasks of its users. It is developed within the physical constraints imposed by the hardware on which it is displayed, it effectively utilizes the capabilities of its controlling software, and it achieves all the business objectives of the system for which it is designed. To accomplish all these goals the designer must first understand the principles of screen design. This begins with a detailed series of guidelines dealing with user considerations, including the test for a good design, organizing screen elements, screen navigation and flow, visually pleasing composition, typography and reading, and browsing and searching on the web.

2. Problem
Though human-computer interaction has reached a level where a user can directly touch the screen and manipulate things as he wishes, it still demands the use of hands, i.e. motor work. This takes, of course, a negligible amount of energy, but what if even that negligible amount need not be spent? The main problem with current standards is that they require the use of hands, or motor work, to operate a device (computer) or to get a job done, which is not desired.
3. My idea
As present human-computer interaction techniques demand motor work, the question that arises here is how to reduce it, or how to make interaction comfortable enough that there is no use of hands or motor work at all. The idea in this paper is an answer to this question: by using cognitive biometric techniques, a user can interact with the computer through his thoughts. There will be a brain-machine interface where the interaction takes place. The interface acts like a sensor that detects a human thought, creates a thought pattern in the system, and the system acts according to that pattern. How can this be used to enhance interaction? It is simple: using human-computer interaction screen design techniques, a good screen is designed, and using cognitive biometric techniques the user interacts with the system, so that the desired job is done in less time and in a more effective way. To understand how to design a good screen, some of the basic principles of screen design must be known.
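As a concrete illustration of the pattern-to-action step (a minimal sketch with purely hypothetical pattern and command names; a real system would classify EEG features here):

```python
# Map classified "thought patterns" to screen commands (hypothetical names).
THOUGHT_TO_COMMAND = {
    "imagine_left":  "move_cursor_left",
    "imagine_right": "move_cursor_right",
    "imagine_push":  "click",
}

def act_on_thought(pattern: str) -> str:
    """Translate a detected thought pattern into a UI command."""
    # Unrecognized patterns are ignored rather than guessed at.
    return THOUGHT_TO_COMMAND.get(pattern, "ignore")

print(act_on_thought("imagine_push"))   # -> click
```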
4. Screen design goals
To make an interface easy and pleasant to use, the design goals are: reduce visual work, reduce intellectual work, reduce memory work, reduce motor work, minimize or eliminate any burdens imposed by the technology. The result of these goals will always be improved user productivity and increased satisfaction.
5. Organizing screen elements clearly and meaningfully
Visual clarity is achieved when the screen elements are organized in a meaningful and understandable way. A clear and clean organization makes it easier to recognize the screen elements and ignore unnecessary information. Visual clarity depends on multiple factors, including consistency in design, a visually pleasing composition, a logical and sequential ordering, the presentation of the proper amount of information, and the grouping and alignment of screen items. What must be avoided is the visual clutter created by indistinct elements, random placement, and confusing patterns.
6. Ordering of the screen data and content
It includes dividing information into units that are logical, meaningful, and sensible; organizing those units by the degree of interrelationship between the data or information; ordering screen units of information and elements according to the user's expectations and needs; forming groups that cover all possibilities; ensuring that information that must be compared is visible at the same time; and ensuring that only information relevant to the user's tasks or needs is presented on the screen.
7. Screen navigation and flow
Screen navigation should be obvious and easy to accomplish. Navigation can be made obvious by aligning screen control elements and judiciously using line borders to guide the eye. Various display techniques can be used to focus attention on the most important parts of the screen.

8. Visually pleasing composition
Eyeball-fixation studies indicate that during the initial scanning of a display, in a clockwise direction, people are influenced by the symmetrical balance and weight of the titles, graphics, and text of the display. The human perceptual mechanism seeks order and meaning, trying to impose structure when confronted with uncertainty. Meaningfulness and evident form are significantly enhanced by a display that is visually pleasing to the eye. A visually pleasing composition draws attention subliminally, conveying a positive message clearly and quickly.
9. Amount of information
Present the proper amount of information for the task: too little is inefficient and too much is confusing. Present all information necessary for performing an action or making a decision on one screen, whenever possible. Restrict window or screen density levels to no more than about 30 percent.
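As a rough illustration of that density guideline (my own sketch, not from the original text), density can be estimated as the share of character cells on a text screen that are actually occupied:

```python
def screen_density(rows: list[str], width: int) -> float:
    """Fraction of character cells on a text screen that are occupied."""
    filled = sum(len(row.rstrip()) for row in rows)
    return filled / (len(rows) * width)

screen = ["Name:  John Smith", "City:  Pune", "", "Save   Cancel"]
print(f"{screen_density(screen, width=80):.0%}")  # keep under ~30%
```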
10. Web page size
The size of the page should be kept to a minimum; restrict it to two or three screens of information. Place important or critical information at the very top so it is always viewable when the page is opened, locating it in the top 4 inches of the page.
11. Scrolling and paging
Scrolling should not be required to determine a page's contents. Minimize vertical page scrolling; when vertical scrolling is necessary to view an entire page, provide contextual cues within the page that it must be scrolled to view all of its contents. Provide a unique and consistent end-of-page structure.
12. Distinctiveness
Individual screen controls, and groups of screen controls, must be perceptually distinct. Screen controls should not touch a window border or each other; buttons should not touch each other or a window border; and a button label should not touch the button border. Adjacent screen elements must be displayed in colors or shades of sufficient contrast with one another. Elements of the screen must be distinct, clearly distinguished from one another; distinctiveness can be enhanced through separation and contrast.
13. Focus and emphasis
Visually emphasize the most prominent and important elements, or the central idea or focal point. To provide emphasis, use techniques such as higher brightness, a larger and distinctive font, underlining, blinking, contrasting colors, larger size, positioning, and white space. To ensure that emphasized elements stand out, avoid emphasizing too many screen elements, using too many emphasis techniques, and screen clutter.
14. Conveying depth of levels or a Three dimensional appearance
Use perspective, highlighting, shading, and other techniques to achieve a three-dimensional appearance. Always assume that the light source is in the upper-left corner of the screen. Display command buttons raised above the screen plane, and display screen-based controls on, or etched or lowered below, the screen plane.
15. Presenting information simply and meaningfully
Provide legibility so that information is noticeable and distinguishable. Provide readability so that information is identifiable, interpretable, and attractive. Present information in a usable form, and utilize contrasting display features to attract and call attention to different screen elements. Create visual lines, implicitly and explicitly, to guide the eye. Be consistent in appearance and procedural usage.
16. Organization and structure guidelines
There is a series of organizational and structural guidelines for specific kinds of screens: information entry and modification (conversational) screens, entry screens for a dedicated source document, and display/read-only screens.
16.1 Information entry and modification screens
The organization of these categories of screens should be logical and clear. The most frequently used and the required information should appear on the earliest screens and at the top of each screen. All captions should be meaningful and consistently positioned in relation to the data field controls; they should be left- or right-aligned, in mixed case using headline style. Text boxes and selection controls should be designated by boxes. Spacing and groupings should be created in a logical way, keeping groups medium-sized, about 5-7 lines.
16.2. Dedicated source document
Occasionally it may be necessary to enter details into the screen while reading from a document in hand, for example an application form, a bank loan form, or a reservation form. The main aim in designing a screen for such a purpose is to let the user fill in the details without even looking at the screen, so the screen must be designed in the image of the associated document. Captions may consist of abbreviations and contractions; they should be consistently positioned in relation to the data fields and right-aligned. Text boxes should be designated by boxes, and spacing and grouping should match the document. Headings should be included if they are present on the source document. Controls should be arranged as on the source document, following left-to-right completion, and keying should use manual tabbing.
16.3 Display/Read only screens
Display/read-only screens are used to display the results of an inquiry or request, or to display a computer request. The main objective is to minimize eye movement and optimize scanning by establishing a consistent viewing pattern. Some guidelines for designing such screens follow. Organization should be logical and clear, and the data should be limited to what is needed and necessary. The most frequently used data is displayed on the earliest screens and at the top of each screen. Captions should be meaningful and clear, consistently positioned in relation to the data fields, and left- or right-aligned. Text boxes should not have a surrounding border or box. Spacing and grouping should be logical, and headings should be in uppercase or headline-style mixed case, set off from related controls. Data should be visually emphasized and given a meaningful structure, arranged into columns organized for top-to-bottom scanning.
17. Reading, browsing, and searching on the web
Usually a web page is scanned in a clockwise direction, people being influenced by its graphics, colors, headings, and text. A page is first perceived in terms of its overall shape and structure. Studies of web users indicate that a user's immediate attention is directed to the page's content, ignoring peripherals such as graphics, navigation areas, logos, slogans, advertising, or anything else considered fluff.
17.1. Scanning guidelines
Organization should be such that eye movement is minimized. Provide groupings of information, and organize content in a logical and obvious way. When writing, provide a meaningful title, provide meaningful headings and subheadings, write the text concisely, write short paragraphs containing only one idea, use bulleted and numbered lists, array information in tables, and provide concise summaries. Highlight key information-carrying words or phrases and important concepts.
17.2. Browsing guidelines
To support browsing, facilitate scanning, provide multiple layers of structure, make navigation easy, respect the user's desire to leave, and, upon his returning, help the user reorient himself.
17.3. Searching guidelines
To help the user search for the desired information, identify the level of expertise of the user and anticipate: the nature of every possible query, the kind of information desired, the type of information being searched, and how much information will result from the search. Plan for the user switching purposes during the search process.
Interaction through brain-machine interface
The power of modern computers is growing alongside our understanding of the human brain. It is difficult to imagine controlling a system through your thoughts: you think of moving a cursor from one corner of the screen to the other, and the next second you watch it really happen. How can this be done? The answer is a brain-machine (computer) interface: whatever the user thinks is passed to the computer as a signal, and within the system the signal is processed by a computer program, or software, which makes the computer work according to the thought. Just consider controlling or manipulating a computer or a machine with thought alone.
Brain-machine interface
The complete interaction between human and computer is based on this brain-machine interface, and a brain-machine interface can be designed only because of the way our brain functions. Our brain is filled with neurons, nerve cells connected to other individual cells by dendrites and axons. Every time we think, move, or do some activity, our neurons are at work. This work is done by passing electric signals from one neuron to another; the signals are generated by differences in the electric potential of the ions at the neuron's membrane. The signals travel along nerve fibers insulated by a fatty substance called myelin that surrounds the axon, but some of the signal escapes. These escaped signals can be caught, analyzed, and interpreted to determine what they mean, and they can then be used to command the machine to do the job.
How input is given to the system and how output is displayed
The input from the brain is captured with the help of electrodes attached to the scalp; the recording technique is called electroencephalography (EEG). The electrodes detect the brain's signals; however, the skull blocks some of them and blurs the rest, so to get high-resolution signal clarity an electrode can be implanted into the gray matter or placed on the surface of the brain. The electrode picks up the signal and passes it to the computer, where software processes the signal and displays what is desired. The electrode measures minute differences in the voltage between neurons, and the signal is then amplified and filtered by a computer program.
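A minimal sketch of the amplify-and-filter step just described, assuming a raw trace sampled at 250 Hz and a conventional band-pass filter (the gain, band, and sampling rate are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # sampling rate in Hz (illustrative)

def bandpass_eeg(raw_uv, low=8.0, high=30.0):
    """Amplify and band-pass filter a raw EEG trace (in microvolts).

    The 8-30 Hz band covers the mu/beta rhythms often used in
    motor-imagery brain-computer interfaces.
    """
    gain = 1000.0                          # crude stand-in for hardware amplification
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, raw_uv * gain)   # zero-phase digital filtering

# Fake one second of signal: a 12 Hz rhythm buried in noise.
t = np.arange(0, 1, 1 / FS)
raw = 5e-3 * np.sin(2 * np.pi * 12 * t) + 1e-3 * np.random.default_rng(0).normal(size=t.size)
clean = bandpass_eeg(raw)
```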
My idea to improve the interface
EEG is the technique used to detect the brain signal and pass it to the computer program. To do this well, an electrode may have to be implanted into the gray matter, which requires invasive surgery and is appropriate only for some people who are not otherwise physically capable. But what about those who are physically normal? They simply cannot be expected to have an electrode implanted into their brain just to interact with a computer comfortably. The interface should instead be very easy to use, something like a cap or a hat: when users want to interact with the computer, they put on the hat, or interface, and start interacting immediately. This would attract many users and could overtake other technologies in the future, making interaction an easy and convenient job, unlike moving a mouse from one corner of the screen to the other and exercising your hand.
Advantages
The first advantage of designing such an interface is that it provides a convenient way of interacting for those who are physically challenged, especially in the hands, and who have a good knowledge of computers.
The second advantage of introducing such an interface is that it reduces the time required to get a task done compared with a mouse, because everything is done the moment you think.
It will also reduce stress, the use of memory for remembering things, and the frustration caused by slow responses from hardware devices.

Conclusion
Human-computer interaction has been playing a very important role in the computer industry and in the life of the common man as well. Effective interaction is of such importance that a company can suffer a huge loss if it fails to make its users or customers comfortable at its site. Not only in the web world, but interaction also plays a prominent role when dealing with software at home and in industry; ineffective design can cost money and jobs. For a company to earn profits, specifically business and software companies, it is very important and necessary to design its software and websites effectively. To design software or a website, screen design principles must be used, and in addition the designer should keep in mind that the customer should feel completely comfortable with his work. This paper has presented a way in which human-computer interaction can be made extraordinarily effective and comfortable by combining cognitive biometrics and neural technology with the subject of human-computer interaction. This combination results in a great improvement in the interaction world. Interaction through a brain-computer interface is an easier and more comfortable way to interact with the system than any other, since it deals entirely with thoughts. So it can be concluded with a statement that clearly expresses the main aim of combining human-computer interaction with cognitive biometrics: "What you see is the way you think."



Applications of Neural Networks


Chapter 1. Introduction

  1. Introduction to neural networks

1.1 What is a Neural Network?

An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.
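To make "learning by adjusting connections" concrete, here is a minimal sketch (not from the original text): a two-layer network trained on XOR with plain NumPy, where learning is nothing more than repeated small adjustments to the connection weights:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

# Weights and biases play the role of adjustable "synaptic connections".
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):                               # the learning process
    h = sigmoid(X @ W1 + b1)                         # hidden "neurons"
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)              # output error signal
    d_h = (d_out @ W2.T) * h * (1 - h)               # error propagated back
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())                          # approaches [0, 1, 1, 0]
```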

1.2 Why use neural networks?

Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an “expert” in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and answer “what if” questions.


Other advantages include:

Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.

Self-Organization: An ANN can create its own organization or representation of the information it receives during learning time.

Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.

Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

1.3 Neural networks versus conventional computers

Neural networks take a different approach to problem solving than that of conventional computers. Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known, the computer cannot solve the problem. That restricts the problem-solving capability of conventional computers to problems that we already understand and know how to solve. But computers would be so much more useful if they could do things that we don't exactly know how to do.

Neural networks process information in a way similar to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example; they cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted, or even worse, the network might function incorrectly. The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable.

On the other hand, conventional computers use a cognitive approach to problem solving; the way the problem is to be solved must be known and stated in small, unambiguous instructions. These instructions are then converted to a high-level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.

Neural networks and conventional algorithmic computers are not in competition but complement each other. There are tasks that are more suited to an algorithmic approach, like arithmetic operations, and tasks that are more suited to neural networks. Moreover, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency.

Neural networks do not perform miracles. But if used sensibly they can produce some amazing results.

 

2.1 Co Evolution of Neural Networks for Control of Pursuit & Evasion

The following MPEG movie sequences illustrate behavior generated by dynamical recurrent neural network controllers co-evolved for pursuit and evasion capabilities. From an initial population of random network designs, successful designs in each generation are selected for reproduction with recombination, mutation, and gene duplication. Selection is based on measures of how well each controller performs in a number of pursuit-evasion contests. In each contest a pursuer controller and an evader controller are pitted against each other, controlling simple "visually guided" 2-dimensional autonomous virtual agents. Both the pursuer and the evader have limited amounts of energy, which is used up in movement, so they have to evolve to move economically. Each contest results in a time-series of position and orientation data for the two agents. (A toy version of this evolutionary loop is sketched after the clip descriptions below.)

These time-series are then fed into a custom 3-D movie generator. It is important to note that, although the chase behaviors are genuine data, the 3D structures, surface physics, and shading are all purely for illustrative effect.

1. The pursuer is not very good at pursuing, and the evader is not very good at evading.

2. Pursuer chases evader, but soon runs out of energy, allowing the evader to escape.

3. Pursuer chases evader, but uses up all its energy just before the evader runs out of energy.

4. After a couple of close shaves, the pursuer finally catches the evader.
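A deliberately tiny sketch of such a co-evolutionary loop (my own toy stand-ins: a dot product replaces the pursuit-evasion simulation, and recombination and gene duplication are omitted for brevity):

```python
import random

random.seed(1)
GENES, POP = 8, 20        # each controller is just a weight vector here

def contest(pursuer, evader):
    """Toy stand-in for one pursuit-evasion contest; returns pursuer's score."""
    return sum(p * e for p, e in zip(pursuer, evader))

def mutate(genome):
    return [g + random.gauss(0, 0.1) for g in genome]

def select(pop, scores):
    """Keep the better half of the population, ranked by fitness."""
    ranked = sorted(zip(scores, pop), key=lambda t: t[0], reverse=True)
    return [g for _, g in ranked[:POP // 2]]

pursuers = [[random.uniform(-1, 1) for _ in range(GENES)] for _ in range(POP)]
evaders  = [[random.uniform(-1, 1) for _ in range(GENES)] for _ in range(POP)]

for generation in range(50):
    # Score each pursuer against every evader, and vice versa.
    p_scores = [sum(contest(p, e) for e in evaders) for p in pursuers]
    e_scores = [-sum(contest(p, e) for p in pursuers) for e in evaders]
    # Selection, then refill each population by mutation.
    pursuers, evaders = select(pursuers, p_scores), select(evaders, e_scores)
    pursuers += [mutate(random.choice(pursuers)) for _ in range(POP // 2)]
    evaders  += [mutate(random.choice(evaders)) for _ in range(POP // 2)]
```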

2.2 Learning the Distribution of Object Trajectories for Event Recognition

This research work is about the modeling of object behaviors using detailed, learnt statistical models. The techniques being developed will allow models of characteristic object behaviors to be learnt from the continuous observation of long image sequences. It is hoped that these models of characteristic behaviors will have a number of uses, particularly in automated surveillance and event recognition, allowing the surveillance problem to be approached from a lower level, without the need for high-level scene/behavioral knowledge. Other possible uses include the random generation of realistic looking object behavior for use in Virtual Reality, and long-term prediction of object behaviors to aid occlusion reasoning in object tracking. 

1. The model is learnt in an unsupervised manner by tracking objects over long image sequences, and is based on a combination of a neural network implementing Vector Quantization and a type of neuron with short-term memory capabilities (a minimal sketch of the vector quantization step follows this list).


1. Learning mode

2. Models of the trajectories of pedestrians have been generated and used to assess the typicality of new trajectories (allowing the identification of 'incidents of interest' within the scene), predict future object trajectories, and randomly generate new trajectories.


2. Predict mode
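The core of the learning mode is vector quantization: trajectory points are summarized by a small codebook of prototype vectors, and distance to the nearest prototype flags atypical motion. A minimal self-contained sketch of that idea (a plain k-means-style codebook; the short-term-memory neuron layer of the original work is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_codebook(points, k=8, iters=50):
    """Vector quantization: find k prototypes summarizing 2-D track points."""
    codebook = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        nearest = np.argmin(((points[:, None] - codebook) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(nearest == j):
                codebook[j] = points[nearest == j].mean(axis=0)
    return codebook

def atypicality(point, codebook):
    """Distance to the closest prototype; large values flag unusual motion."""
    return float(np.min(np.linalg.norm(codebook - point, axis=1)))

# Fake pedestrian positions along a noisy diagonal path, plus an outlier query.
track = np.cumsum(rng.normal(1.0, 0.2, size=(200, 2)), axis=0)
cb = learn_codebook(track)
print(atypicality(track[100], cb), atypicality(np.array([500.0, -500.0]), cb))
```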

2.3 Radiosity for Virtual Reality Systems (ROVER)

The synthesis of actual and computer generated photo-realistic images has been the aim of artists and graphic designers for many decades. Some of the most realistic images (see Graphics Gallery – simulated steel mill) were generated using radiosity techniques. Unlike ray tracing, radiosity models the actual interaction between the lights and the environment. In photo realistic Virtual Reality (VR) environments, the need for quick feedback based on user actions is crucial. It is generally recognised that traditional implementation of radiosity is computationally very expensive and therefore not feasible for use in VR systems where practical data sets are of huge complexity. In the original thesis, we introduce two new methods and several hybrid techniques to the radiosity research community on using radiosity in VR applications.

In the left column, flyby, walkthrough, and virtual space demos are first introduced. On the right, we showcase one of the two novel methods that were proposed using Neural Network technology.

Introduction to Flyby, Walkthrough and Virtual Space


Flyby


3D Walkthrough


Virtual Space

(A) ROVER Learning from Examples

Sequence 1

Sequence 5

Sequence 8

(B) ROVER Modeling

(C) ROVER Prediction

2.4 Autonomous Walker & Swimming Eel

(A) The research in this area involves combining biology, mechanical engineering and information technology in order to develop the techniques necessary to build a dynamically stable legged vehicle controlled by a neural network. This would incorporate command signals, sensory feedback and reflex circuitry in order to produce the desired movement.


Walker

(B) Simulation of the swimming lamprey (eel-like sea creature), driven by a neural network.


Swimming Lamprey

2.5 Robocup: Robot World Cup

The RoboCup Competition pits robots (real and virtual) against each other in a simulated soccer tournament. The aim of the RoboCup competition is to foster an interdisciplinary approach to robotics and agent-based AI by presenting a domain that requires large-scale cooperation and coordination in a dynamic, noisy, complex environment.

RoboCup has three different leagues to date. The Small and Middle-Size Leagues involve physical robots; the Simulation League is for virtual, synthetic teams. This work focuses on building softbots for the Simulation League.

Machine Learning for Robocup involves:

  1. The training of a player in the process of deciding whether (a) to dribble the ball; (b) to pass it on to another team-mate; or (c) to shoot into the net.

  2. The training of the goalkeeper in the process of intelligently guessing how the ball is going to be kicked by the opponents. Complexities arise when one opponent decides to pass the ball to another player instead of attempting to score.

  3. Evolution of a co-operative and perhaps unpredictable team.

Common AI methods used are variants of Neural Networks and Genetic Algorithms.


KRDL Soccer Softbots (3.1mb, AVI)

2.6 Using HMM's for Audio-to-Visual Conversion

One emerging application which exploits the correlation between audio and video is speech-driven facial animation. The goal of speech-driven facial animation is to synthesize realistic video sequences from acoustic speech. Much of the previous research has implemented this audio-to-visual conversion strategy with existing techniques such as vector quantization and neural networks. Here, they examine how this conversion process can be accomplished with hidden Markov models (HMM).
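A rough sketch of how an HMM could sit in such a pipeline (illustrative only: it uses the third-party hmmlearn package and random stand-in features rather than real acoustic data). A Gaussian HMM is fit to acoustic feature vectors, and the decoded hidden-state sequence then indexes per-state visual parameters such as mouth height and width:

```python
import numpy as np
from hmmlearn import hmm   # pip install hmmlearn

rng = np.random.default_rng(0)
audio_features = rng.normal(size=(300, 12))   # stand-in for cepstral frames

# Fit a 5-state Gaussian HMM to the acoustic observations.
model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
model.fit(audio_features)

# Decode the most likely hidden state for each frame (Viterbi path).
states = model.predict(audio_features)

# Each hidden state maps to a visual parameter set (mouth height, width).
visual_params = rng.uniform(0.0, 1.0, size=(5, 2))
animation = visual_params[states]             # one (height, width) per frame
print(animation[:5])
```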

(A) Tracking Demo: The parabolic contour is fit to each frame of the video sequence using a modified deformable template algorithm. The height between the two contours, and the width between the corners of the mouth can be extracted from the templates to form our visual parameter sets.


Tracking

(B) Morphing Demo: Another important piece of the speech-driven facial animation system is a visual synthesis module. Here we are attempting to synthesize the word “wow” from a single image. Each frame in the video sequence is morphed from the first frame shown below. The parameters used to morph these images were obtained by hand.


Morphing

2.7 Artificial Life: Galapagos

Galapagos is a fantastic and dangerous place where up and down have no meaning, where rivers of iridescent acid and high-energy laser mines are beautiful but deadly artifacts of some other time. Through spatially twisted puzzles and bewildering cyber-landscapes, the artificial creature called Mendel struggles to survive, and you must help him.

Mendel is a synthetic organism that can sense infrared radiation and tactile stimulus. His mind is an advanced adaptive controller featuring Non-stationary Entropic Reduction Mapping — a new form of artificial life technology developed by Anark. He can learn like your dog, he can adapt to hostile environments like a cockroach, but he can’t solve the puzzles that prevent his escape from Galapagos.

Galapagos features rich, 3D texture-mapped worlds, with continuous-motion graphics and 6 degrees of freedom. Dramatic camera movement and incredible lighting effects make your passage through Galapagos breathtaking. Explosions and other chilling effects will make you fear for your synthetic friend. Active panning 3D stereo sound will draw you into the exotic worlds of Galapagos.


Galapagos

 2.8 Speechreading (Lipreading)

As part of the research program Neuroinformatik the IPVR develops a neural speechreading system as part of a user interface for a workstation. The three main parts of the system include a face tracker (done by Marco Sommerau), lip modeling and speech processing (done by Michael Vogt) and the development and application of SNNS for neural network training (done by Günter Mamier).

Automatic speechreading is based on a robust lip image analysis. In this approach, no special illumination or lip make-up is used. The analysis is based on true color video images. The system allows for realtime tracking and storage of the lip region and robust off-line lip model matching. The proposed model is based on cubic outline curves. A neural classifier detects visibility of teeth edges and other attributes. At this stage of the approach the edge between the closed lips is automatically modeled if applicable, based on a neural network’s decision.

To achieve high flexibility during lip-model development, a model description language has been defined and implemented. The language allows the definition of edge models (in general) based on knots and edge functions. Inner model forces stabilize the overall model shape. User defined image processing functions may be applied along the model edges. These functions and the inner forces contribute to an overall energy function. Adaptation of the model is done by gradient descent or simulated annealing like algorithms. The figure shows one configuration of the lip model, consisting of an upper lip edge and a lower lip edge. The model edges are defined by Bezier-functions. Outer control knots stabilize the position of the corners of the mouth.
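To illustrate the kind of edge model described (a sketch under my own simplifying assumptions, not the IPVR code), here is a lip edge as a cubic Bezier curve whose shape is set by four control knots, with the mouth corners pinned by the outer knots:

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample a cubic Bezier curve defined by four control knots."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Outer knots pin the corners of the mouth; inner knots bow the edges.
corner_l, corner_r = np.array([0.0, 0.0]), np.array([4.0, 0.0])
upper = cubic_bezier(corner_l, np.array([1.3, 1.0]),  np.array([2.7, 1.0]),  corner_r)
lower = cubic_bezier(corner_l, np.array([1.3, -0.8]), np.array([2.7, -0.8]), corner_r)

# One usable visual parameter: height between the two contours at mid-curve.
print(upper[25, 1] - lower[25, 1])
```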

Fig 2.8.1 The model interpreter enables a permanent measurement of model knot positions and color blends along model edges during adaptation to an utterance. The resulting parameters may be used for speech recognition tasks in further steps.


Lipread

2.9 Detection and Tracking of Moving Targets

The moving target detection and track methods here are “track before detect” methods. They correlate sensor data versus time and location, based on the nature of actual tracks. The track statistics are “learned” based on artificial neural network (ANN) training with prior real or simulated data. Effects of different clutter backgrounds are partially compensated based on space-time-adaptive processing of the sensor inputs, and further compensated based on the ANN training. Specific processing structures are adapted to the target track statistics and sensor characteristics of interest. Fusion of data over multiple wavelengths and sensors is also supported.

Compared to conventional fixed matched filter techniques, these methods have been shown to reduce false alarm rates by up to a factor of 1000 based on simulated SBIRS data for very weak ICBM targets against cloud and nuclear backgrounds, with photon, quantization, and thermal noise, and sensor jitter included. Examples of the backgrounds, and processing results, are given below.

The methods are designed to overcome the weaknesses of other advanced track-before-detect methods, such as 3+-D (space, time, etc.) matched filtering, dynamic programming (DP), and multi-hypothesis tracking (MHT). Loosely speaking, 3+-D matched filtering requires too many filters in practice for long-term track correlation; DP cannot realistically exploit the non-Markovian nature of real tracks, and strong targets mask out weak targets; and MHT cannot support the low pre-detection thresholds required for very weak targets in high clutter. They have developed and tested versions of the above (and other) methods in their research, as well as Kalman-filter probabilistic data association (KF/PDA) methods, which they use for post-detection tracking.

Space-time-adaptive methods are used to deal with correlated, non-stationary, non-Gaussian clutter, followed by a multi-stage filter sequence and soft-thresholding units that combine current and prior sensor data, plus feed back of prior outputs, to estimate the probability of target presence. The details are optimized by adaptive “training” over very large data sets, and special methods are used to maximize the efficiency of this training.
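In effect this describes a recurrent estimator: the current measurement, the prior measurement, and the fed-back prior output are combined and passed through a soft threshold to yield a probability of target presence. A schematic toy version (the weights are my own illustrative choices, not the trained system):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def target_probability(frames, w_now=2.0, w_prev=1.0, w_fb=1.5, bias=-3.0):
    """Per-frame probability of target presence at a single pixel.

    Combines the current measurement, the previous measurement, and the
    fed-back previous output, then applies a soft threshold (sigmoid).
    """
    p_prev, x_prev, probs = 0.0, 0.0, []
    for x in frames:
        p = sigmoid(w_now * x + w_prev * x_prev + w_fb * p_prev + bias)
        probs.append(p)
        p_prev, x_prev = p, x
    return np.array(probs)

# Noise frames with a weak persistent target appearing halfway through.
rng = np.random.default_rng(0)
frames = rng.normal(0.0, 0.3, 60)
frames[30:] += 1.0                       # weak target onset
print(target_probability(frames)[[10, 45]].round(2))
```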


Figure 2.9 (a) Raw input backgrounds with weak targets included,
(b) Detected target sequence at the ANN processing output,
post-detection tracking not included. Video Clip

2.10 Real-time Target Identification for Security Applications

The system localizes and tracks people's faces as they move through a scene. It integrates the following techniques:

  • Motion detection

  • Tracking people based upon motion

  • Tracking faces using an appearance model

Faces are tracked robustly by integrating motion and model-based tracking.

(A) Tracking in low resolution and poor lighting conditions


Jon

(B) Tracking two people simultaneously: lock is maintained on the faces despite unreliable motion-based body tracking.


Double Tracking

2.11 Facial Animation

Facial animations were created using hierarchical B-splines as the underlying surface representation. Neural networks could be used to learn the variations in facial expression for animated sequences.

The (mask) model was created in SoftImage, and is an early prototype for the character "Mouse" in the YTV/ABC television series "ReBoot" (they do not use hierarchical splines for ReBoot!). The original standard bicubic B-spline was imported into the "Dragon" editor and a hierarchy automatically constructed. The surface was attached to a jaw to allow it to open and close the mouth. Groups of control vertices were then moved around to create various facial expressions. Three of these expressions were chosen as key shapes, the spline surface was exported back to SoftImage, and the key shapes were interpolated to create the final animation (a sketch of this interpolation step follows the clips below).


Mask


Haida
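The final interpolation step is simple enough to sketch directly (illustrative data; real key shapes would be grids of hierarchical B-spline control vertices):

```python
import numpy as np

# Three key shapes: each row of control-vertex positions is one expression.
neutral = np.zeros((4, 2))
smile   = np.array([[0, 0], [1, 1],  [3, 1],  [4, 0]], dtype=float)
frown   = np.array([[0, 0], [1, -1], [3, -1], [4, 0]], dtype=float)

def blend(shape_a, shape_b, t):
    """Interpolate two key shapes; t runs from 0 (shape_a) to 1 (shape_b)."""
    return (1 - t) * shape_a + t * shape_b

# A tiny animation: neutral -> smile -> frown, ten frames per transition.
frames = [blend(neutral, smile, t) for t in np.linspace(0, 1, 10)]
frames += [blend(smile, frown, t) for t in np.linspace(0, 1, 10)]
print(frames[5])
```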

2.12 Artificial Life for Graphics, Animation, Multimedia, and Virtual Reality

Some graphics researchers have begun to explore a new frontier–a world of objects of enormously greater complexity than is typically accessible through physical modeling alone–objects that are alive. The modeling and simulation of living systems for computer graphics resonates with the burgeoning field of scientific inquiry called Artificial Life. Conceptually, artificial life transcends the traditional boundaries of computer science and biological science. The natural synergy between computer graphics and artificial life can be potentially beneficial to both disciplines. As some of the demos here demonstrate, potential is becoming fulfillment.

The demos demonstrate and elucidate new models that realistically emulate a broad variety of living things–both plants and animals–from lower animals all the way up the evolutionary ladder to humans. Typically, these models inhabit virtual worlds in which they are subject to physical laws. Consequently, they often make use of physics-based modeling techniques. More significantly, however, they must also simulate many of the natural processes that uniquely characterize living systems–such as birth and death, growth, natural selection, evolution, perception, locomotion, manipulation, adaptive behavior, intelligence, and learning. The challenge is to develop sophisticated graphics models that are self-creating, self-evolving, self-controlling, and/or self-animating by simulating the natural mechanisms fundamental to life.


A.Dog


Evolved Virtual Creatures


Sensor-Based Autonomous Creatures


A. Fish

2.13 Creatures: The World's Most Advanced Artificial Life!

Creatures is the most entertaining computer game you'll ever play that offers nothing to shoot, no puzzles to solve, and no difficult controls to master. And yet it is mesmerizing entertainment. You raise, teach, breed, and love computer pets that are really alive, so alive that if they are not taken care of, they will die. Creatures features the most advanced, genuine Artificial Life software ever developed in a commercial product, technology that has blown the imaginations of scientists world-wide. This is a look into the future, where new species of life emerge from ordinary home and office PCs.


Creatures

 


OTC Flow


  1. SALES ORDER

Company Code: 4700

Sales Document type: ZOR

Sales Area: 4700/10/10

Screen Shot 1:

Click Enter

Step 2:

Header Level:

  • Enter Sold to party: 1000991
  • Enter Ship to party: 1000991
  • Enter PO Number: Test
  • Enter Payment term: 0001

Item Level:

  • Enter Materials in Item Level: 1000309
  • Enter order Qty: 1
  • Enter Plant: 4702
  • Click Enter:

Refer Screen Shot:

Go to Item Billing

Enter the INCO Terms in the Item Levels: CFR (COST AND FREIGHT)

Click Save

  2. DELIVERY

Delivery Document type: ZLF

T – Code VL01N

  • Enter Shipping Point: 4702
  • Enter Sales document number:
  • Note: Enter Delivery date: (Delivery date should be taken from Schedule lines in Sales order) Refer Screen Shot below:

Click Enter

Enter Picking Qty & Storage Location:

Click on Post Goods Issue:

  3. BILLING

Billing Document type: F2

T – Code – VF01

Enter Delivery Document number:

Refer Screen Shot Below:

Click Enter

Refer Screen Shot below:

Click Save

Billing Document should be created (Refer Screen Shot below)

Display the Billing document: T – Code – VF03

Enter the Billing Document number from the above Screen shot:

Click on Accounting Document tab:

Refer the Screen Shot:
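For comparison with the manual T-code walkthrough above, the first step can in principle be scripted over RFC. The sketch below uses the pyrfc package and the standard BAPI_SALESORDER_CREATEFROMDAT2 function module; the connection details are placeholders, the field values mirror the walkthrough, and the whole thing should be read as an illustrative assumption rather than a tested script:

```python
from decimal import Decimal
from pyrfc import Connection   # requires the SAP NW RFC SDK

# Placeholder connection parameters for a real SAP system.
conn = Connection(ashost="sap.example.com", sysnr="00",
                  client="100", user="DEMO", passwd="secret")

result = conn.call(
    "BAPI_SALESORDER_CREATEFROMDAT2",
    ORDER_HEADER_IN={
        "DOC_TYPE":   "ZOR",    # sales document type from the walkthrough
        "SALES_ORG":  "4700",   # sales area 4700/10/10
        "DISTR_CHAN": "10",
        "DIVISION":   "10",
        "PURCH_NO_C": "Test",   # PO number
        "PMNTTRMS":   "0001",   # payment term
        "INCOTERMS1": "CFR",    # cost and freight
    },
    ORDER_ITEMS_IN=[{"ITM_NUMBER": "000010", "MATERIAL": "1000309",
                     "PLANT": "4702", "TARGET_QTY": Decimal("1")}],
    ORDER_PARTNERS=[{"PARTN_ROLE": "AG", "PARTN_NUMB": "1000991"},   # sold-to
                    {"PARTN_ROLE": "WE", "PARTN_NUMB": "1000991"}],  # ship-to
)
print(result["SALESDOCUMENT"], result["RETURN"])
```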

Order To Cash (OTC)

Order to cash normally refers to the enterprise resource planning (ERP) process of taking customer
sales orders (direct from the customer and retail) via different sales channels, such as email,
internet, a sales person, fax, or some other means like EDI, then fulfilling the order, shipping, and
logistics, then generating an invoice, and finally collecting payment for that invoice and posting the receipt.
If we consider the entire flow, it can be further categorized into the following seven sub-processes
(sketched in code after the list):
• Customer presence
• Order entry (creation of order/booking of order )
• Order fulfillment (physical & digital fulfillment)
• Distribution
• Invoicing
• Customer payments/collection
• Receipt
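A minimal sketch (my own modeling choice, not part of any SAP standard) that treats these sub-processes as an ordered state machine through which every order moves:

```python
from enum import Enum, auto

class OtcStage(Enum):
    """The seven order-to-cash sub-processes, in order."""
    CUSTOMER_PRESENCE = auto()
    ORDER_ENTRY       = auto()
    ORDER_FULFILLMENT = auto()
    DISTRIBUTION      = auto()
    INVOICING         = auto()
    COLLECTION        = auto()
    RECEIPT           = auto()

def advance(stage):
    """Move an order to the next sub-process; RECEIPT is terminal."""
    members = list(OtcStage)
    i = members.index(stage)
    return members[min(i + 1, len(members) - 1)]

stage = OtcStage.CUSTOMER_PRESENCE
while stage is not OtcStage.RECEIPT:
    stage = advance(stage)
    print(stage.name)
```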
Purpose
The SAP Best Practices scenario for Order to Cash supports the entire process chain for a typical
sales process with a customer. The business process encompasses all steps from creating an order,
and optionally, based on a quotation, creation of a delivery, to the billing procedure. During the
sales order generation a credit check for the customer is executed and subsequent handling of
blocked sales documents is demonstrated. An availability check is done followed by product
allocation. Product allocations represent an ordered allocation of production for certain periods, so
that a partial quantity can be delivered if not enough stock is available for further orders.
Additionally, Service Charges are entered manually in the sales order, depending on the quantity of
goods ordered. In delivery processing the delivery is created, the goods are picked, kitted, packed,
shipped and the goods issue is posted. In the billing process that follows, an invoice is created and
released to financial accounting. To complete the process, the customer payment is posted to clear
the accounts receivable.
This scenario also includes additional presales support activities in addition to sales order
processing, delivery, billing and payment. The process begins with a sales inquiry captured in a
sales activity document in sales support. The inquiry results in a sale and the process shows how the
initial sales activity can be linked to a sales document that is created in the subsequent order
processing. At the time of order creation, dynamic product proposals, material substitutions, free
goods and material exclusions are demonstrated. At delivery processing the delivery is created, then
picked, and goods issue is posted. In the billing process that follows, an invoice is created and
released to financial accounting. Incoming payments are documented in payment processing and
then posted in financials.
Process Flow
• Sales Quotation
• Standard Order
• Shipping
• Delivery
• Picking
• Posting Goods Issue
• Warehouse Picking Execution
• Packing
• Posting Goods Issue
• Billing
• Payment of Customer
This process flow encapsulates a variety of smaller business processes from order entry to cash
receipt. It pulls resources from many different company departments.
Improving the order-to-cash process is a strategic priority for many companies. Typical
improvement objectives include fulfillment performance (order accuracy, shipment accuracy, and
on-time shipping) and financial performance (reduction of receivables, collection management
costs, and Days of Sales Outstanding, or DSO).
The multi-step order-to-cash process originates with a customer order and terminates once the
customer pays for the goods or services received and the company applies the cash.
Five areas are affected by the order-to-cash cycle:
1. Customer,
2. Order entry,
3. Order fulfillment,
4. Distribution, and
5. Finance and accounting.
Since most companies are functionally managed, the order-to-cash process usually touches multiple
departments, companies, and back-end enterprise applications. Therefore, it is important for each
department to complete its part of the overall process error-free and transfer correct information
across functional boundaries.
Automated Sales Order Processing for Order-to-Cash Performance with ERP Systems
Overview
Business performance depends on how well a company manages its internal processes. Companies
with effective business process management in place are able to analyze key performance indicators
to monitor efficiency of day-to-day activities and employees against operational targets.
Order-to-cash is a generic term used to encompass the business cycle that starts with reception of a
customer sales order and ends with collection of accounts receivable generated in the sale of the
final product. There are several sub-processes within the order-to-cash cycle, including: receiving
orders, entering sales orders, approving sales orders, fulfilling orders, billing for the orders and
collecting payment.
Many companies have implemented enterprise resource planning (ERP) applications to standardize
enterprise operations and support business process management strategies. ERP solutions empower
companies to automate many business processes formerly done by hand. But to achieve full return
on investment in ERP solutions, businesses need to automate the documents that drive business
processes.
Order-to-Cash Service Platforms
Businesses are investing significantly in software that integrates various applications and processes
onto a single service platform. This way every participant in the composite process has the same
view of every action and event.
Companies that implement the order-to-cash process internally often break the process down into
smaller pieces (order-to-ship, ship-to-delivery, delivery-to-invoice, and invoice-to-payment).
UPS, FedEx, and others are increasingly offering order-to-cash service platforms to small and
midsize businesses. The shippers handle all parts of the end-to-end process: order flow, fulfillment,
and payment.
Technical Capabilities of Order-to-Cash Service Platforms
Every robust order-to-cash service platform provides the comprehensive process management
capabilities needed to automate the underlying order-to-cash business processes. The combination
of robust business process management, componentization of processes and systems, and native
support of Web Services standards is at the core of the service platform. Some of the capabilities
include:
• Unified process automation and human workflow,
• Automated, intelligent management of data, process exceptions, and errors,
• State management to track, store, and intelligently act on complete status of each step (or
state) of a multi-step transaction,
• Transaction rollbacks and compensating transactions,
• Time-based exception management, and
• Adaptable business agents that monitor real-time metrics and adjust themselves
automatically according to predefined rules.
Hypothetical test scenario: Order-to-cash scenario
Total time to execute manually including recording test results: 25 hours.
Frequency: Executed five times during the year to support major system releases.
Stability: Process is subject to few minor modifications per year (two minor modifications). Fairly
static.
Preparation: On average, 15 hours are spent manually rehearsing the test scenario before it is fully
executed.
Number of assigned testers: Three testers (having expertise in project systems [PS] module, finance
[FI], and sales and distribution [SD] module).
Given these metrics, it is possible to estimate with some margin of error that between preparation
and execution of the manual test case (including manually recording test results) approximately 200
man-hours per year are spent executing the order-to-cash scenario.
This is not including time needed to manually modify the documentation for the test case when the
order-to-cash scenario is subject to configuration changes, or the time needed to coordinate the
multiple resources that are necessary for executing the test scenario. With an automated framework
in place, one can review the following statistics and metrics to judge what automating the
order-to-cash scenario would take and whether doing so is cost-effective:
Total hours needed to automate test case (including functional support): 80 hours.
Time needed to execute process with automated test tool (including automatic test results [logs]
generated by automated test tools): Two hours.
Number of resources needed to execute automated test case: One at most, since automated test case
can be scheduled to run unattended.
Preparation time needed to execute automated test case: Five hours.
Under the hypothetical scenario of automating from scratch and executing the automated test case
for order-to-cash, it is estimated that for the first year it would take 115 man-hours to execute the
automated test case. For subsequent years it would take 35 man-hours, since the automated test case
has already been constructed, whereas executing the process manually is a fixed 200 man-hours per
year, subject to the availability of the testing resources and their level of expertise. Based on certain
assumptions, this analysis points objectively to a case in favor of automation. With a similar
analysis, projects can employ an objective approach for deciding which scenarios to automate.
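The arithmetic above is easy to check; a short script using only the numbers stated in the scenario reproduces the 200-, 115-, and 35-hour figures and shows the break-even point falling inside the first year:

```python
RUNS_PER_YEAR = 5

# Manual execution: 15 h rehearsal + 25 h execution per run.
manual_per_year = RUNS_PER_YEAR * (15 + 25)      # = 200 man-hours

# Automated: 80 h one-time build, then 5 h prep + 2 h execution per run.
build_once = 80
auto_per_year = RUNS_PER_YEAR * (5 + 2)          # = 35 man-hours
first_year = build_once + auto_per_year          # = 115 man-hours

print(manual_per_year, first_year, auto_per_year)

# Cumulative comparison: automation already pays off in year one.
for year in range(1, 4):
    manual = manual_per_year * year
    automated = build_once + auto_per_year * year
    print(f"year {year}: manual={manual}h automated={automated}h")
```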

Status Quo : Review Of Some Testing Practices

Existing Methodology 
When reviewing the status quo, companies implementing SAP need to assess what software
methodology or approach guides the work products and deliverables of the SAP resources,
including the SAP testing team.
Large SAP system integrators such as Deloitte Consulting and IBM offer methodologies and
implementation guides such as Thread Manager and AscendantTM for either upgrading or initial
installations of SAP. SAP itself offers the SAP Roadmap methodology embedded within the
Solution Manager platform. Recognized bodies such as IEEE, SEI, and the U.S. Department of Defense (DoD) 5000 series directives for life-cycle management and acquisition, to name a few,
also provide software methodologies for implementing an ERP/Customer Relationship
Management (CRM) solution such as SAP R/3.
Corporations that are missing a recognized methodology for implementing SAP can rely on
software approaches that conform to the waterfall, spiral, and evolutionary models. These models
offer different approaches for implementing software that include prototyping, dealing with
programs that have a large scope, or unstable requirements. Depending on the size of the
corporation implementing SAP, it is possible that the corporation already has other large software
initiatives and a successful life cycle for doing so that can be leveraged for implementing SAP.
A successful software methodology, whether created in-house or adopted from another body, needs
to have templates, accelerators, and white papers for testing ERP applications. Methodologies
specifically designed for building software from scratch or from the ground up may not be suitable
for implementing an out-of-the-box solution such as SAP and thus not offer any relevant guidance
for testing SAP.
The project and test managers must pay special attention to the project's methodologies and to how
existing testing activities and tasks conform and align to them. If no formal methodology exists
within the project, then efforts must be made to ensure that the testing approach and test plans are
adequate for the project and help fulfill the testing criteria.
Fig. Various ways to test software
Drawbacks to manual testing
While manual testing may be the best option for a high percentage of projects, it is not without its
shortcomings.
For example:
• Manual tests can simply take too long—testers must tediously document each step of a test case
and manually execute each test, reproduce defects, and so on.
• The dramatic increase in complexity of today's computing environments is amplifying test
coverage requirements, creating more pressure to move to automated testing.
• Corporate globalization and geographically dispersed teams create a need for standardized
testing processes, which manual testing does not readily facilitate.
• When there is no automated process for testing, there is typically no automated way to keep
documentation synchronized with the testing process; each element of the test plan is a separate
entity and every change must be managed and maintained individually.
• Manual tests are subject to a higher risk of mistakes and oversights than automated tests.
The disadvantages of record and playback only become apparent as you begin to use the tool over
time. Capture replay always looks very impressive when first demonstrated, but is not a good basis
for a productive long-term test automation regime. The script, as recorded, may not be very
readable to someone picking it up afterwards. The only value of an automated test is in its reuse. A
raw recorded script explains nothing about what is being tested or what the purpose of the test is.
Such commentary has to be inserted by the tester, either as the recording is made (not all tools allow
this) or by editing the script after recording has finished. Without this information any maintenance
task is likely to be difficult at best.
A raw recorded script is also tied very tightly to the specifics of what was recorded. Depending on
the tool, it may be bound to objects on the screen, specific character strings, or even screen bitmap
positions. If the software changes - correction: when the software changes - the original script will
no longer work correctly if anything to which it is tightly bound has changed. Often the effort
involved in updating the script itself is much greater than re-recording the script while running the
test manually again. This usually does not give any test automation benefit. For example, the
values of the inputs recorded are the exact and detailed values as entered, but these are now 'hard-
coded' into the script. The recording is simply of actions and test inputs. But usually the
reason for running a test is to look at the test outcome to see whether the application does the right
thing.
Simply recording test inputs does not include any verification of the results. Verification must be
added in to any recorded script in order to make it a test. Depending on the tool, it may be possible
to do this additional work during recording (but it takes additional time and effort) or during the
first replay, otherwise it will be necessary to edit the recorded script.
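To make the point concrete, here is the difference in miniature (a sketch using Selenium; the URL and element names are purely illustrative). The first block is what a raw recorded script amounts to, actions and hard-coded inputs with no checks; the verification added afterwards is what turns it into a test:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://shop.example.com/login")    # illustrative URL

# --- Raw recorded script: actions and hard-coded inputs only, no checks ---
driver.find_element(By.NAME, "user").send_keys("jsmith")
driver.find_element(By.NAME, "password").send_keys("secret")
driver.find_element(By.ID, "login-button").click()

# --- Verification added by the tester: this is what makes it a test ------
banner = driver.find_element(By.ID, "welcome-banner").text
assert "Welcome, jsmith" in banner, f"login failed, banner was: {banner!r}"

driver.quit()
```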
Manual tests are often labor intensive, time consuming, inconsistent, boring, and lengthy, and
comparison of test results is tedious and error prone. At first glance, these problems look ideal for
test automation, and indeed that may be true. However, it is not necessarily the only solution to
these problems.
The first question to ask is whether these manual tests actually give value for money. If they are too
lengthy, weeding out ineffective or redundant tests could shorten them. This may enable them to be
run manually within a shorter time frame. If the tests take too much elapsed time to run manually,
perhaps recruiting more testers would help. If the tests are very labor intensive, perhaps they could
be redesigned to require less effort per test, so the manual testing would be more productive. For
example, a test may require testers to sit at different machines in different rooms. If all of the test
machines were moved into one room, one tester may be able to oversee two or more machines at
the same time.
If the test input or comparison of results is error prone, perhaps the test procedures are unclear.
Have the testers been trained in how to input, execute, and analyze the tests correctly? Are they
aware of the importance of the correctness of test results?
Comparison of test results is probably one of the best uses of a computer. Most test execution tools
include some comparison facilities. However, most operating systems also have comparison utilities
that can be used to good effect, whether or not you have a comparison tool.
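For instance, Python's standard library already ships such a utility; a few lines of difflib are enough to compare actual test output against an expected baseline:

```python
import difflib

expected = ["order 4711 created", "delivery posted", "invoice released"]
actual   = ["order 4711 created", "delivery posted", "invoice blocked"]

# Produce a unified diff between the baseline and the actual test output.
diff = difflib.unified_diff(expected, actual,
                            fromfile="expected", tofile="actual", lineterm="")
print("\n".join(diff))   # empty output would mean the results match
```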
If executing the current tests is boring, this probably does indicate a need for tool support of some
kind. Things which people find boring are often done better by a computer.
Setting up test data or test cases is repetitive and 'mechanical'; the testers find it boring and make
too many 'simple' errors. This problem is a good candidate for a test execution tool. On the other
hand, why are test cases and test data being set up in this way? It may be better to organize the test
data into pre-packaged sets which could be called upon when needed, rather than setting them up
every time, particularly if this is an error-prone process.
Test documentation serves different purposes. Test plans contain management information about
the testing process as it should be carried out. Test scripts contain information about the detail of
tests to be run, such as what the inputs and test data are. Test reports contain information about the
progress of tests that have been run. An area where test execution tools can help overcome
documentation problems is where inadequate records are kept of what tests have been executed. If
careful records are not kept, tests may be repeated or omitted, or you will not know whether or not
tests have been run. The test log does provide an audit trail (although it may not necessarily be easy
to find the information you want from the raw logs produced by the tool).
An automated solution often 'looks better' and may be easier to authorize expenditure for than
addressing the more fundamental problems of the testing process itself. It is important to realize that
the tool will not correct a poor process without additional attention being paid to it. It is possible to
improve testing practices alongside implementing the tool, but it does require conscious effort.
The right time is:
• when there are no major organizational upheavals or panics in progress;
• when one person has responsibility for choosing and implementing the tool(s);
• when people are dissatisfied with the current state of testing practice;
• when there is commitment from top management to authorize and support the tooling-up effort.
If one or more of these conditions do not apply to your organization, it does not mean that you
should not attempt to introduce test automation. It merely implies that doing so may be somewhat
more difficult.