This PDF file contains the front matter associated with SPIE Proceedings Volume 7348, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
The output of a sensor network intended to detect events or objects generally comprises evidentiary reports of features in
the environment that may correspond to those phenomena. Signals from multiple sensors are commonly fused to
maximize detection fidelity through, for example, synergy between different detection modes or simple
confirmation. We have previously demonstrated the ability to calculate the meaning of a location report as a probability
distribution over potential ground truths by using a stochastic process algebraic model compiled to a discrete-state,
continuous-time Markov chain, and performing a transient analysis which resembles the process of parameterizing a
Bayesian network. We introduce an approach to representing temporal fusion of multiple heterogeneous sensor
detections with different modalities and timing characteristics using a stochastic process algebra. This facilitates analysis
of probabilistic properties of the system, and inclusion of those properties into larger models. The formal models are
translated into continuous time Markov chains, which provide an important trade-off between the approximation of
timing information against complexity of analysis. This is vital to the investigation of analytic computation in real-world
problems. We illustrate this with an example detection-oriented sensing service model emphasizing the impact of timing.
Detection probability and confidence are essential aspects of the quality of information delivered by a sensing service.
The present work is part of an effort to develop a formal event detection calculus that captures the essence of sensor
information relating to events, such that features and dependencies can be exploited in re-usable, extendible
compositional models.
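The transient analysis described above can be sketched in a few lines: given the generator matrix of a discrete-state, continuous-time Markov chain, the state distribution at time t follows from uniformization. The two-state detection model and its rate below are invented for illustration, not taken from the paper's models.

```python
import math

def ctmc_transient(Q, p0, t, tol=1e-10):
    """Transient distribution p(t) = p0 * exp(Q t) via uniformization.

    Q  -- generator matrix (list of lists), rows sum to 0
    p0 -- initial probability distribution over states
    """
    n = len(Q)
    # Uniformization rate: at least the largest exit rate.
    lam = max(-Q[i][i] for i in range(n)) or 1.0
    # Discrete-time kernel P = I + Q/lam
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    term = list(p0)              # p0 * P^k, starting at k = 0
    weight = math.exp(-lam * t)  # Poisson(lam*t) weight for k = 0
    result = [weight * x for x in term]
    k, acc = 0, weight
    while acc < 1.0 - tol and k < 10000:
        k += 1
        term = [sum(term[i] * P[i][j] for i in range(n)) for j in range(n)]
        weight *= lam * t / k
        acc += weight
        result = [r + weight * x for r, x in zip(result, term)]
    return result

# Two-state sketch: 0 = "no detection yet", 1 = "detected" (absorbing),
# with a hypothetical detection rate of 0.5 events per unit time.
Q = [[-0.5, 0.5],
     [ 0.0, 0.0]]
p = ctmc_transient(Q, [1.0, 0.0], t=2.0)
# p[1] approaches 1 - exp(-0.5 * 2.0) ≈ 0.632
```

Querying p at successive times is what resembles parameterizing a Bayesian network: each transient solve yields the conditional probability of a ground truth given elapsed time.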
Thales-Raytheon Systems' Firefinder PC Simulation (PCS) tool allows a rapid simulated evaluation of Firefinder radar
performance from a personal desktop computer. Firefinder radars are designed to track hostile rocket, artillery and
mortar projectiles in order to accurately estimate weapon ground location. The Firefinder tactical code is used within
PCS. This design provides a low-risk path to rapid prototyping and evaluation of candidate software changes to the
Firefinder. Design changes which perform well in PCS
testing require minimal system-level checkout before being checked into the tactical software baseline. The PCS tool
contains a simulation engine which reads program control information from input data files. The PCS tool also generates
and maintains simulated targets and clutter, simulates the radar signal processing function, performs Monte-Carlo
"batch" processing, produces complex target trajectories internally or from an input text file and creates simulation data
recording files identical in format to those created by the actual radar. This paper summarizes the capabilities of the
Firefinder PCS and the addition of false-location reduction features to the simulation.
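The Monte-Carlo "batch" processing mentioned above amounts to repeating a randomized trial many times and aggregating the outcome statistics. A minimal sketch, with an invented per-shot false-report probability standing in for the actual radar model:

```python
import random

def run_trial(rng, p_false=0.05, shots=20):
    """One hypothetical radar trial: count false location reports
    among `shots` detection opportunities (rates are invented)."""
    return sum(1 for _ in range(shots) if rng.random() < p_false)

def monte_carlo_batch(n_runs=1000, seed=1):
    """Average false-report count over a batch of independent trials,
    mirroring the tool's batch-processing loop."""
    rng = random.Random(seed)
    return sum(run_trial(rng) for _ in range(n_runs)) / n_runs

mean_false = monte_carlo_batch()  # expectation is shots * p_false = 1.0
```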
The Irma synthetic signature prediction code is being developed by the Munitions Directorate of the Air Force Research
Laboratory (AFRL/RWGG) to facilitate the research and development of advanced weapon seekers. Irma began as a
high-resolution, physics-based infrared (IR) target and background signature model for tactical weapon applications and
has grown to include: a laser (or active) channel (1990), improved scene generator to support correlated frame-to-frame
imagery (1992), and a passive IR/millimeter wave (MMW) channel for a co-registered active/passive IR/MMW model
(1994). Irma version 5.0 was released in 2000 and encompassed several upgrades to both the physical models and
software; host support was expanded to Windows, Linux, Solaris, and SGI Irix platforms. In 2005, version 5.1 was
released after extensive verification and validation of an upgraded and reengineered ladar channel. In 2007, version 5.2
was released with a reengineered passive channel. The current Irma development effort is focused on the reengineering
of the radar channel with an expected release of Irma 5.3 in 2009. This paper reports on two of the radar modes
expected to be supported in the radar channel: the fuze mode and the spotlight synthetic aperture radar (SAR) mode.
One of the key aspects for the design of a next generation weapon system is the need to operate in cluttered and complex
urban environments. Simulation systems rely on accurate representation of these environments and require automated
software tools to construct the underlying 3D geometry and associated spectral and material properties that are then
formatted for various objective seeker simulation systems. Under an Air Force Small Business Innovative Research
(SBIR) contract, we have developed an automated process to generate 3D urban environments with user defined
properties. These environments can be composed from a wide variety of source materials, including vector source data,
pre-existing 3D models, and digital elevation models, and rapidly organized into a geo-specific visual simulation
database. This intermediate representation can be easily inspected in the visible spectrum for content and organization
and interactively queried for accuracy. Once the database contains the required contents, it can then be exported into
specific synthetic scene generation runtime formats, preserving the relationship between geometry and material
properties. To date, an exporter for the Irma simulation system (developed and maintained by AFRL/Eglin) has been
created, and a second exporter, to the Real Time Composite Hardbody and Missile Plume (CHAMP) simulation system for
real-time use, is currently being developed. This process supports significantly more complex target environments than
previous approaches to database generation. In this paper we describe the capabilities for content creation for advanced
seeker processing algorithms simulation and sensor stimulation, including the overall database compilation process and
sample databases produced and exported for the Irma runtime system. We also discuss the addition of object dynamics
and viewer dynamics within the visual simulation into the Irma runtime environment.
An integrated array computational imaging system, dubbed PERIODIC, is presented which is capable of exploiting a
diverse variety of optical information including sub-pixel displacements, phase, polarization, intensity, and
wavelength. Several applications of this technology are presented, including digital super-resolution, enhanced
dynamic range, and multi-spectral imaging. Other applications include polarization-based dehazing, extended depth of
field and 3D imaging. The optical hardware system and software algorithms are described, and sample results are
shown.
Digital lock-in algorithms are routinely used to detect and measure very small AC signals acquired through digital signal
analysis equipment, even if the signal may be obscured by coherent or incoherent noise sources many thousands of times
larger. Unfortunately, these algorithms are iterative, normally quite complicated in functionality, and utilize digital
filters which require large time constants if operating at low frequencies. We have developed passive millimeter-wave
imaging systems for defense, security, and safety applications. Passive millimeter-wave imaging
is challenging in that the amount of energy measured from a scene at these wavelengths is 10^8 times smaller than
energies emitted from terrestrial objects when viewed in the infrared region. As a result, the small measured signal is
buried deep in the noise floor. Our imaging systems rely on single pixel rasterizing, where it is desired that the
computational time per pixel be small and fixed to avoid spatial resolution problems, but iterative algorithms create
spatial registration problems and large time-constant digital filters result in greater than desired total scan times due to
longer pixel acquisition periods. A digital lock-in algorithm utilizing a closed form least squares method was developed
to resolve these issues. The result was the elimination of digital filtering with their time constants and the replacement
of iterative routines with a fixed-time computational model in which the per-pixel overhead was shortened by more than
a factor of 10^3. This novel algorithmic approach is portable and can be used wherever digital lock-in is currently utilized.
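A minimal sketch of the closed-form least-squares idea (the paper's actual algorithm is not reproduced here; the formulation below is the textbook quadrature form, which is the least-squares solution when the record spans an integer number of reference periods):

```python
import math, random

def lockin_amplitude(samples, fs, f_ref):
    """Closed-form least-squares estimate of the tone amplitude at f_ref.

    With an integer number of reference periods in the record, the
    least-squares fit of a*cos(wt) + b*sin(wt) has the closed-form
    solution below -- no iteration and no digital-filter time constant.
    """
    n = len(samples)
    w = 2.0 * math.pi * f_ref / fs
    a = 2.0 / n * sum(y * math.cos(w * k) for k, y in enumerate(samples))
    b = 2.0 / n * sum(y * math.sin(w * k) for k, y in enumerate(samples))
    return math.hypot(a, b)

# Unit-amplitude tone at 100 Hz sampled at 1 kHz, buried in Gaussian
# noise five times larger (far milder than the scenario in the text,
# to keep the record short).
rng = random.Random(0)
sig = [math.sin(2 * math.pi * 100 * k / 1000) + rng.gauss(0.0, 5.0)
       for k in range(100_000)]
amp = lockin_amplitude(sig, fs=1000, f_ref=100)  # close to 1.0
```

Because the two sums run over a fixed number of samples, the per-pixel cost is constant, which is the property the rasterizing imager needs.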
The acquisition of high-resolution imagery is necessary in a wide variety of fields, such as intelligence gathering,
surveillance, and other defense applications. The quality of footage typically determines the usefulness of the obtained
information, yet the use of low-resolution imaging devices may be unavoidable under circumstances where high-resolution
equipment is unavailable or impossible to deploy. In these scenarios, super-resolution methods can be applied
to recover lost detail. These methods generally use computationally intense routines to process a series of low-resolution
input frames in order to generate a higher-resolution output. Because of the algorithms' computational intensity, real-time
operation for moderately sized frames cannot be realized using general-purpose CPU technology. Modern graphics
processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing real-time
operation to be achieved. This paper presents the development of a GPU-accelerated super resolution implementation.
The algorithm presented here employs gradient-based registration, weighted nearest neighbor (WNN) interpolation
techniques, and Wiener filtering. This accelerated implementation performs at speeds 40 times that of a conventional
CPU implementation, and achieves processing rates suitable for real-time applications.
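A toy sketch of the weighted-nearest-neighbor fusion step (registration shifts are assumed already known; the distance-based weighting scheme here is illustrative, not the paper's):

```python
def wnn_fuse(frames, shifts, scale):
    """Weighted nearest-neighbor fusion of registered low-res frames.

    frames -- list of equally sized 2D lists
    shifts -- per-frame sub-pixel (dy, dx) offsets from registration
    scale  -- integer magnification of the output grid
    """
    h, w = len(frames[0]), len(frames[0][0])
    H, W = h * scale, w * scale
    acc = [[0.0] * W for _ in range(H)]
    wgt = [[0.0] * W for _ in range(H)]
    for frame, (dy, dx) in zip(frames, shifts):
        for y in range(h):
            for x in range(w):
                # Map each low-res sample onto the high-res grid.
                Y, X = (y + dy) * scale, (x + dx) * scale
                iy, ix = int(round(Y)), int(round(X))
                if 0 <= iy < H and 0 <= ix < W:
                    # Weight by proximity to the nearest grid node.
                    d = abs(Y - iy) + abs(X - ix)
                    wv = 1.0 / (1e-6 + d)
                    acc[iy][ix] += wv * frame[y][x]
                    wgt[iy][ix] += wv
    return [[acc[y][x] / wgt[y][x] if wgt[y][x] else 0.0
             for x in range(W)] for y in range(H)]
```

With two frames offset by half a pixel, the second frame's samples land on the high-res grid nodes the first frame misses, which is the basic mechanism that recovers sub-pixel detail.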
Close Air Support (CAS) is the use of air power in close proximity to friendly forces against enemy combatants. CAS
requires precise and detailed communication between the personnel on the ground and the air vehicles. To be useful, a
network simulation should be superimposed on the planning simulations for these activities. In a CAS mission, all of
the above activities are critical. A hypothetical CAS mission is modeled as an "as is" solution with stove-piped
communications and a "to be" network-enabled solution. A co-simulation laboratory using OPNET with SITL
(System-in-the-Loop), JFORCES (Joint Force Operational Readiness Combat Effectiveness Simulator), and
JSAF (Joint Semi-Automated Forces) simulation systems is described.
This paper presents a new methodology for assessing the risks associated with the level of Verification,
Validation and Accreditation (VV&A) of a given Model and/or Simulation when used in support of major
decisions. As stated by DoD Instruction 5000.61, "It is DoD policy that: Models and Simulations (M&S)
used to support major DoD decision-making organizations and processes shall be accredited for that
specific purpose by the DoD Component M&S Application Sponsor." This Instruction applies to "All
models and simulations developed, used, or managed by the DoD Components after the effective date of
this Instruction." The requirements cited above have set the need for VV&A of M&S at the forefront of
concerns for DoD and DoD-Components acquisition personnel. When an acquisition program involves a
large number of models, the cost associated with VV&A can become enormous. There is therefore a need for
a systematic approach to assessing and prioritizing the risks associated with the level to which
individual models have been verified, validated, and accredited. To provide decision makers with a
judicious way for determining the risks associated with using a given M&S, and the extent to which VV&A
work will be needed to meet these requirements, we have developed a methodology for assessing the risks
associated with the level of VV&A of a given M&S when used to support decision-making. This approach
parallels the formal DoD Risk Assessment procedure, but with application to the use of M&S, as it relates
to VV&A. Risks associated with the levels of VV&A are evaluated and assessed based on the following
criteria:
- Likelihood of the M&S being inaccurate and/or inappropriate for the application.
- Consequences of the M&S being inaccurate and/or inappropriate for the application.
- Importance of the (acquisition) decision supported by the M&S.
- Level of reliance of the (acquisition) decision on the M&S.
The assessment results are used to classify individual models into three risk categories
(Red for High Risk, Yellow for Medium Risk, and Green for Low Risk). All models
classified as High Risk are subjected to further analysis, and recommendations are made as to
the further work needed to reduce the risk to an acceptable level through formal VV&A.
Models assessed as Medium Risk are also further evaluated, and recommendations are made
with regard to the need for further risk mitigation. Models rated Low Risk are either not
of primary importance in supporting the (acquisition) decisions being made, or they may
have been assessed as satisfactorily verified, validated, and possibly accredited
to an acceptable level.
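The four criteria can be combined into a simple Red/Yellow/Green classifier. The scoring formula and cutoffs below are invented for illustration; the actual methodology parallels the formal DoD risk-assessment procedure rather than this toy scoring:

```python
def vva_risk_level(likelihood, consequence, importance, reliance):
    """Classify a model's VV&A risk as 'Red', 'Yellow', or 'Green'.

    Each input is a 1 (low) to 5 (high) rating against the four
    criteria listed above; the combination rule and cutoffs here are
    illustrative only, not taken from the DoD procedure.
    """
    # Core risk follows the usual likelihood x consequence matrix,
    # scaled by how much the decision actually leans on the model.
    score = likelihood * consequence * (importance + reliance) / 2.0
    if score >= 30:
        return "Red"     # high risk: formal VV&A work required
    if score >= 12:
        return "Yellow"  # medium risk: evaluate further mitigation
    return "Green"       # low risk: current VV&A level acceptable

level = vva_risk_level(4, 5, 4, 5)  # → "Red" (score 90)
```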
One of the first challenges in addressing the need for Modeling & Simulation (M&S) Verification, Validation, &
Accreditation (VV&A) is to develop an approach for applying structured and formalized VV&A processes. The P-8A
Poseidon Multi-Mission Maritime Aircraft (MMA) Program Modeling and Simulation Accreditation Strategy documents
the P-8A program's approach to VV&A. The P-8A strategy tailors a risk-based approach and leverages existing bodies
of knowledge, such as the Defense Modeling and Simulation Office Recommended Practice Guide (DMSO RPG), to
make the process practical and efficient. As the program progresses, the M&S team must continue to look for ways to
streamline the process, add supplemental steps to enhance the process, and identify and overcome procedural,
organizational, and cultural challenges. This paper includes some of the basics of the overall strategy, examples of
specific approaches that have worked well, and examples of challenges that the M&S team has faced.
To foster shared battlespace awareness among air strategy planners, BAE Systems has developed Commander's Model
Integration and Simulation Toolkit (CMIST), an Integrated Development Environment for authoring, integration,
validation, and debugging of models covering multiple domains, including political, military, social, economic, and
information. CMIST provides a unified graphical user interface for such systems of systems modeling, spanning several
disparate modeling paradigms. Here, we briefly review the CMIST architecture and then compare modeling results using
two approaches to intent modeling. The first uses reactive agents with simplified behavior models that apply rule-based
triggers to initiate actions based solely on observations of the external world at the current time in the simulation. The
second method models proactive agents running an embedded CMIST simulation representing their projection of how
events may unfold in the future in order to take early preventative action. Finally, we discuss a recent extension to
CMIST that incorporates Temporal Bayesian Knowledge Bases for more sophisticated models of adversarial intent that
are capable of inferring goals and future actions given evidence of current actions at particular times.
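The first, reactive style of intent modeling can be sketched as rule-based triggers over current observations only, with no memory or look-ahead. The rule and observation names below are hypothetical:

```python
def reactive_agent(observations, rules):
    """Rule-based reactive agent: fire the first rule whose trigger
    matches the current observations of the external world; no
    memory, no projection of future events.

    rules -- list of (trigger, action) pairs; a trigger is a dict of
    observation keys and required values (all invented for this sketch).
    """
    for trigger, action in rules:
        if all(observations.get(k) == v for k, v in trigger.items()):
            return action
    return "idle"

# More specific rules are listed first so they take precedence.
rules = [({"threat": "high", "assets": "ready"}, "intercept"),
         ({"threat": "high"}, "alert")]
act = reactive_agent({"threat": "high", "assets": "ready"}, rules)
# → "intercept"
```

A proactive agent, by contrast, would run an embedded simulation from the current state before choosing an action, as the abstract describes.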
Military organizations embed information systems and networking technologies into their core mission processes as a
means to increase operational efficiency, improve decision making quality, and shorten the "kill chain". Unfortunately,
this dependence can place the mission at risk when the loss or degradation of the confidentiality, integrity, availability,
non-repudiation, or authenticity of a critical information resource or flow occurs. Since the accuracy, conciseness, and
timeliness of the information used in command decision making processes impacts the quality of these decisions, and
hence, the operational mission outcome, it is imperative to explicitly recognize, quantify, and document critical mission-information
dependencies in order to gain a true appreciation of operational risk. We conjecture that what is needed is a
structured process to provide decision makers with real-time awareness of the status of critical information resources and
timely notification of estimated mission impact, from the time an information incident is declared, until the incident is
fully remediated. In this paper, we discuss our initial research towards the development of a mission impact estimation
engine which fuses information from subject matter experts, historical mission impacts, and explicit mission models to
provide the ability to estimate the mission impacts resulting from an information incident in real-time.
Since developing and promoting a Pacific Rim community emergency response simulation software platform called
RimSim, the PARVAC team at the University of Washington has developed a variety of first responder agents who can
participate within a response simulation. Agents implement response heuristics and communications strategies in
conjunction with live players trying to develop their own heuristics and communications strategies to participate in a
successful community response crisis. The effort is facilitated by shared visualization of the affected geographical extent.
We present initial findings from interacting with a wide variety of mixed agent simulation sessions and make the
software available for others to perform their own experiments.
This paper describes the application of a new parallel and distributed modeling and simulation technology known as
HyperWarpSpeed to facilitate the decision-making process in a time-critical simulated Command and Control
environment. HyperWarpSpeed enables the exploration of multiple decision branches at key decision points within a
single simulation execution. Whereas the traditional Monte Carlo approach re-computes the majority of calculations for
each run, HyperWarpSpeed shares computations between the parallel behaviors resulting in run times that are potentially
orders of magnitude faster.
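The contrast with Monte Carlo re-computation can be illustrated with a toy branch-point model: the shared prefix is computed once and reused for every decision branch. This captures only the spirit of HyperWarpSpeed's computation sharing; the real system shares state between parallel behaviors inside a single simulation execution:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def simulate_prefix(t_decision):
    """Stand-in for the expensive shared simulation up to the branch
    point (placeholder dynamics, not a real combat model)."""
    state = 0
    for step in range(t_decision):
        state += step
    return state

def explore_branches(t_decision, branch_effects):
    """Evaluate every decision branch from one shared prefix.

    A naive Monte Carlo approach would rerun simulate_prefix once per
    branch; here it is computed once and reused (hypothetical branch
    names and additive effects for illustration).
    """
    base = simulate_prefix(t_decision)
    return {name: base + effect for name, effect in branch_effects.items()}

outcomes = explore_branches(1000, {"hold": 0, "advance": 50, "withdraw": -50})
# sum(range(1000)) = 499500, so "advance" evaluates to 499550
```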
Distributed Mission Operations (DMO) is essentially a type of networked training that pulls in participants from all the
armed services and, increasingly, allies to permit them to "game" and rehearse highly complex campaigns, using a mix
of local, distant, and virtual players. The United States Air Force Research Laboratory (AFRL) is pursuing Science and
Technology (S&T) solutions to address technical challenges associated with distributed communications and
information management as DMO continues to progressively scale up the number, diversity, and geographic dispersal of
participants in training and rehearsal exercises.
In this paper, we describe the design and development of the NeoCITIES Simulation task environment. The enhanced
NeoCITIES environment allows repeatable experiments in which artifacts are introduced to improve team performance
and measure quantities such as inference accuracy as a function of crisis tempo, data rate, decision complexity and
individual factors such as induced stress. NeoCITIES was developed to study the effectiveness of cognitive artifacts
within a simulated command and control environment. This paper describes the initial results of a human-in-the-loop
experiment to quantify the effects of data overload on human analyst performance. The experiment involves the
introduction of cognitive aids to support improved team coordination and understanding of team-member interactions in
a simulated extreme events scenario.
The objective of mission training exercises is to immerse the trainees into an environment that enables them to train like
they would fight. The integration of modeling and simulation environments that can seamlessly leverage Live systems,
and Virtual or Constructive models (LVC) as they are available offers a flexible and cost effective solution to extending
the "war-gaming" environment to a realistic mission experience while evolving the development of the net-centric
enterprise. From concept to full production, the impact of new capabilities on the infrastructure and concept of
operations can be assessed in the context of the enterprise, while also exposing them to the warfighter. Training is
extended to tomorrow's tools, processes, and Tactics, Techniques and Procedures (TTPs).
This paper addresses the challenges of a net-centric modeling and simulation environment that is capable of representing
a net-centric enterprise. An overview of the Air Force Research Laboratory's (AFRL) Airborne Networking Component
Architecture Simulation Environment (AN-CASE) is provided, as well as a discussion of how it is being used to assess
technologies for the purpose of experimenting with new infrastructure mechanisms that enhance the scalability and
reliability of the distributed mission operations environment.
Traditionally, terrain profilers have been evaluated based on their ability to reproduce measurements made from some
reference device (e.g., a rod and level). The measurement error inherent in these reference measurements has become
significant as terrain profilers have become more accurate. The fundamental technical challenge in the design of terrain
profilers is the removal of vehicle body motion from the height sensor measurement. The objective of this work is to
develop design criteria for an excitation event that will quantitatively highlight the abilities and inadequacies of terrain
profilers by testing the profilers under adverse measurement conditions. The design of a characteristic excitation event
must fulfill two requirements. First, the event should excite the terrain profiler chassis at its primary ride and wheel-hop
frequencies. Using these first two ride frequencies and the suspension damping ratio, relationships are developed that
relate these parameters to the geometric excitation event dimensions. The terrain profiler's test velocity is also
determined based on these frequencies. Second, the excitation event should be simple, light, inexpensive, and
reproducible to ensure that it is used. The result of this work is an excitation event that ensures that the terrain profiler
will be excited to its highest attainable amplitude (near resonance). This excitation event provides the first step in
developing an accuracy test for modern terrain profilers.
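The link between the geometric event dimensions and the target excitation frequencies follows from the kinematic relation f = v / L between traversal speed, feature wavelength, and temporal frequency. The frequencies and speed below are hypothetical, not taken from the paper:

```python
def event_length(test_speed_mps, excite_freq_hz):
    """Spatial wavelength of a periodic feature that excites a given
    chassis frequency: traversing wavelength L at speed v produces a
    temporal excitation at f = v / L, so L = v / f.
    """
    return test_speed_mps / excite_freq_hz

# Hypothetical primary ride (1.5 Hz) and wheel-hop (12 Hz) modes,
# traversed at 5 m/s.
ride_wavelength = event_length(5.0, 1.5)  # ≈ 3.33 m feature spacing
hop_wavelength = event_length(5.0, 12.0)  # ≈ 0.42 m feature spacing
```

Fixing the two mode frequencies therefore couples the event geometry and the test velocity, which is why the paper derives both together.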
As computational power builds to meet the needs of ground vehicle designers, the focus has begun to shift from
laboratory testing of prototype parts and subsystems to computational simulations of the vehicle. In the automotive and
defense industries, large strides have been made in simulating full vehicle responses, such as durability. These
simulations are most meaningful when excited by proper mathematical models that accurately characterize the terrain. It
is important to understand the roughness indices that are used to judge the terrain profiles. The state-of-the-art in terrain
characterization and modeling is reviewed in this work for models including Power Spectral Density (PSD), Markov
Chains, Autoregressive Integrated Moving Average (ARIMA), Parametric Road Spectrum (PRS), Shifted Spatial Range
Spectrum (SSR), Direct Spectrum Estimation (DSE) and Transformed Direct Spectrum Estimation (TrDSE). The
applicability, limitations, and benefits of these models are assessed based on their effectiveness in capturing the
stochastic nature of the terrain being characterized. A discussion of terrain characterization usage to advance reliability
testing concludes this work as an example of the applicability of this technology.
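As a minimal example of the stochastic models surveyed, a first-order autoregressive process can generate a synthetic profile with a tunable correlation length (the ARIMA family generalizes this; the parameters below are illustrative, not fitted to measured terrain):

```python
import random

def ar1_profile(n, phi=0.95, sigma=0.01, seed=0):
    """Synthetic terrain height profile from a first-order
    autoregressive model: z[k] = phi * z[k-1] + e[k], e ~ N(0, sigma^2).

    phi sets the roughness "memory" (correlation between successive
    samples); sigma sets the innovation scale.
    """
    rng = random.Random(seed)
    z, heights = 0.0, []
    for _ in range(n):
        z = phi * z + rng.gauss(0.0, sigma)
        heights.append(z)
    return heights

profile = ar1_profile(5000)
# Stationary variance is sigma^2 / (1 - phi^2), and the lag-1
# autocorrelation of the generated profile is close to phi.
```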
The modern graphics processing unit (GPU) found in many off-the-shelf personal computers is a very high
performance computing engine that often goes unutilized. The tremendous computing power coupled with
reasonable pricing has made the GPU a topic of interest in recent research. An application for such power would be
the solution to large systems of linear equations. Two popular solution domains are direct solution, via the LU
decomposition, and iterative solution, via a solver such as the Generalized Method of Residuals (GMRES). Our
research focuses on the acceleration of such processes, utilizing the latest in GPU technologies. We show
performance that exceeds that of a standard computer by an order of magnitude, thus significantly reducing the run
time of the numerous applications that depend on the solution of a set of linear equations.
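To make the two solution domains concrete, here is a minimal CPU sketch using SciPy; the paper's contribution is GPU acceleration, and GPU libraries such as CuPy expose near-identical interfaces, so the same pattern ports to the accelerated case. The small random, diagonally dominant system below is chosen purely for illustration.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import gmres

# Illustrative well-conditioned system (diagonally dominant by construction).
rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

# Direct solution via LU decomposition: factor once, then solve cheaply
# for as many right-hand sides as needed.
lu, piv = lu_factor(A)
x_direct = lu_solve((lu, piv), b)

# Iterative solution via GMRES, which only needs matrix-vector products.
x_iter, info = gmres(A, b, atol=1e-12)   # info == 0 signals convergence
```

The trade-off mirrors the paper's framing: LU amortizes a large one-time factorization, while GMRES suits large sparse systems where a factorization would not fit or is too costly.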
The implementation of massive network architectures such as the Global Information Grid, coupled with the introduction
of Internet Protocol version 6, will change the way in which modeling and simulation (M&S) is conducted. Emerging
requirements for testing and training will also dictate changes to current M&S practices and techniques. This paper
presents a survey of future M&S requirements and the technologies that must be explored to ensure that M&S can
support future applications.
A power supply may have passed the applicable MIL standards, yet retrofitting it into a system can still raise compatibility
issues. To investigate such issues in power supplies fitted in a fighter aircraft, modeling and simulation of one such
power supply is carried out with the help of PSPICE. The conducted EMI performance of this power supply is analyzed,
and various steps toward EMI mitigation, such as the use of snubber circuits and RC gate control, are studied through
simulation. The DC bus is modeled by taking its transmission line parameters into account. When power supplies
with high-frequency switching rates are connected across this DC bus, they generate switching reflections. The
interaction between these interconnected on-board power supplies produces time-varying loads whose sharp rise and
fall times generate high-frequency signals on the common DC bus voltage, which in turn increases EMI. An
experimental set-up is established in the laboratory to verify the simulation results, and all results of this study are
presented. It is observed that with proper modeling and the help of simulation tools, EMI issues can be studied at
the design stage itself, saving both time and cost, as there is no need to construct a prototype for EMI
investigations.
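The snubber sizing mentioned above can be sketched with the classic bench procedure: measure the switch-node ringing frequency, add capacitance until that frequency halves, and derive the parasitic L and C from the shift. All component values below are illustrative assumptions, not figures from the paper's PSPICE model.

```python
import math

# Measured quantities (illustrative assumptions):
f_ring = 25e6        # switch-node ringing frequency [Hz]
C_add = 300e-12      # added capacitance that halved the ringing frequency [F]

# Halving f means the total capacitance quadrupled (f ~ 1/sqrt(LC)),
# so C_add = 3 * C_parasitic.
C_par = C_add / 3.0
L_par = 1.0 / ((2 * math.pi * f_ring) ** 2 * C_par)   # parasitic loop inductance [H]

# Snubber resistor matches the characteristic impedance of the resonance;
# the snubber capacitor is commonly ~3x the parasitic capacitance.
R_snub = math.sqrt(L_par / C_par)
C_snub = 3.0 * C_par
```

With these example numbers the procedure yields roughly 100 pF of parasitic capacitance, about 0.4 uH of loop inductance, and a snubber resistor in the tens of ohms, the kind of values a PSPICE sweep would then refine.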
In this paper, we present novel designs for all-optical analog-to-digital converters simulated and realized on
photonic crystal platforms. The designs presented were implemented on both photonic-bandgap-based
structures and self-collimation-based structures. Numerical simulation results as well as fabrication results
are included. Characterization results validate the designs presented for a functional all-optical two-bit
analog-to-digital converter in photonic crystals. The design presented can be further scaled to higher-resolution
conversion as well as to other optical frequencies if necessary.
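Functionally, a two-bit converter maps the analog input onto four codes by comparison against three thresholds, a flash-style scheme. The sketch below illustrates only that transfer function and its scaling to higher resolution, not the photonic implementation; the helper name and sample values are hypothetical.

```python
import numpy as np

def flash_quantize(x, bits=2, full_scale=1.0):
    """Map analog samples in [0, full_scale) to integer codes 0 .. 2**bits - 1
    by comparison against 2**bits - 1 equally spaced thresholds."""
    levels = 2 ** bits
    thresholds = np.arange(1, levels) * full_scale / levels
    # Each sample's code is the number of thresholds it meets or exceeds.
    return np.sum(x[:, None] >= thresholds[None, :], axis=1)

samples = np.array([0.05, 0.30, 0.60, 0.90])
codes = flash_quantize(samples)   # -> array([0, 1, 2, 3])
```

Scaling to higher resolution only grows the threshold count (`2**bits - 1`), which is why the abstract's claim of extensibility reduces to adding comparison stages.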