Today's cable television network is bi-directional and is constructed with a mixture of fiber optics and coaxial cable; this architecture is commonly referred to as a hybrid fiber coax (HFC) network. The architecture utilizes fiber optics to transport signals to and from small serving areas commonly referred to as fiber serving areas (FSAs). At points throughout the system the optical signals are transitioned to and from signals in the radio frequency (RF) spectrum of 5 to 750 MHz; the point where the transition takes place is commonly referred to as an optical node. From the optical node the coaxial network is used to transport signals to and from the end users. It is common practice to design the coaxial portion of the network with a downstream frequency bandwidth of 50 to 750 MHz, while a return frequency bandwidth of 5 - 42 MHz carries the signals from the end users to the headend (HE).
Because the upstream radio frequency (RF) signals are combined in a hybrid fiber-coax (HFC) system, problems are encountered that do not normally affect the downstream signals. The first problem is a noise-energy funneling architecture that allows interfering energy originating in one branch to degrade all of the combined return RF signals, not just the signals originating on the branch with the problem. The second problem is the lack of any easy way to diagnose the source of the noise energy, especially when the noise source is intermittent. The third problem is the use of the 5 - 40 MHz frequency band, where man-made interference is at a much higher level. Test methods that were used to characterize networks are presented, as well as some of the sources of undesirable energy and their spectral characteristics. Problems caused by differences in the dynamic ranges at the cable-to-fiber interface are also discussed. Finally, the paper proposes a set of return network requirements and possible ways to achieve them.
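As a rough, hedged illustration of the funneling effect described above (the branch counts and CNR figure below are assumptions, not values from the paper), combining the return signals from N branches also combines their noise: with roughly equal ingress per branch, the composite carrier-to-noise ratio degrades by about 10·log10(N) dB.

```python
import math

def funneled_cnr(branch_cnr_db, n_branches):
    """Composite upstream CNR when n_branches with equal per-branch CNR are
    combined at the optical node / headend (noise powers add, the carrier does not)."""
    noise_lin = n_branches * 10 ** (-branch_cnr_db / 10.0)
    return -10.0 * math.log10(noise_lin)

# Example: a single branch with 35 dB return CNR, funneled with others
for n in (1, 4, 16, 32):
    print(f"{n:3d} branches -> composite CNR = {funneled_cnr(35.0, n):5.1f} dB")
```

For a 32-branch node this amounts to roughly a 15 dB penalty relative to a single clean branch, which is why a single noisy drop can impair every return signal on the node.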
The rapid change in the telecommunications environment is forcing carriers to re-assess not only their service offerings, but also their network management philosophy. The competitive carrier environment has taken away the luxury of throwing technology at a problem by using legacy and proprietary systems and architectures. A more flexible management environment is necessary to effectively gain and maintain operating margins in the new market era. Competitive forces are driving change that gives carriers more choices than those available in legacy and standards-based solutions alone. However, creating an operational support system (OSS) that spans this gap between legacy and standards has become as dynamic as the services it supports. A philosophy that helps to integrate the legacy and standards systems is domain management. Domain management relates to a specific service or market 'domain' and its associated operational support requirements. It supports a company's definition of its business model, which drives the definition of each domain. It also attempts to maximize current investment while injecting newly available technology in a practical way. The following paragraphs offer an overview of legacy systems, standards-based philosophy, and the potential of domain management to help bridge the gap between the two types of systems.
This paper discusses the development of a wavelength division multiplexing (WDM) communication system for high-speed multimedia information delivery applications. This system multiplexes NTSC video, RGB video, audio, and network data (ATM/OC-3, FDDI) onto a single fiber to provide multimedia communications between remote locations.
Video services to the home are among the driving applications for emerging broadband networks. For residential services to be viable, video quality must be comparable to broadcast video. Video compression technology has well-defined standards for high-quality video (MPEG). Suitable video delivery techniques, however, are still under investigation. We consider the problem of delivering constant-quality video using variable bit rate encoding. A traffic model is proposed for three different encoding types (H.261, MPEG2: one and two layer). These models are suitable for either stored or real-time video. The statistical multiplexing efficiency of these video sources and call admission based on leaky-bucket traffic parameters are evaluated. Two-layer encoding is shown to have significantly better statistical multiplexing gains than one-layer video when the network admits calls based on a leaky-bucket characterization.
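To make the leaky-bucket characterization concrete, here is a minimal conformance check of the kind a call-admission procedure might apply to a declared (rate, bucket size) pair; the frame sizes and parameters are illustrative assumptions, not the paper's traffic models.

```python
def leaky_bucket_conformant(frame_sizes_bits, frame_period_s, rate_bps, bucket_bits):
    """Check whether a VBR frame sequence conforms to a leaky-bucket (r, b) declaration.
    The bucket fills by the size of each frame and drains at rate_bps between frames;
    a frame that would overflow the bucket makes the source non-conformant."""
    fill = 0.0
    for size in frame_sizes_bits:
        fill = max(0.0, fill - rate_bps * frame_period_s)  # drain since last frame
        fill += size
        if fill > bucket_bits:
            return False
    return True

# Example: 30 frame/s video averaging 4 Mb/s with an occasional 400 kbit I-frame
frames = [4_000_000 / 30] * 29 + [400_000]
print(leaky_bucket_conformant(frames, 1 / 30, rate_bps=5_000_000, bucket_bits=300_000))
```

In this example the oversized burst exceeds the declared bucket, so the check reports the source as non-conformant; the admitted rate/bucket pair is what the network uses to bound multiplexed traffic.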
Consumers judged the quality of video images that had been compressed by an MPEG2 codec at bit rates of 3.0, 3.9, 5.3, and 8.3 Mb/s. The judgments were made in a standard testing laboratory. VHS and simulated cable analog systems also processed the same scenes for comparison. This study asked: (1) At what bit rate does MPEG2 video equal or exceed the quality of competing technologies such as cable TV and VHS? Answer: 3 Mb/s. (2) How much more are consumers willing to pay for MPEG2 compared to what they currently pay for cable TV? Answer: $1 - $2. The answers to both questions come with many caveats. Further results: MPEG2 was rated higher than MPEG1 at the same bit rate, even without the use of 'B frames.' The rating difference was about a dollar. MPEG2 at 3.0 Mb/s is rated the same as MPEG1 at 3.9 Mb/s. Subjective quality improves only slowly as the bit rate increases from 3.0 to 8.3 Mb/s. MPEG2 at 8.3 Mb/s was rated the same as the original, uncompressed signal. Individual test scenes were rated differently, independent of the coding system.
We discuss the CNR performance and optical link-budget optimization in 1550-nm EDFA-based video lightwave transmission systems for video trunking applications. The operating point of the in-line EDFAs was determined by balancing the requirement to achieve a targeted CNR against the largest possible link budget. In addition, a 120-km multichannel AM-VSB/256-QAM video lightwave trunking system using two in-line EDFAs was demonstrated. At the optimum EDFA input optical power of +3 dBm, the 1550-nm AM/QAM video lightwave trunking system offers an AM CNR greater than 49 dB with CSO and CTB distortions below -65 dBc, as well as nearly error-free 256-QAM transmission. The overall system link budget was greater than 35 dB.
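When several fiber spans and in-line amplifiers each contribute noise, their CNR contributions combine on a linear power scale; the helper below is a generic back-of-envelope sketch with hypothetical per-segment figures, not the paper's link-budget analysis.

```python
import math

def cascaded_cnr_db(segment_cnrs_db):
    """End-to-end CNR when independent noise contributions cascade:
    1/CNR_total = sum(1/CNR_i) on a linear (power-ratio) scale."""
    inv_total = sum(10 ** (-c / 10.0) for c in segment_cnrs_db)
    return -10.0 * math.log10(inv_total)

# Example: a transmitter span plus two in-line EDFA spans (hypothetical numbers)
print(f"end-to-end CNR ~ {cascaded_cnr_db([55.0, 53.0, 53.0]):.1f} dB")
```

This additive-noise view is what drives the trade-off mentioned above: pushing the EDFA input power up improves per-span CNR but eats into the loss budget available between amplifiers.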
The performance of a hybrid AM-VSB/BPSK optical fiber transmission system is presented. A BPSK-modulated 2 MHz pseudo-random digital channel was substituted for one of the AM channels in a 60-channel CATV system and optically transmitted using a directly modulated analog 1.3-micrometer DFB laser. This substituted-channel method does not increase the frequency bandwidth of the system. The intermodulation distortion effects were studied. No degradation of the AM channels was observed unless the modulation depth of the BPSK channel was increased to the point where laser clipping effects become significant.
We report on an investigation of the feasibility of deploying low-cost uncooled Fabry-Perot (FP) lasers for upstream QPSK data transmission in hybrid fiber/coax (HFC) access networks. In addition, for a representative HFC network comprising 480 users, we present analytical results, based on traffic engineering techniques, to compute the maximum attainable throughput of the upstream HFC segment and estimates of the peak bit rate per user, if QPSK transmission is employed in the 5 - 40 MHz upstream frequency band.
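The kind of estimate such a traffic-engineering analysis produces can be sketched with simple arithmetic; every figure below (usable bandwidth, spectral efficiency, activity factor) is an assumption chosen for illustration, not a result from the paper.

```python
# Back-of-envelope upstream capacity estimate for a shared 5-40 MHz return band
# (all numbers below are illustrative assumptions, not results from the paper).
usable_band_hz = 30e6          # portion of the 5-40 MHz band usable after ingress/guard bands
spectral_eff = 1.5             # b/s/Hz for QPSK after filtering and FEC overhead
users = 480                    # users sharing one upstream segment
activity = 0.10                # fraction of users active simultaneously

raw_capacity_bps = usable_band_hz * spectral_eff
peak_per_active_user = raw_capacity_bps / (users * activity)
print(f"segment capacity ~{raw_capacity_bps/1e6:.0f} Mb/s, "
      f"peak per active user ~{peak_per_active_user/1e3:.0f} kb/s")
```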
A first-generation ATM-HFC system is currently in the standardization phase in IEEE 802.14 and DAVIC. Some important issues are still open, e.g. frame format, burst size, and preamble. This paper addresses these issues and suggests possible solutions. In addition, access methods are discussed and analyzed in terms of future upgradability and interoperability. The differences between access methods are analyzed for the physical layer, showing under which circumstances equal levels of performance can be obtained. Finally, a case study of an HFC concept, carried out in the framework of the ACTS project ATHOC, is described.
The increased demand for high-speed data transmission in hybrid-fiber-coax (HFC) networks is putting more pressure on return transmission technology. Most current HFC network architectures are optimized for analogue TV signals in the forward path, and the same architecture may not be the optimum for return signals. Although the main use of the return path is still low-bit-rate data such as pay-per-view ordering information, network operators are planning and trialing the use of high-speed cable modems and interactive digital set-tops. This presentation gives an overview of the alternative solutions used in the reverse-path architecture of a modern HFC network.
Offering broadband services to residential users will in most cases mean using ATM (asynchronous transfer mode) based shared medium access networks which concentrate the users' traffic in the upstream direction (from the users' terminals towards the public network). Both HFC (hybrid fiber coax) and PON (passive optical network) based solutions need a medium access control (MAC) in order to schedule the upstream traffic merged from different users. Several approaches for MAC protocols have been published by researchers, research projects, and also the IEEE P802.14 draft working group. These different MAC protocols are compared in this paper with respect to the suitability of different mechanisms for the resource management needed by the different ATM traffic classes. A new approach to ATM access networks, integrating all traffic classes by using appropriate MAC mechanisms in the access network while still maintaining overall efficiency, is proposed. The optimum resource management strategy inside the network termination units (MAC endpoints) is shown as the result of a discussion of the different options which still allow for the targeted efficiency.
The IEEE 802.14 standards group is aimed at defining the physical and medium access control (MAC) layer protocols of a bi-directional cable TV network using hybrid fiber/coaxial (HFC) cables. Several MAC protocol proposals have been submitted to the 802.14 working group, which has started the evaluation process in order to conceive a single MAC protocol satisfying all the HFC requirements. One can think of a MAC protocol as a collection of components, each performing a certain number of functions. An HFC MAC protocol can be broken into the following set of components: ranging or acquisition process, frame format, support for higher layer traffic classes, bandwidth allocation, bandwidth request, and contention resolution mechanism. Ranging is the phase during which the round-trip delay to the headend is calculated and the station is synchronized to the downstream timing. The frame format element of the MAC defines the upstream and downstream frames and describes their contents. If the MAC needs to provide support for ATM, it also needs to differentiate between the different classes of traffic supported by ATM, such as constant bit rate (CBR), variable bit rate (VBR), and available bit rate (ABR). Bandwidth allocation represents an essential part of the MAC and controls the granting of requests at the headend. Finally, the contention resolution mechanism, which is perhaps the most important aspect of the MAC, consists of a backoff phase and a retransmission phase. This paper examines two of the MAC elements mentioned above, namely the contention resolution and bandwidth allocation mechanisms. Different solutions for each component are considered and evaluated. Performance is measured in terms of request delay, mean access delay, and access delay probability distribution. Simulation results for configurations and scenarios of interest are also presented.
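One commonly simulated form of the backoff/retransmission component is a truncated binary exponential backoff over contention mini-slots. The toy model below is an illustrative stand-in, not any of the 802.14 proposals, and all of its parameters (slot counts, arrival probability, backoff limit) are assumptions.

```python
import random

def simulate_backoff(stations, frames=5_000, slots_per_frame=8, p_new=0.02, max_exp=5):
    """Toy contention model: each backlogged station waits for a frame chosen by
    truncated binary exponential backoff, then transmits its request in a random
    contention mini-slot. Returns the mean number of frames a request waits."""
    backlog = {}                 # station -> (retry count, first frame it may transmit)
    born, delays = {}, []
    for frame in range(frames):
        for s in range(stations):                      # new request arrivals
            if s not in backlog and random.random() < p_new:
                backlog[s] = (0, frame)
                born[s] = frame
        ready = [s for s, (_, due) in backlog.items() if due <= frame]
        slots = {}
        for s in ready:                                # each ready station picks a slot
            slots.setdefault(random.randrange(slots_per_frame), []).append(s)
        for contenders in slots.values():
            if len(contenders) == 1:                   # success: request received
                s = contenders[0]
                delays.append(frame - born[s])
                del backlog[s]
            else:                                      # collision: back off further
                for s in contenders:
                    retries = min(backlog[s][0] + 1, max_exp)
                    backlog[s] = (retries, frame + random.randrange(1, 2 ** retries + 1))
    return sum(delays) / len(delays) if delays else float("inf")

for n in (10, 50, 100):
    print(f"{n:3d} stations -> mean request delay {simulate_backoff(n):5.2f} frames")
```

Mean request delay grows sharply as more stations contend for the same mini-slots, which is the behavior the 802.14 evaluations quantify under different resolution schemes.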
Demand for interactivity drives the massive introduction of a return channel in CATV systems by means of the HFC architecture. Cost-effectiveness dictates sharing this return channel among many customers, and the most suitable method is TDMA, which can accommodate the high burstiness of the upstream traffic. The MAC protocol arbitrating access to the time slots combines more than one access mechanism, but reservation ALOHA is the most promising method for the identification of busy stations. In this work a novel method allowing simultaneous reservations without prior symbol synchronization is presented. The simultaneous single symbol reservation (S3R) scheme greatly reduces the overhead of the reservation sub-slots and also allows better and more predictable performance, improving system utilization.
Contemporary hybrid fiber coaxial (HFC) networks are capable of supporting a wide range of services including traditional analog video, telephony, digital video, and data services. Each service has unique performance or service requirements. This contribution examines transmission design for one such network, Pacific Bell's Advanced Communications Network (ACN). The design methodology begins with a set of end-to-end service quality objectives. Network impairments, such as noise, distortion, and delay, are allocated across the network elements using a set of standard network models. These models are a representative set of the actual field designs and bound the network operating parameters. Network components, headend equipment, and customer premises equipment are specified analytically or characterized empirically with respect to the chosen impairment set. The component parameters are then included in analytical models to estimate overall network performance. In addition to the forward-path transmission considerations examined by traditional coaxial network designers, other dimensions including power consumption, traffic demand, and message latency are taken into account. Analytical models are used to estimate the effects of multiple modulation schemes within the unified network. The variability introduced by on-demand services such as telephony and interactive digital services changes the base computational domain from deterministic models to stochastic ones. These models are then used to set operating parameters at measurable points throughout the network for proof of performance prior to turn-up, and for ongoing performance monitoring. For closure, empirical results are compared with model projections as a way of verifying and improving the predictive models.
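As a hedged illustration of the shift from deterministic to stochastic models mentioned above, the classic M/M/1 mean-delay formula gives a first-order estimate of message latency on a shared channel; the message size and channel rate below are assumptions, and the paper's actual models are considerably more detailed.

```python
def mm1_mean_delay(arrival_rate_msgs_s, service_rate_msgs_s):
    """Mean time in system for an M/M/1 queue: T = 1 / (mu - lambda)."""
    if arrival_rate_msgs_s >= service_rate_msgs_s:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate_msgs_s - arrival_rate_msgs_s)

# Example: 1000-bit messages on a 2 Mb/s shared channel (mu = 2000 messages/s)
for load in (0.3, 0.6, 0.9):
    lam = load * 2000
    print(f"load {load:.1f}: mean latency {mm1_mean_delay(lam, 2000) * 1000:.2f} ms")
```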
In upgrading the access network to be broadband capable, re-use of existing infrastructure is essential to manage the risk associated with this development. FTTCab (fiber to the cabinet) has been proposed as an architecture that can make this development economic, providing 12 Mbit/s to the customer and 2 Mbit/s back into the network. The FTTCab architecture uses an optical fiber overlay to an active node sited at the primary cross-connect point (PCP) in the copper access network. Frequency multiplexing allows the copper pair infrastructure to be re-used without changing the existing narrowband services. FTTCab is at the mid-point of a range of access topologies with respect to the siting of the DSL (digital subscriber loop) technology. The DSL modem can be sited at the home, curb, cabinet, or in the exchange to suit a range of distance/capacity requirements. This enables a simple evolution of the current network to FTTCab, and allows the architecture to be flexed to satisfy particular business needs.
The current view of the fiber-based broadband access network is that it can basically be modeled into two target networks represented by the following architectures: the fiber to the curb/building/home (FTTC/B/H) architecture -- also termed switched digital video (SDV) -- and the hybrid fiber coax (HFC) architecture. Both architectures support on-demand digital services. One way to distinguish between these two architectures is the digital modulation scheme. The SDV/FTTC architecture utilizes baseband digital modulation both in the fiber distribution and the point-to-point drop, whereas the HFC architecture is pass-band and utilizes digitally modulated (as well as analog modulated) subcarriers both on the fiber and the coax for distribution to customers. From a network modeling point of view, the distinction between these two architectures is fuzzy. A hybrid of the two architectures offers additional architectural advantages, especially in upstream bandwidth utilization. This paper describes this hybrid architecture and provides an evaluation of the different access network configuration scenarios based on an expanded version of the DAVIC reference models.
DAVIC does not suggest any particular implementation strategy for STBs. Compliance is achieved by adhering to specified hardware and software interfaces and having the capability to execute a long list of 'DAVIC applications.' DAVIC consumer terminals have some mandatory interfaces and other optional interfaces. This paper discusses what an HCT requires to meet the requirements set forth in DAVIC 1.0. Suggestions are also made on how to implement certain aspects ignored by DAVIC, such as compatibility with existing consumer equipment and network realities. The network interface implementation at the A1 reference point is discussed, explaining the building blocks for the front-end, tuning, demodulation, and error correction of the 3 major data paths. Further processing of the high-speed forward data stream in the MPEG demux and separation of the media and control data is explained. The routing of the data from the S2, S3 and S4 paths and the hardware/processing capabilities required are enumerated. Different options for implementing a cost-effective and tamper-proof CA system are explored. Effective ways of providing media processing, MPEG audio and video decompression, and graphics processing with minimum memory and processing-cycle wastage are explained. Effective performance of 'DAVIC applications' requires a certain amount of processing power and memory size, as well as different types of memory. Several interfaces are required on the consumer side to deliver the enhanced viewing experience a digital interactive system can provide. At the same time, existing televisions and other consumer equipment will also have to be supported: support for digital televisions, digital VCRs, and home PC connections will have to co-exist with legacy non-cable-ready TVs and analog VCRs. The design also has to allow new features and capabilities to be added.
This paper reports on a broadband multiple access protocol for bi-directional hybrid fiber-coax (HFC) networks. Referred to here as the enhanced adaptive digital access protocol (ADAPt+TM), it builds upon earlier work to define a medium access control (MAC) protocol amenable to a multiple service environment supporting subscriber access in HFC networks with tree and branch topologies. ADAPt+ efficiently supports different access modes such as synchronous transfer mode (STM), asynchronous transfer mode (ATM), and variable length (VL) native data (e.g., IP, IPX). This enhanced protocol adapts to changing demands for a mix of circuit- and packet-mode applications, and efficiently allocates upstream and downstream bandwidth to isochronous and bursty traffic sources. This paper describes: ADAPt+ for upstream communication and multiplexing/demultiplexing for downstream communication; its applicability to STM, ATM and other native data applications; and performance attributes such as bandwidth efficiency and latency.
Residential broadband access network technology based on asynchronous transfer mode (ATM) will soon reach commercial availability. The capabilities provided by an ATM access network promise integrated-services bandwidth in excess of that provided by traditional twisted-pair copper public telephone networks. ATM to the side of the home places the needed quality-of-service capability closest to the subscriber, allowing immediate support for Internet services and traditional voice telephony. Other services such as desktop video teleconferencing and enhanced server-based application support can be added as part of the future evolution of the network. Additionally, advanced subscriber home networks can be supported easily. This paper presents an updated summary of the standardization efforts for the ATM-over-HFC definition work currently taking place in the ATM Forum's residential broadband working group and the standards progress in the IEEE 802.14 cable TV media access control and physical protocol working group. This update is fundamental for establishing the foundation for delivering ATM-based integrated services via a cable TV network. An economic model for deploying multi-tiered services is presented, showing that a single-tier service is insufficient for a viable cable operator business. Finally, the use of an ATM-based system lends itself well to various deployment scenarios of synchronous optical networks (SONET).
Bursty transmission is characteristic of data traffic on networks of all sizes. ATM is well suited to handling this traffic in wide-area networks and is likely to be the technology of choice in interconnecting HFC networks covering metropolitan areas or portions of them. However, the flow control methods differ significantly: ATM flow control, as defined by the ATM Forum, is based on explicit rate feedback, while HFC networks for the most part rely on a request/grant mechanism for reserving quantities of data transmission, rather than rates. This paper examines problems that are likely to arise in interfacing the two types of flow control. It also considers operating an HFC system in conjunction with earlier open-loop ATM designs without feedback. Finally, it examines the use of HFC systems with other alternatives for ATM control, particularly the quantum control proposal.
In the past, CATV networks were used for broadcasting television signals. Since the deployment of fiber, CATV networks have been upgraded to so-called hybrid fiber coax (HFC) networks, providing the possibility of using the return channel as well. The presence of such a channel challenges operators to implement bi-directional services, such as telephony, Internet access, or interactive television. State-of-the-art products are in most cases ad hoc solutions implementing a single service on a dedicated system. This means that if a customer subscribes to a telephone service and a data service, he ends up with two cable modems. In such a scenario the number of interfaces at the head-end also increases, so that the complexity of the total system increases for the operator. Although ATM is meant to be the multiplexing method that supports all services, it is still not cheap enough to provide the wide variety of qualities of service required (such as delay requirements for telephony in combination with efficiency and reliability requirements for data). In this paper a concept is presented that can deal with this problem in a cost-effective way. The concept can be used to implement STM traffic (e.g. telephony) and ATM traffic in an efficient way. It is also shown that the system can be used for ATM traffic only, so that it can serve as an intermediate step in the evolution of HFC networks to a full-service network. Simulation results show very good overall performance, partly due to the flexibility that is inherent in the system concept.
Cable modems play an important role in turning hybrid fiber coax (HFC) networks from pure broadcast video service into high-speed access networks. Many CATV companies and telephone companies are experimenting with high-speed data services over HFC. With today's technology, cable modems can easily run at a data rate of 10 Mbps or above. They allow subscribers fast access to on-line services and the Internet. A variety of cable modems have been developed and marketed by cable modem vendors. Selection of the right cable modems for deployment in HFC access networks has become a nontrivial matter: different HFC systems may require different types of cable modems. In this paper, we review the development of HFC systems and discuss data networking approaches that include connectionless and connection-oriented data networking systems. The system requirements for cable modems in terms of throughput, robustness, ease of operation, protocol efficiency, reliability, network management, and cost are addressed.
High speed data (HSD) over hybrid fiber coax (HFC) holds the promise of up to 1000 times the speed now obtainable with conventional modems. This technology overcomes current bandwidth hurdles unleashing a plethora of new applications and services which provide exciting new revenue opportunities for service providers. Specific requirements to achieve a successful HSD over HFC system must, however, be accounted for in both the cable data system and the HFC plant. This paper presents critical requirements for a successful implementation of HSD over HFC based on recent deployment experiences. Topics include performance, bandwidth efficiency, operational simplicity, security, economics and return plant readiness issues.
To facilitate economic development of robust upstream communications systems over hybrid fiber coax (HFC) plants, the characteristics of the HFC plant must be captured and presented in a form meaningful to communications system/modem designers. These channel characteristics, along with performance goals for information rate and error rates, must be used to drive the selection of the appropriate modulation and error control coding. This paper presents a set of parameters which characterize the upstream HFC cable plant in which upstream signaling is expected to perform. Furthermore, quantitative values for these parameters are presented which represent a reasonable impairment level for designers to accommodate; doing so will guarantee reliable communications with wide availability. Single-carrier frequency and time division multiple access (F/TDMA) provides a low-risk, high-capacity approach which offers the best choice for upstream modulation over the characterized HFC system. This modulation technique is a mature technology; easily accommodates proven, effective mitigation techniques for combating the HFC upstream channel impairments; and is bandwidth efficient in the HFC channel. Frequency agility of the carrier, with both QPSK and 16-QAM modulation, multiple symbol rates, and flexible forward error control (FEC) coding and frame and preamble structure, is advocated.
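The payload rate of one such upstream carrier follows directly from the symbol rate, the constellation, and the coding overhead; the symbol rate, FEC rate, and framing overhead in this sketch are illustrative assumptions rather than the paper's recommended values.

```python
def upstream_bit_rate(symbol_rate_sps, bits_per_symbol, fec_rate, framing_overhead=0.05):
    """Payload rate = symbol rate * bits/symbol * FEC code rate * (1 - framing overhead)."""
    return symbol_rate_sps * bits_per_symbol * fec_rate * (1.0 - framing_overhead)

# Illustrative parameters, not values taken from the paper.
for name, bps in (("QPSK", 2), ("16-QAM", 4)):
    rate = upstream_bit_rate(1.28e6, bps, fec_rate=0.92)
    print(f"{name} at 1.28 Msym/s -> ~{rate / 1e6:.2f} Mb/s payload")
```

The same arithmetic explains why frequency agility and multiple symbol rates matter: a carrier can be narrowed or moved to dodge ingress while keeping the modulation and coding machinery unchanged.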
TCP/UDP/IP data transport over hybrid fiber coaxial cable (HFC) networks for Internet or enterprise service requires new approaches for scaling, provisioning, authentication, service differentiation and quality of service. MAC layer bridging alone at the head end will fail to provide scaling, conditional access, and quality of service. Additionally, the half duplex, shared nature of the HFC network and the need for multiple return paths per forward path, will encourage the use of packet layer routing in the head end. Data/cable industry suppliers have been concentrating on physical and link layer issues such as modulation, forward error correction and media access control (MAC) protocols. Less thought has been given to system software issues which are crucial to scaling residential broadband networks. By scaling, we mean the capability to provision, diagnose, manage and ensure expected performance when thousands or millions of subscribers are attached. This paper describes some software scaling issues and discusses cable DHCP and virtual dialup as examples of software scaling solutions.
This paper describes the current state of network signaling for ATM broadband SVC call control, and the direction of evolution of the current work in the ITU, ANSI and the ATM Forum activities. The most recent issues of the signaling protocols will support enhancements to the basic call control such as point-to-multipoint network connections, variable bit rate, look-ahead, and negotiation/modification of connections. Future activities are aimed at the areas of extensions to basic connection control, adaptation of narrowband procedures such as echo control and mobility, and linking of ATM signaling with higher level application control efforts such as DSM CC and TINA. These extensions to the signaling protocol will allow signaling to support basic connection services with a variety of different connection types. More complex multimedia services are likely to be provided using service-specific control protocols, where basic signaling provides a component part for the control of network connections.
The end-to-end bit-error performance of 64-QAM transmission was studied using a hybrid fiber-coax system test bed designed for delivery of 77 AM-VSB and 25 64-QAM video channels covering the frequency band 54 - 702 MHz. Error-free QAM transmission was achieved with the use of modulated analog carriers and Reed-Solomon (204, 188) error-correction coding, indicating that in hybrid fiber-coax systems transport of AM-VSB and 64-QAM signals can be realized while maintaining both analog and digital performance requirements.
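For reference, the basic properties of a Reed-Solomon (204, 188) code follow directly from its parameters; this is standard coding arithmetic, not an additional result from the test bed.

```python
# Basic properties of the Reed-Solomon (204, 188) code mentioned above.
n, k = 204, 188
t = (n - k) // 2                       # correctable byte errors per codeword
code_rate = k / n
overhead = (n - k) / k
print(f"code rate {code_rate:.3f}, {t} correctable bytes per codeword, "
      f"{overhead * 100:.1f}% redundancy added to the payload")
```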
In a typical fiber optic network design, laser relative intensity noise (RIN) and photodiode shot noise are device-specific and have to be considered in the network system design. Nonlinear distortions (NLD) in a laser diode, such as laser clipping, can limit system performance. In order to get the best carrier-to-noise ratio (CNR) and carrier-to-interference ratio (C/I) out of a laser diode, it is common practice in the cable industry to run the diode into a limited amount of clipping. Based on past research, there is a basic limit to the number of cable video channels and the depth of modulation that can be put on a laser diode before impairments distort the video to such an extent as to render it unacceptable. This project developed a laser diode clipping model that is used to determine and simulate the clipping effect in a laser diode. The model demonstrates the effects of clipping on cable networks.
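A minimal numerical sketch of this kind of clipping study (not the paper's model; the channel count, modulation depths, and distortion metric are all assumptions) sums many equal-amplitude carriers, clips the composite drive below the laser threshold, and tracks how quickly the clipped-away power grows with modulation depth.

```python
import numpy as np

def clipping_distortion(n_channels=77, mod_depth=0.04, samples=2**16, seed=0):
    """Sum n_channels tones with random phases and frequencies on a unit bias,
    clip the drive below zero (the laser threshold), and return the ratio of
    clipped-away power to composite signal power (a crude distortion proxy)."""
    rng = np.random.default_rng(seed)
    t = np.arange(samples)
    freqs = rng.uniform(0.01, 0.45, n_channels)          # normalized carrier frequencies
    phases = rng.uniform(0, 2 * np.pi, n_channels)
    composite = mod_depth * np.sum(
        np.cos(2 * np.pi * np.outer(freqs, t) + phases[:, None]), axis=0)
    drive = 1.0 + composite                               # unit bias current
    clipped = np.minimum(drive, 0.0)                      # portion removed by clipping
    return np.mean(clipped ** 2) / np.mean(composite ** 2)

for m in (0.05, 0.07, 0.10):
    print(f"m = {m:.2f}: clipped/signal power ratio = {clipping_distortion(mod_depth=m):.2e}")
```

The clipped power grows very rapidly once the composite modulation pushes the drive toward threshold, which is why practice limits the per-channel modulation depth even though a deeper drive improves CNR.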
Fiber-to-the-home (FTTH) networks have always been the ultimate solution for a future-proof, broadband access network. In the past, the major obstacle to implementing FTTH has been cost, coupled with the lack of symmetrical, bandwidth-intensive applications. In recent years, however, there have been several technological, commercial, and regulatory developments that brighten the outlook for FTTH. This paper outlines these changes and presents architectures that may be deployed in the near future. First, the current motivations driving FTTH, including high bandwidth, high reliability, and low operations cost, are discussed. This is then followed by a review of various FTTH systems, emphasizing key developments and trends that are influencing present system configurations. After this background, some recent technology developments that impact FTTH are presented. These include high-temperature loop lasers, battery technology, video compression, and ATM transport. It is shown how each of these advances makes FTTH more attainable. In general, an FTTH system can be characterized by its service capability, network topology, and signal format. By specifying different combinations of these characteristics, one can generate many different systems which can then be compared to identify optimal designs. From these considerations, four distinct FTTH networks, including TDM-PON, FDM-PON, dense-WDM-PON, and FTTC with fiber drops, are described and analyzed in view of their future potential.
An optical wide-band FM modulation scheme which increases the optical power budget of fiber-optic AM video signal transmission systems is described, and its improved transmission performance is demonstrated. Expanding the power budget of AM video signal transmission allows us to create a passive double star (PDS) based fiber-optic access network platform that supports telephony and video distribution services.
This paper is motivated by the advantages of offering transactional value-added services (i.e. interactive multimedia ones) instead of solely providing information on electronic media (e.g. CD-ROMs). Investment in network resource provision, or even in adding infrastructure to existing networks, seems to be the most cost-effective solution for establishing this kind of service. Support for multimedia data services requires a full characterization of both the forward and return channels (usually highly asymmetric) for one or several users, so that proper resources can be allocated or efficient new infrastructures can be designed. This paper first describes a fully interactive, general-purpose multimedia client/server application (a currently working one) that provides the user with a common interface to remotely access heterogeneous databases. Second, it presents the test architecture and configuration established to obtain a representative number of traffic measures that a single instance of this multimedia application generates over a TCP network. The data are then analyzed to extract the QoS traffic parameters that define the network capabilities required for both the forward and return communication channels, first for a single user and then to optimize a multi-user environment. Next, a methodology for accurate characterization of the multi-user situation is presented. Finally, arguments for extrapolating the results to most applications currently running over the Internet are discussed.
This paper describes the airborne on-board local area network (LAN) architectures and technologies for implementation of advanced aircraft multimedia information distribution networks. The avionics functional requirements for multimedia transmission are described. The existing aircraft multimedia transmission networks, emerging technologies, and some aspects of future direction are discussed.
Concentration in the telephony network has traditionally been done at the local digital switch and seldom in the distribution portion of the network. With the availability of integrated digital loop carrier systems conforming to TR-303 in the U.S. and V5.2 internationally, service providers have the choice of concentrating at the switch or in the distribution network. Concentration in the distribution portion of the network is particularly attractive in a hybrid fiber coax (HFC) network because it tends to drive the HFC telephony common equipment cost lower. This paper compares the different options and analyzes the benefits of concentration in the distribution network.
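Concentration trade-offs of this kind are typically sized with the Erlang-B blocking formula; the subscriber counts, per-line traffic, and channel counts below are assumptions for illustration, not figures from the paper.

```python
def erlang_b(traffic_erlangs, servers):
    """Erlang-B blocking probability computed with the standard stable recurrence."""
    b = 1.0
    for n in range(1, servers + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

# Example: 240 subscribers at 0.1 Erlang each, concentrated onto a shared channel pool
offered = 240 * 0.1
print(f"offered {offered:.0f} E on 32 channels: blocking = {erlang_b(offered, 32):.4f}")
print(f"offered {offered:.0f} E on 48 channels: blocking = {erlang_b(offered, 48):.6f}")
```

Concentrating in the distribution plant pays off when a modest channel pool carries the offered load at an acceptable blocking probability, so fewer upstream channels and less common equipment are needed per serving area.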
In the hybrid fiber-coax (HFC) architecture, the coax is a shared medium to which the network interface units (NIUs) of different end-users are attached for accessing a diversity of network services. To analyze the traffic carrying capacities of the coax, a C++ object-oriented simulation tool has been developed. This paper reports on the use of this tool and analytical techniques in the investigation of several key traffic issues:
- use of call packing to improve upstream bandwidth efficiency
- impact of the proximity restriction associated with frequency hopping on blocking
- design of time-slot assignment algorithms
- downstream load balancing
- effect of call retries
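Of the issues listed above, call packing is the easiest to illustrate: packing calls onto already-loaded carriers tends to leave whole carriers free for multi-slot (higher-rate) calls, lowering blocking. The toy comparison below is purely illustrative; it is not the C++ tool described, and the carrier counts, slot counts, and traffic mix are assumptions.

```python
import random

def blocked_calls(policy, calls, carriers=6, slots_per_carrier=8):
    """Assign calls (each needing `width` slots on a single carrier) to carriers.
    'packed' places a call on the most-loaded carrier that still has room (call
    packing); 'random' places it on any carrier with room. Returns blocked calls."""
    load = [0] * carriers
    blocked = 0
    for width in calls:
        fits = [c for c in range(carriers) if load[c] + width <= slots_per_carrier]
        if not fits:
            blocked += 1
        else:
            c = max(fits, key=lambda i: load[i]) if policy == "packed" else random.choice(fits)
            load[c] += width
    return blocked

random.seed(0)
# Each trial mixes narrow (1-slot) calls with occasional wide (6-slot) calls.
trials = [[6 if random.random() < 0.1 else 1 for _ in range(28)] for _ in range(2000)]
for policy in ("random", "packed"):
    avg = sum(blocked_calls(policy, t) for t in trials) / len(trials)
    print(f"{policy:6s}: {avg:.2f} blocked calls per trial on average")
```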
We describe the HomeWorker network and the results from a pilot study being undertaken to determine the performance of the system and its impact on working practice. The HomeWorker network is based on existing cable television infrastructure and provides a local area network over a metropolitan area. It is a CSMA/CD bus based network working at 500 kbps, and we demonstrate that it is capable of providing such a sustained data transfer rate in practical applications.
We propose a new design for a self-routing, space-division fast packet switch for ATM B-ISDN. This is an expansion switch based on binary expansion, concentration, and combination of neighboring blocks of packets. Internal buffers are needed for local synchronization and for packet buffering against eventualities such as path collisions or a full next-stage buffer. An expansion network such as the UDEL switch provides multiple paths for any input/output pair. These multiple paths help to alleviate many common problems including head-of-line (HOL) blocking, internal path conflicts, and output collisions. Our proposal provides a 10^-10 packet drop rate between two stages under random uniform traffic, with a unique three-dimensional arrangement of printed-circuit boards. Batcher-banyans or similar small switches may be used as the last stage.
We have carried out a paper feasibility study of the implementation of the most common packet switching cores (crossbar, Batcher-banyan, time-division shared bus, and token ring) using superconductor rapid single flux quantum (RSFQ) digital technology. According to our estimates, the best performance-to-complexity ratio may be obtained for the Batcher-banyan network. For example, a 128 by 128 switching core with self-routing (but without address translation, contention resolution, and broadcast features), consisting of about 180,000 Josephson junctions with an internal clock frequency of 60 GHz, could handle a workload of 7.5 Tbps. This core could fit on a single 1 cm by 1 cm chip and dissipate as little as 45 mW. The estimated parameters are achievable using a simple 1.5-micrometer niobium-trilayer technology.
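As a sanity check on the quoted aggregate (the bit-serial-line reading is our assumption, not a statement from the study), 128 ports clocked at 60 GHz correspond to roughly 7.7 Tb/s raw, in line with the ~7.5 Tb/s workload once some framing overhead is allowed for.

```python
ports, clock_hz = 128, 60e9
aggregate_bps = ports * clock_hz          # one bit per port per clock, bit-serial lines assumed
print(f"{aggregate_bps / 1e12:.2f} Tb/s raw aggregate across {ports} ports")
```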
The tiny tera is an all-CMOS 320 Gbps, input-queued ATM switch also suitable for non-ATM applications such as the core of an Internet router. The tiny tera efficiently supports both unicast and multicast traffic. Instead of using optical switching technology, we achieve a high switching bandwidth by using less expensive and proven CMOS technology. Because of limitations in memory and interconnection bandwidths, we believe that achieving such a high-bandwidth switch requires an innovative architecture. By using virtual output queuing (VOQ) and novel scheduling algorithms, the tiny tera will achieve a maximum throughput close to 100% without the need for internal speedup.
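To make the VOQ idea concrete, here is a generic virtual-output-queued crossbar with a single-iteration round-robin request/grant/accept match. This is an illustrative stand-in only; it is not the tiny tera's scheduling algorithm, and all names and parameters are assumptions.

```python
from collections import deque

class VOQSwitch:
    """Generic VOQ crossbar with one round-robin request/grant/accept iteration
    per cell time (illustrative; not the tiny tera's actual scheduler)."""

    def __init__(self, ports):
        self.n = ports
        self.voq = [[deque() for _ in range(ports)] for _ in range(ports)]  # [input][output]
        self.grant_ptr = [0] * ports    # per-output round-robin pointer
        self.accept_ptr = [0] * ports   # per-input round-robin pointer

    def enqueue(self, inp, out, cell):
        self.voq[inp][out].append(cell)

    def schedule(self):
        """Return the (input, output) pairs matched for this cell time."""
        requests = {o: [i for i in range(self.n) if self.voq[i][o]] for o in range(self.n)}
        grants = {}                     # input -> outputs granting it
        for o, reqs in requests.items():
            if reqs:                    # grant the requester nearest the output's pointer
                i = min(reqs, key=lambda inp: (inp - self.grant_ptr[o]) % self.n)
                grants.setdefault(i, []).append(o)
        matches = []
        for i, outs in grants.items():  # each input accepts one grant
            o = min(outs, key=lambda outp: (outp - self.accept_ptr[i]) % self.n)
            matches.append((i, o))
            self.grant_ptr[o] = (i + 1) % self.n
            self.accept_ptr[i] = (o + 1) % self.n
            self.voq[i][o].popleft()
        return matches

sw = VOQSwitch(4)
sw.enqueue(0, 2, "cell-a"); sw.enqueue(1, 2, "cell-b"); sw.enqueue(1, 3, "cell-c")
print(sw.schedule())   # at most one cell per input and per output is switched
```

Keeping a separate queue per (input, output) pair is what removes head-of-line blocking and lets a good matching algorithm approach full throughput without internal speedup.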
VBR traffic, with its bursty nature, is still troublesome for ATM networks. The problem can be dealt with at the call admission and bandwidth allocation stages and later, once the connection is established, by appropriate flow control schemes and buffer allocation mechanisms. Accommodating large bursts in extra buffers at the inputs of the switch fabric during overflow periods in the internal buffers can be part of the solution to this problem. Adding input buffers is preferable to expanding the internal memory because the input buffers are less expensive and can be used in bulk, while the internal buffers are more complex, more expensive, and not easily expandable. In this paper we consider a general model for switches with input buffers which consists of three parts: input buffer, I/O flow controller, and output (internal) buffer. In this way we isolate the switching mechanism and the back-pressure mechanism required in this kind of switch. We present different architectures for the I/O flow controller section and discuss the advantages and disadvantages of each model. We also address the QoS requirements of individual connections in input-buffered switches by providing a specific input buffer architecture which, unlike traditional FIFO buffers, allows scheduling service among the cells in the input buffer without extra complexity.
In this paper a modeling approach to performance evaluation of a shared-buffer switching element is described, based on the well-known fluid model of producers and consumers (PC fluid model). A procedure is outlined that leads to a suitable characterization of some typical parameters of the producer and consumer fluid model, making it representative of the shared-buffer switch. Simulation analysis is used to investigate the relationships between the behavior of the shared-buffer switching element and that of the PC fluid model. It is shown that by means of a suitable fitting of one parameter characterizing the PC fluid model, it is possible to make the model representative of the shared buffer in the region of interest for ATM applications, in spite of the actual operating differences between the real system and the PC fluid model. Numerical results regarding cell loss probability performance and dimensioning of 4 by 4 and 8 by 8 switches are presented and discussed.
This report covers the technological aspects of the high performance parallel interface-6400 (HIPPI-6400), a forthcoming upgrade to the existing HIPPI protocol suite. The report concentrates on the technological advancements and situations that occur in a 6400 Megabit/s network and the solutions produced by the ANSI X3T11 committee. The first section of the report introduces HIPPI-6400 to familiarize the reader with basic concepts and lay groundwork for later sections. Section 2 analyzes the transmission control link layer embedded in HIPPI-6400 hardware. Section 3 describes the scheduled transfer protocol and connection setup that allow the user to make use of the entire 6400 Mbit/s bandwidth; section 4 then describes the signaling interface. Section 5 provides an overview of switching requirements and constructs. Finally, section 6 concludes the paper, describes work in progress, including simulation, and points to further reading.
In this paper, we develop an improved optimization algorithm, based on a genetic algorithm (GA) approach, for the bandwidth allocation of ATM networks. The ATM switches can be interconnected with multiple DS3 trunks via digital cross-connect systems (DCS). One of the advantages of a DCS is its ability to reconfigure a customer network dynamically. We exploit this advantage in the design and dynamic reconfiguration of ATM networks. The problem is formulated as a network optimization problem in which a congestion measure based on the average packet delay is minimized, subject to capacity constraints imposed by the underlying facility trunks. We choose the routing of traffic onto the express pipes and the allocation of bandwidth to these pipes as the decision variables. The previous GA algorithm is not practical because (1) the number of traffic distribution patterns is huge, and (2) the values of offered traffic are continuous. A new representation of the chromosome, Net-Chro, and a new reproduction operator are presented. We show that the previous algorithm cannot guarantee full usage of trunk capacities in the solutions it generates. We also discuss open-loop control to overcome the congestion caused by a trunk failure.
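A minimal GA sketch of this style of problem is shown below, assuming a chromosome that is simply a vector of bandwidth shares for the express pipes and a fitness built from an M/M/1-like average-delay term plus a trunk-capacity penalty. The encoding, operators, and numbers are illustrative assumptions only; the paper's Net-Chro representation also encodes traffic routing, which this sketch omits. Note how children are renormalized so every solution uses the full trunk capacity, echoing the paper's criticism of the earlier algorithm.

```python
import random

def delay_cost(alloc, offered, capacity):
    """Average-delay congestion measure with a hard capacity constraint."""
    cost = 0.0
    for a, load in zip(alloc, offered):
        if load >= a:
            return float("inf")        # pipe overloaded: infeasible
        cost += load / (a - load)      # M/M/1-like average delay term
    if sum(alloc) > capacity + 1e-9:
        return float("inf")            # trunk capacity violated
    return cost

def evolve(offered, capacity, pop_size=50, gens=200, seed=1):
    rng = random.Random(seed)
    n = len(offered)

    def random_alloc():
        raw = [rng.random() for _ in range(n)]
        scale = capacity / sum(raw)
        return [r * scale for r in raw]    # always uses the full trunk capacity

    pop = [random_alloc() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: delay_cost(a, offered, capacity))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p, q = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(p, q)]   # arithmetic crossover
            i = rng.randrange(n)
            child[i] *= 1 + rng.uniform(-0.1, 0.1)        # small mutation
            scale = capacity / sum(child)                 # renormalize to full capacity
            children.append([c * scale for c in child])
        pop = survivors + children
    return min(pop, key=lambda a: delay_cost(a, offered, capacity))

# Example: three express pipes sharing three DS3 trunks (3 x ~45 Mbit/s)
print(evolve(offered=[10.0, 25.0, 5.0], capacity=45.0 * 3))
```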
High-speed networks must provide constant rate services to handle applications such as telephony, audio and video, multimedia services, and real-time control. An important issue in providing constant rate services is scalability. In this paper we propose a scalable approach, called the reduced control complexity network, to providing constant rate service. Our approach uses an asynchronous network that guarantees lossless transport of constant rate data. First, we consider the problem of providing FIFO-order, lossless, and fault-free transport of a single constant rate connection using asynchronous network elements. We accurately characterize the behavior of the traffic as it traverses the network. We find that the minimum buffer size required to guarantee lossless transport grows nearly linearly with the number of network elements traversed by the connection. We propose an asynchronous switch element in which each connection is allocated a logically separate buffer space. We use a non-work-conserving scheduling policy to guarantee the service requirement of all connections, which simplifies the problem of reasoning about network behavior. We use a static, table-driven scheduler that can easily be implemented to work at high speeds. Finally, we address the problem of generating the schedule table to meet the service rates of the connections.
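The sketch below shows one simple way a static schedule table could be generated from per-connection rates; the credit-based assignment rule, the function name, and the integer-rate assumption are all illustrative, not the table-generation method of the paper.

```python
from fractions import Fraction

def build_schedule(rates, frame_len):
    """Build a static schedule table of 'frame_len' slots in which connection i
    receives a share of slots proportional to rates[i] (integer rates, e.g.
    cells per frame). At run time the switch element serves, in each slot, only
    the connection named in the table; if that connection has no cell queued,
    the slot stays idle, which is what makes the policy non-work-conserving."""
    table = []
    credit = [Fraction(0)] * len(rates)
    total = sum(rates)
    for _ in range(frame_len):
        for i, r in enumerate(rates):
            credit[i] += Fraction(r, total)     # each connection earns its share
        winner = max(range(len(rates)), key=lambda k: credit[k])
        credit[winner] -= 1                     # pay one slot for being scheduled
        table.append(winner)
    return table

# Example: three connections with rates 3:2:1 over a 12-slot frame
print(build_schedule([3, 2, 1], 12))
# -> [0, 1, 0, 2, 1, 0, 0, 1, 0, 2, 1, 0]
```

Because the table is fixed and periodic, the per-slot decision at run time is a single table lookup, which is what allows this style of scheduler to operate at high link speeds.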
In this study, the problem of multiplexing VBR and ABR traffic is considered. The VBR traffic is uncontrollable, while the ABR traffic is given the remaining service capacity in a controllable fashion. While in a given state, the VBR source is assumed to transmit at a fixed rate; the VBR time scale is then defined in terms of the time constant associated with how quickly the VBR source changes its rate. ABR sources are assumed to be at some distance from the multiplexing network node; the network speed defines the network time scale, which determines the distance (in network slots) between the ABR sources and the node. Finally, the minimum tolerable ABR rate (or the speed at which the ABR source can generate data units to be segmented into cells) defines the source time scale. While it is known that increased network transmission speed (a decreased network time scale) reduces the effectiveness of feedback-based adaptive rate control mechanisms because of the increased bandwidth-delay product, the positive impact of the VBR and ABR time scales has not been considered in the past. In this work a tractable analytical model is considered and the impact of the three time scales is investigated.
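The interplay of the time scales can be illustrated with the toy simulation below: an ABR source tracks the capacity left over by a piecewise-constant VBR stream, but only sees feedback that is several slots old. The function name, the uniform VBR rate jumps, and all numbers are assumptions for illustration and are not the paper's analytical model; the point is simply that the mismatch shrinks as the VBR rate changes more slowly relative to the feedback delay.

```python
from collections import deque
import random

def abr_feedback_sim(delay_slots, vbr_change_period, link_rate=1.0,
                     steps=5000, seed=0):
    """Mean mismatch between the ABR rate (driven by stale feedback) and the
    capacity actually available after the VBR stream takes its share."""
    rng = random.Random(seed)
    vbr = rng.uniform(0.2, 0.8)
    pipe = deque([link_rate - vbr] * (delay_slots + 1))  # feedback in flight
    mismatch = 0.0
    for t in range(steps):
        if t % vbr_change_period == 0:        # VBR holds its rate, then jumps
            vbr = rng.uniform(0.2, 0.8)
        available = link_rate - vbr
        pipe.append(available)
        abr = pipe.popleft()                  # source reacts to a stale value
        mismatch += abs(abr - available)
    return mismatch / steps

# Mismatch drops as the VBR time scale grows relative to the 50-slot feedback delay
for period in (10, 100, 1000):
    print(period, round(abr_feedback_sim(delay_slots=50, vbr_change_period=period), 3))
```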
Wireless ATM networks are expected to support a variety of services with different bit-rates and quality-of-service (QoS) requirements. A major challenge for these networks is the design of the multiple access scheme. This paper analyzes both TDMA and CDMA techniques as possibilities for multiple access in wireless ATM networks. Our analysis is based on the capabilities of these two techniques to support various services and meet their QoS requirements. For each technique, several alternative protocols are described and their advantages and disadvantages are discussed. In addition, a hybrid CDMA/TDMA technique is introduced.
We are on the threshold of witnessing an explosion of portable and mobile terminals capable of sending and receiving multimedia traffic. Currently, the standard being worked out by the IEEE 802.11 committee to support wireless connectivity in the local area network appears to be the most promising one. IEEE 802.11 protocols support a scheduling technique and a random access technique operating simultaneously, called the point coordination function (PCF) and the distributed coordination function (DCF), respectively. In this paper, we study the interactions between the PCF and the DCF when voice and asynchronous data traffic need to be supported. We investigate the dimensioning of various parameters and provide general rules of thumb for their default values.
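As a back-of-the-envelope illustration of the kind of dimensioning question involved, the sketch below estimates how many voice stations the PCF could poll inside each contention-free period while leaving the rest of the superframe to DCF data traffic. All parameter values and the function name are assumptions made for illustration; they are not figures from the paper or the standard.

```python
def voice_stations_per_cfp(beacon_interval_ms, cfp_fraction,
                           poll_exchange_us, voice_period_ms):
    """Each voice station needs one poll + voice-frame exchange every
    'voice_period_ms'; the contention-free period (CFP) occupies
    'cfp_fraction' of every beacon interval, the remainder going to DCF."""
    cfp_us = beacon_interval_ms * 1000 * cfp_fraction
    polls_per_cfp = int(cfp_us // poll_exchange_us)
    cfps_per_voice_period = voice_period_ms / beacon_interval_ms
    return int(polls_per_cfp * cfps_per_voice_period)

# e.g. a 20 ms beacon interval, 60% given to the PCF, ~600 us per poll/voice
# exchange, and one voice frame needed every 20 ms -> about 20 voice stations
print(voice_stations_per_cfp(20, 0.6, 600, 20))
```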
In this paper a dual-ring metropolitan area network (MAN) employing spatial reuse is proposed for the interconnection of base stations in distributed-control wireless personal communication networks (PCNs). A preemptive priority mechanism is introduced that can effectively support voice/data personal communication services. The proposed mechanism does not waste channel bandwidth and can preempt the transmission of low-priority packets (such as data packets) for as long as there are active high-priority classes (such as voice traffic) in the system. In this way, the effect of low-priority traffic on high-priority traffic can be minimized and the effect of station location on performance can be significantly reduced. Furthermore, fair sharing of the transmission bandwidth among traffic sources of the same priority can be achieved. Simulation results are used to investigate the effectiveness of the proposed mechanism in the presence of two priority classes, and the system performance under voice/data transmission is discussed.
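The essence of the preemption idea can be sketched as below: a multi-slot data packet in progress is suspended whenever a voice packet is waiting at the station and resumes once the voice backlog is cleared. The slotted model, class, and method names are illustrative assumptions, not the proposed MAC protocol itself, and the sketch ignores the dual-ring topology and spatial reuse.

```python
from collections import deque

class PreemptiveAccessPoint:
    """Toy model of one ring access point with two priority classes."""

    def __init__(self):
        self.voice = deque()          # high-priority packets, one slot each
        self.data = deque()           # entries are [packet_id, remaining_slots]

    def add_voice(self, pkt):
        self.voice.append(pkt)

    def add_data(self, pkt, slots):
        self.data.append([pkt, slots])

    def next_slot(self):
        """Decide what this station puts on the ring in the current slot."""
        if self.voice:                        # voice preempts data immediately
            return ("voice", self.voice.popleft())
        if self.data:
            pkt = self.data[0]
            pkt[1] -= 1                       # send one more segment of the data packet
            if pkt[1] == 0:
                self.data.popleft()
            return ("data", pkt[0])
        return None                           # idle slot passes downstream
```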