Book price comparison. Includes 12,371,195 books and 12 stores.


72 books by Patrick Stakem

Floating Point Computation

Floating Point Computation

Patrick Stakem

Independently Published
2016
Paperback
This book discusses the floating point data format in computation. It is somewhat architecture-neutral, but does restrict the discussion to binary computation in digital computers based on software and microelectronics technology. To understand why we need the complexity of floating point for scientific, engineering, and financial calculations, we need to review number systems, integer calculations in binary and decimal, and other representation systems, as well as the concepts of negative numbers and zero. This work contains a broad list of floating point units and software packages. Both software and hardware approaches are discussed for 8-bit to 64-bit integer machines. The IEEE standard for floating point is discussed, as well as the earlier mainframe-era standards. A glossary and list of references are included.
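To make the format concrete, here is a minimal sketch (plain Python, not code from the book) that unpacks a single-precision IEEE 754 value into the sign, biased-exponent, and fraction fields the book describes:

```python
import struct

def float_bits(x):
    """Return (sign, biased_exponent, fraction) of x as an IEEE 754 single."""
    # Pack the float into 4 big-endian bytes, then reinterpret as an integer.
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31               # 1 bit
    exponent = (bits >> 23) & 0xFF  # 8 bits, biased by 127
    fraction = bits & 0x7FFFFF      # 23 bits of mantissa (implicit leading 1)
    return sign, exponent, fraction

print(float_bits(1.0))   # (0, 127, 0)
print(float_bits(-2.0))  # (1, 128, 0)
```

The exponent of 1.0 comes back as 127, showing the bias the standard applies; the true exponent is the stored value minus 127.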
Fort Cumberland, Global War in the Appalachians: A Resource Guide
In 1755, Fort Cumberland was at the cusp of three empires: the British, the French, and the Iroquois. It was the westernmost outpost of the British Empire in North America. Built at the confluence of Will's Creek and the Potomac by Virginia, North Carolina, and Maryland militia, the fort became untenable after the Braddock defeat, and the western boundary of Empire was pulled back to the safety of Fort Frederick. West of the fort was disputed territory, leading into New France. The Native American peoples wanted both the French and the British to go home. They began to organize into large federations of tribes to better deal with the invaders from across the seas. Fort Cumberland was attacked by Indian forces, but relieved. It saw no action in the Revolutionary War, but served as the staging area for troops deployed under Washington in the Whiskey Rebellion in Western Pennsylvania. This book has an extensive set of references to material relevant to the history, construction, and use of Fort Cumberland. It outlines the historical context of the Fort.
Down the 'crick: the Georges Creek Valley of Western Maryland
This is a work about the Georges Creek Valley in Allegany County, Western Maryland. The Georges Creek Valley is defined by Dan's Mountain to the east and Savage Mountain to the west, part of the Appalachian range. Portions of Savage Mountain form the Eastern Continental Divide, separating watersheds draining to the Ohio River from those draining to the Potomac River. The history of the settlement of the Georges Creek Valley is the history of coal. George Washington was familiar with the area from his various trips in the wilderness. Once populated entirely by Native Americans, the region was settled by the English, with families from Scotland, Wales, and Ireland. Besides coal, a pioneering iron furnace was built at Lonaconing, which drove the introduction of rail transportation in the region. Where Georges Creek meets the Potomac, the C&O Canal was slated to pass by.
Iron Manufacturing in 19th Century Western Maryland

Iron Manufacturing in 19th Century Western Maryland

Patrick Stakem

Independently Published
2017
Paperback
This book expands on previous works with new material, and discusses a specific topic of the Industrial Revolution in Western Maryland: the iron-making industry. Starting around 1837, and ending early in the 20th century, the rich natural resources of the western portion of Maryland were used to produce iron, a necessary building block of the Industrial Revolution. By the 1870's Maryland was 5th in the nation in iron production, and the facility at Mount Savage had rolled the first iron rail in the United States. The facility at Mount Savage, and the earlier one at Lonaconing, were cutting-edge, state-of-the-art high technology research, development, and production centers. Essential patents were issued. Mount Savage was a who's who of industrialization, invention, and technology vital to the nation. In the end, they missed producing the first true steel in the United States, probably by a few months. There were two major iron manufacturing sites in Western Maryland, both in Allegany County. Lonaconing was the first, and served as a model for the later Mount Savage site. Both were blessed with abundant supplies of raw materials. Both were handicapped by being located in the middle of nowhere. They addressed this issue by building transportation systems involving roads and railroads. Lonaconing was not successful in its timing, but Mount Savage was. By the time the railroad from Lonaconing was built, the furnace was out of production, and coal became the major commodity being shipped. Mount Savage not only built the first iron rails produced in the United States, they built a railroad with their rails to meet the B&O railhead at Cumberland. They went on to sell rail to the B&O so that road didn't have to keep importing it from England. Mount Savage went on to be a manufacturer of locomotives, producing maybe a hundred of their sturdy iron workhorses. Lonaconing and Mount Savage both lie along Maryland Route 36, some 14 miles apart.
Personal Robots

Personal Robots

Patrick Stakem

Independently Published
2017
Paperback
Personal robots of the 1980's inspired hopes for the future. This was triggered by the R2D2 robot of the Star Wars series, itself based on the three service droids of the earlier science fiction movie Silent Running. At the same time, personal computers were emerging as affordable and easier to use. The excitement and the technology reached a tipping point. Before this time, robotics mainly meant large hydraulic units that built cars on factory assembly lines. Now it came to mean personal companions. The expectations were limitless. It took, as it always does, longer than we thought. The initial units were termed pc's on pc's - personal computers on push carts - by Nolan Bushnell. Now, people were building robots at home, using whatever level of technology was available. Devices such as the Roomba vacuum cleaner and robotic lawn mowers emerged. But the miniaturization of the compute elements and the sensors was not there yet. Remotely controlled (tele-robotic) Battlebots fought in arenas. It is far easier to get a working robot put together at home now, with most of the pieces available off the shelf, and inexpensive. Mobility platforms, including flight platforms, small embedded computers such as the Raspberry Pi and Arduino, and a full spectrum of small, inexpensive MEMS sensors are widely available. But the pioneering work of at-home robot builders made this all possible. It was an exciting time, and it's getting better.
Microprocessors in Space

Microprocessors in Space

Patrick Stakem

Independently Published
2017
Paperback
This book discusses the use of microprocessors in space missions, from the earliest 4-bit machines to the most current 64-bit implementations. It covers the transition from monolithic processors with extensive glue logic, to IP cores instantiated in FPGA's. It gives the highlights of the microprocessors sent and being sent into space, and the problems of sustaining their operations there. Microprocessors orbit the Earth, sit on other planets, and have left the Solar System for interstellar space. They are the key components for spacecraft autonomy, and for collecting, storing, and returning the volumes of information that we receive from off-planet sources. Spacecraft microprocessors are a special subset of embedded computers. Most spacecraft include tens to hundreds of processors, doing tasks such as attitude and orbit control, power monitoring and control, telemetry formatting and command handling, data storage management, and instrument control. Without these microprocessors, the amount that we know about our neighboring planets and the intervening space would be vastly limited. Early flight computers were custom designs, but cost and performance issues have driven the development of variants of commercial chips. Aerospace applications are usually classic embedded applications. Space applications are rather limited in number, and, until recently, almost exclusively meant NASA, ESA, or some other government agency. Flight systems electronics usually require MIL-STD-883B, Class-S, radiation-hard (total dose), SEU-tolerant parts. Specific issues of radiation tolerance are discussed. Class-S parts are specifically for space-flight use. Because of the need for qualifying the parts for space, the state of the art in spaceborne electronics usually lags that of terrestrial commercial parts by about 5 years.
The new Cubesat concept brings the idea of a personal satellite within the reach of university programs, and even some individuals. Processors used in aerospace applications, like any semiconductor-based electronics, need to meet stringent selection, screening, packaging, testing, and characterization requirements because of the unique environment. Most aerospace electronics, and the whole understanding of radiation effects, were driven by the cold war defense buildup from the 1960's through the 1980's. This era was characterized by the function-at-any-cost, melt-before-fail design philosophy. In the 1990's, the byword was COTS -- the use of Commercial, Off-The-Shelf products. Thus, instead of custom, proprietary processor architectures, we are now seeing the production of specialized products derived from commercial lines. In an era of decreasing markets, the costs of entry, and of maintaining a presence in this tiny market niche, are prohibitively high for many companies.
Mainframes, Computing on Big Iron

Mainframes, Computing on Big Iron

Patrick Stakem

Independently Published
2016
Paperback
This book covers the topic of mainframe computers, Big Iron, the room-sized units that dominated and defined computing in the 1950's and 1960's. The coverage is of efforts mainly in the United States, although significant efforts in the U.K., Germany, and elsewhere were also involved. Coverage is given for IBM and the 7 dwarfs: Burroughs, Control Data, General Electric, Honeywell, NCR, RCA, and Univac. There is also coverage of machines from Bendix, DEC, Philco, Sperry-Rand, and Sylvania. The predecessor architectures of Charles Babbage and his Difference Engine and Analytical Engine are discussed, as well as the mostly one-off predecessors Colossus, Eniac, Edvac, Whirlwind, ASCC, Sage, and Illiac-IV. How did we get where we are? Initially computers were big, unique, heavy mainframes with a dedicated priesthood of programmers and system engineers to keep them running. They were enshrined in specially air conditioned rooms with raised floors and access control. They ran one job at a time, taking punched cards as input, and producing reams of wide green-striped paper output. Data were collected on reels of magnetic tape, or large trays of punched cards. Access to these very expensive resources was necessarily limited. Computing was hard, but the advantages were obvious - we could collect and crunch data like never before, and compute things that would have worn out our slide rules. The book is focused mostly on computers that the author had experience with, although it does cover some of the one-off predecessors that led to the mainframe industry of the 1960's. Thus, this book is not comprehensive. It probably missed your favorite. Not every machine from every manufacturer is discussed. Computers were built for one of two purposes: business accounting, or scientific calculations. There was also research in the fledgling area of Computer Science, an area not yet well defined.
The computers used peripherals from the Unit Record equipment, designed for business data processing. Data were typed on cards, sorted, and printed mechanically. This was a major improvement over the manual method. Herman Hollerith figured this out, and improved the processing of the U.S. Census in 1890. This took 1 year, as opposed to 8 years for the previous census. Hollerith set up a company based in Georgetown (part of the District of Columbia) on 29th Street to manufacture punched card equipment. There is a plaque on the building which housed the Tabulating Machine Company, later known as IBM. At the same time, business and science were both using mechanical calculators to handle computations. These were little changed over a hundred years or so. The technology base changed from mechanical to relay to tube, and things got faster. The arithmetic system changed from decimal to binary, because the switching elements in electronics were two-state. The next step was to put a "computer" between the card reader and the printer, and actually crunch the data. Then, a better idea evolved. Most of the time, the "big iron" was not computing, it was waiting. So, if we could devise a way to profitably use the idle time, we would increase the efficiency of the facility. This led to the concept of time-sharing. There was a control program whose job it was to juggle the resources so that a useful program was always running. This came along about the time that remote terminals were hooked to the mainframe, to allow access from multiple, different locations. In a sense, the computer facility was virtualized; each user saw his very own machine (well, for limited periods of time, anyway).
If the overhead of switching among users was not too great, the scheme worked. This evolved into a "client-server" architecture, in which the remote clients had some compute and storage capability of their own, but still relied on the big server. And, in the background, something else amazing was happening. Mainframes were built from relays and vacuum tubes, magnetic cores...
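The time-sharing idea described above - a control program handing out slices of the machine so that a useful program is always running - can be sketched as a toy round-robin scheduler (a simplified illustration, not code from the book):

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: dict of name -> remaining work units. Returns completion order.

    Each job runs for one fixed time slice (quantum) in turn; unfinished
    jobs go to the back of the queue, just as a time-sharing monitor did.
    """
    queue = deque(jobs.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum                 # run this job for one slice
        if remaining > 0:
            queue.append((name, remaining))  # not done: back of the line
        else:
            finished.append(name)
    return finished

print(round_robin({"payroll": 3, "census": 1, "physics": 2}, quantum=1))
# ['census', 'physics', 'payroll']
```

Short jobs finish early while long jobs keep cycling, which is why interactive users each appeared to have their own machine.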
Multicore Computer Architectures

Multicore Computer Architectures

Patrick Stakem

Independently Published
2017
Paperback
This book gives an overview of multicore architectures, how they derive from multiprocessors, and illustrates the new applications they enable. A multicore processor has multiple CPU and memory elements in a single chip. Being on a single chip reduces the communications times between elements, and allows for multiprocessing. Advances in microelectronics fabrication techniques led to the implementation of multicores for desktop and server machines around 2007. It was becoming increasingly difficult to increase clock speeds, so the obvious approach was to turn to parallelism. Currently, in this market, quad-core, 6-core, and 8-core chips are available. Besides additional CPU's, additional on-chip memory must be added, usually in the form of memory caches, to keep the processors fed with instructions and data. There is no inherent difference between multicore architectures and multiprocessing with single-core chips, except in the speed of communications. The standard interconnect technologies used in multiprocessing and clustering are applied to inter-core communications. Multicore technology is mainstream, and enables a vast application space.
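As a simple illustration of putting multiple cores to work (not from the book), the sketch below splits a sum across worker processes using Python's standard library; the function names and chunking scheme are ours:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum the integers in [lo, hi) - the work done by one core."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Split sum(range(n)) into chunks, one per worker process."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same result as sum(range(1_000_000)), but the chunks can run
    # on separate cores concurrently.
    print(parallel_sum(1_000_000))
```

For work this trivial, process startup overhead dominates; the pattern pays off when each chunk carries real computation.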
Architecture of Massively Parallel Microprocessor Systems
By the 1990's, it was becoming increasingly obvious that Massively Parallel Microprocessor-based Systems (MPMS) were becoming a significant new force in the marketplace, as well as a design approach of great importance. There is no one good source that discusses the architecture of MPMS. No one text gives the overall view of MPMS as a design philosophy, as a market force, and as a technology driver. Thus, I took on the thankless task of putting together this set of information. It is important to realize that in a rapidly moving, trendy area such as MPMS, by the time information is published, it is probably obsolete. By the time a book is published, it is probably only of historical significance. This book is intended for the hardware or software practitioner to use as an introduction to the subject. It assumes that the reader knows something about the internals of computer systems, architecture, and instruction execution. It would be relevant for an advanced undergraduate or graduate level course in computer design or architecture. It discusses the chip level of MPMS, and looks at the design trade-offs at the systems level. This document covers the field of MPMS, a subset of the field of Massively Parallel Computers. Although this variety of computer has been around for a long time, it only started to make an impact on the computer industry in the 1990's, as an alternative to supercomputers. The goal of this document is to give the reader an introductory look at the fundamentals of MPMS design, to allow the reader to understand the trade-offs, limitations, speed, cost, complexity, and architectures. The reader will be shown the history and the trends of the technology of this rapidly moving field. To achieve these goals, we'll review the basics and background of the technology, to understand where the trade-offs are. We'll then look at real-world design examples to see how the trade-offs were made.
It is essential to realize that in MPMS technology, as in many cutting-edge endeavors, there are no wrong answers in the marketplace, but a multitude of right ones. The wrong answers either never make it to the market, or don't last long there. This is not a source for designers, because the level of detail presented is not sufficient. However, it will be useful for engineers and engineering managers who must make use of this technology in systems. They need to know the capabilities and limitations of this important field, to be able to apply the technology in their particular domains of expertise. MPMS is a rapidly evolving field. Software has not begun to catch up with the processors. Good software tools to develop, debug, and maintain MPMS are just emerging. MPMS is becoming mainstream. In many cases we'll see decisions made that were not influenced totally by the technological issues, but mainly by marketing considerations. To the design engineer, this is heresy, but in the cold, cruel world, this is economic survival. Some companies are the pioneers at the "bleeding edge" of technology development; others prefer to hold back and address mature markets. As Nolan Bushnell says, "The pioneers are the ones with the arrows in them."
Lonaconing Residency Iron Technology & the Railroad

Lonaconing Residency Iron Technology & the Railroad

Patrick Stakem

Independently Published
2017
Paperback
In the early 19th century, a 14-foot thick seam of bituminous coal, referred to historically as "The Big Vein", was discovered in the Georges Creek Valley in Western Maryland. This coal region would become famous for its clean-burning, low-sulfur coal, which made it ideal for powering ocean steamers, river boats, locomotives, steam mills, and machine shops. By 1850, almost 30 coal companies would be mining the Georges Creek coal, producing over 60 million tons of coal between 1854 and 1891, with 5,000 men working underground. In the census of 1860, over 90% of the miners could read and write. The Town of Lonaconing was located centrally in the Georges Creek Valley, between Frostburg at the north and Westernport at the south. Both towns at the extremes had rail junctions. There were plans to extend the C&O Canal through Westernport. Lonaconing became the largest among the dozen or so towns along the Georges Creek, serving as a manufacturing center, a home for companies and miners, and a major retail center. At one time, residents had their choice of three rail passenger services serving the town. When it was founded, Lonaconing was a model of Industrial Feudalism. Initially the workmen came from Wales, and, until recently, church services were conducted in Welsh. This is the story of the extraordinary men and companies who put together a small industrial empire in the middle of the woods in Western Maryland. They dug iron ore and coal, built a railroad, and formed towns and organizations that exist to this day. The iron furnace is preserved in a city park, and the Silk Mill survives. Lonaconing now hosts a branch of the county library for the Georges Creek region. The author's ancestors came from Ireland during the Civil War to mine coal and live in Lonaconing.
Cubesat Engineering

Cubesat Engineering

Patrick Stakem

Independently Published
2017
Paperback
This book is an introduction to Cubesats, those popular and relatively inexpensive modular spacecraft that are upending the aerospace world. They have been built and deployed by colleges and universities around the world, as well as high schools and elementary schools, even individuals. This is because Cubesats are modular, standard, and relatively low cost. The expensive part is the launch, but that is addressed by launch fixtures compatible with essentially every launch on the planet, although you may not have much of a choice in the orbit. At Capitol Technology University, where the author teaches, there is an ongoing Cubesat Project that will receive a free launch from NASA in late 2017, based on an open competition. Student Cubesat Projects are usually open source, may be world-wide in scope, and collaborative. At the same time, professionals in aerospace have not failed to consider the Cubesat architecture as an alternative for small-sat missions. This can reduce costs by one or two orders of magnitude. There are Cubesats on the International Space Station, and these can be returned to Earth on a resupply mission. There is a large "cottage industry" developed around the Cubesat architecture, addressing "professional" projects with space-rated hardware. NASA itself has developed Cubesat hardware (Pi-Sat) and software (cFS). Cubesats are modular, built to a standard, and mostly open-source. The downside is that approximately 50% of Cubesat missions fail. We hope to point out some approaches to improve this. If you define and implement your own Cubesat mission, or work as a team member on a larger project, this book presents and points to information that will be valuable. Even if you never get your own Cubesat to orbit, you can be a valuable addition to a Cubesat or larger aerospace project. Shortly, two NASA Cubesats will be heading to Mars. The unique Cubesat architecture introduces a new paradigm for exploring the many elements of our Solar System.
Best of luck on your mission.
Interplanetary Cubesats

Interplanetary Cubesats

Patrick Stakem

Independently Published
2017
Paperback
This book discusses the application of Cubesats in the exploration of our solar system. Including the Sun, the eight primary planets and Pluto, many moons, the asteroid belt, comets, and the ring systems of the four gas giants, there is a lot to explore. Although the planets (and Pluto) have been visited by spacecraft, Earth's moon has been somewhat explored, and many of the other planets' moons have been imaged, there is a lot of "filling in the blanks" to be done. Here we examine the application of swarms of small independent spacecraft to take on this role. Some of the enabling technologies for cooperating swarms are examined. Almost every Cubesat sent into space to this point has gone into Earth orbit, and is either there still, or has reentered the atmosphere. It's a big solar system, and there's a lot we don't know about it. Additionally, all Cubesats have launched as ride-along payloads. There are two approaches for using Cubesats for exploration away from Earth. One uses the demonstrated technology of solar sailing, and missions using this approach are being implemented. Another uses a large carrier mothership, loaded with hundreds of Cubesats. This is sent to a destination, achieves orbit, and dispenses the Cubesats, providing a communications link with Earth. JPL is postulating this type of mission in the 2020's. They baseline a dormant cruise duration of 100-2200 days, followed by a Cubesat life of 1-7 days. Prior to that, the most likely scenario is a traditional exploration mission with some tag-along Cubesats. The next step beyond that is to make a swarm of Cubesats the primary payload.
Cubesat Operations: How to fly a Cubesat

Cubesat Operations: How to fly a Cubesat

Patrick Stakem

Independently Published
2017
Paperback
This book covers the topic of Cubesat control centers. We'll take a look at the historical development of satellite control centers, and explain how new technology has vastly simplified the approach. The book will suggest several open source options, not only for the control center, but for the entire ground segment. We'll discuss the various functions that a Cubesat control center performs, and where to find software packages to implement those functions. As technology advances, we have a better basis for Cubesat control centers, as well as cheaper yet more capable hardware, and better and more available software. With the proliferation of inexpensive Cubesat projects, colleges and universities, high schools, and even individuals are getting their Cubesats launched. They all need control centers. For lower cost missions, these can be shared facilities. Communicating with and operating a spacecraft in orbit or on another planet is challenging, but is an extension of operating any remote system. We have communications and bandwidth issues, speed-of-light limitations, and complexity. Remote debugging is always a challenge. The satellite control center is part of what is termed the ground segment, which also includes the communication uplink and downlink. The control center generates uplink data (commands) to the spacecraft, and receives, processes, and archives downlink (telemetry) data. The spacecraft is usually referred to as the space segment. The spacecraft usually consists of a "bus", the engineering section, and the payload, either a science instrument package or a communications package. Satellite busses can be "off-the-shelf," leading to economies of scale. The concept of "Control Center as a Service" will be introduced, showing how the control center function can be implemented in the cloud. Issues of control center security will be discussed.
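As a tiny illustration of what telemetry processing involves, the sketch below decodes a hypothetical 8-byte frame layout - a 2-byte ID, a 2-byte sequence count, and a 4-byte float reading. The layout is invented for illustration, not any real standard such as CCSDS:

```python
import struct

def decode_frame(frame):
    """Decode a hypothetical 8-byte big-endian telemetry frame.

    Layout (invented for this sketch): 2-byte packet ID, 2-byte
    sequence counter, 4-byte IEEE 754 float sensor value.
    """
    packet_id, seq, value = struct.unpack(">HHf", frame)
    return {"id": packet_id, "seq": seq, "value": value}

# Build a sample frame as the spacecraft side would, then decode it
# as the control center would on the downlink.
frame = struct.pack(">HHf", 0x01AB, 42, 3.5)
print(decode_frame(frame))
```

A real ground segment layers framing, error correction, and archiving around this kind of unpack step, but the core of telemetry processing is exactly this: bytes in, engineering values out.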
Cubesat Constellations, Clusters, and Swarms

Cubesat Constellations, Clusters, and Swarms

Patrick Stakem

Independently Published
2017
Paperback
This book discusses the application of Cubesat clusters, constellations, and swarms in the exploration of the solar system. This includes the Sun, the 8 primary planets and Pluto, many moons, the asteroid belt, comets, and the ring systems of the four gas giants. There is a lot to explore. U.S. spacecraft have been to all of the planets in the solar system. Although the planets (and Pluto) have been visited by spacecraft, Earth's moon has been somewhat explored, and many of the other planets' moons have been imaged, there is a lot of "filling in the blanks" to be done. Here we explore the application of groups of small independent spacecraft to take on this role. Some of the enabling technology for cooperating swarms is examined. Missions to Mars and beyond are lengthy and expensive. We need to ensure that we are delivering payloads that will function and return new data. The tradeoff is between one or two large traditional spacecraft and a new concept: a large number of nearly identical small spacecraft, operating cooperatively. Necessarily, the Technology Readiness Level of this approach must be proven in Earth orbit before the resources are allocated to extend this approach to distant locations. Decades of time and hundreds of millions of dollars are at stake. The big picture is, Cubesats are not just secondary payloads anymore. They may be small, but a lot of them together can accomplish a lot. We'll discuss the technologies to make this happen.
Graphics Processing Units, an overview.

Graphics Processing Units, an overview.

Patrick Stakem

Independently Published
2017
Paperback
This book discusses the topic of Graphics Processing Units, which are specialized units found in most modern computer architectures. Although we can do operations on graphics data in regular arithmetic logic units (ALU's), the hardware approach is much faster. Just as with floating point arithmetic, specialized units speed up the process. We will discuss the applications for GPU's, the data formats, and the operations they perform. These specialized units are the backbone of video, and to a large extent audio, processing in modern computer architectures. The GPU is a specialized computer architecture, focused on image data manipulation for graphics displays and picture processing. It has applications far beyond that. The normal ALU, Arithmetic-Logic Unit, in a computer does the four basic math operations, and logical operations, on integers. These integers are usually 32 or 64 bits at this time. The GPU greatly enhances the speed of 3D graphics. GPU's find application in arcade machines, game consoles, PC's, tablets, phones, car dashboards, TV's, and entertainment systems. First, we'll look at the CPU, and the operations it performs on data. The CPU is fairly flexible in what it does, because of software. You can implement a GPU in software, but it won't be very fast. There's a similar co-processor, the floating point unit (FPU), that operates on specially formatted data. You can implement the floating point unit in software - actually, you can probably download the library - but it won't be as fast as using a dedicated piece of hardware. We'll first discuss integer data formats, and operations on those data. The "L" part of ALU says we can also do logical (not math) operations on data. GPU's can process integer and floating point data much faster than a CPU, if the data are presented in the right format. They don't have all the general purpose features of ALU's, but they can contain 100 cores or more. This has led to the use of large numbers of GPU's as the basis for the current generation of supercomputers.
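The many-core, data-parallel style described above can be sketched in plain Python. This is an illustration of the programming model only; real GPU code would use CUDA or a similar toolkit, where the per-element "kernel" runs on thousands of threads at once:

```python
def saxpy_kernel(i, a, x, y, out):
    # The body a single GPU thread would execute for one index i:
    # the classic SAXPY operation, out = a*x + y, elementwise.
    out[i] = a * x[i] + y[i]

def saxpy(a, x, y):
    """Apply the kernel across every element.

    Here the loop runs serially; on a GPU each iteration would be
    a separate hardware thread running in parallel.
    """
    out = [0.0] * len(x)
    for i in range(len(x)):
        saxpy_kernel(i, a, x, y, out)
    return out

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))
# [12.0, 24.0, 36.0]
```

The key point is that the kernel body has no dependence between indices, which is exactly the property that lets a GPU spread the work across its many cores.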
Visiting the NASA Centers: and Locations of Historic Rockets & Spacecraft
This book discusses the various NASA Centers across the United States, and presents what can be seen at each. Each has a specialty, and each has a visitor center worth the trip. The guide also mentions nearby facilities of interest, such as the Smithsonian's two Air-and-Space Museums. We will cover all of the manned (U.S.) flight hardware still in existence, including some related aircraft such as the X-15 and the Shuttle Carrier Aircraft. Not all of these projects saw flight, but some boilerplate or test models survive. There is more hardware rusting away outside storage buildings at NASA Centers or at aerospace contractors' facilities. During work on this book, two additional manned capsules were located. The second section of the book is organized by rockets and spacecraft, and tells you where exhibits of those are located. One pilot earned his Astronaut rating in the winged X-15 vehicle, and the locations of these vehicles and their carrier aircraft are listed. We also show the location of the Shuttle Carrier Aircraft, the moon rocks, where to view a launch, and other space-related objects.