Computer data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.[1]: 15–16 

1 GiB of SDRAM mounted in a computer. An example of primary storage.

15 GB PATA hard disk drive (HDD) from 1999. When connected to a computer it serves as secondary storage.

160 GB SDLT tape cartridge, an example of off-line storage. When used within a robotic tape library, it is classified as tertiary storage instead.

Read/write DVD drive with its media cradle extended

The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy,[1]: 468–473  which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally, the fast volatile technologies (which lose data when off power) are referred to as "memory", while slower persistent technologies are referred to as "storage".

Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: the control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data.

Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data.[1]: 20  Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.

A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 0 or 1. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character.

Data are encoded by assigning a bit pattern to each character, digit, or multimedia object. Many standards exist for encoding (e.g. character encodings like ASCII, image encodings like JPEG, and video encodings like MPEG-4).
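As an illustrative sketch (Python, standard library only; the function names are hypothetical), the snippet below shows a character's bit pattern under ASCII encoding and estimates storage at one byte per character:

```python
# Bit pattern of a character under ASCII encoding (8 bits per byte).
def to_bits(ch: str) -> str:
    return format(ord(ch), "08b")

# Storage needed for a text stored at one byte per character.
def storage_bytes(text: str) -> int:
    return len(text.encode("ascii"))

print(to_bits("A"))             # 01000001
print(storage_bytes("Hamlet"))  # 6
```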

By adding bits to each encoded unit, redundancy allows the computer both to detect errors in coded data and to correct them based on mathematical algorithms. Errors generally occur with low probability: due to random bit flips; due to "physical bit fatigue", the loss of a physical bit's ability to maintain a distinguishable value (0 or 1); or due to errors in inter- or intra-computer communication. A random bit flip (e.g. due to random radiation) is typically corrected upon detection. A malfunctioning physical bit, or group of bits (the specific defective bit is not always known; the group definition depends on the specific storage device), is typically automatically fenced out (taken out of use by the device) and replaced with another, functioning group in the device, to which the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection; when an error is detected, the operation is retried.
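The CRC idea can be sketched with Python's standard library: flipping a single bit changes the checksum, so the corruption is detectable (this is a sketch of detection only, not correction):

```python
import zlib

data = bytearray(b"computer data storage")
checksum = zlib.crc32(data)   # checksum stored or transmitted alongside the data

data[0] ^= 0x01               # simulate a random single-bit flip
corrupted = zlib.crc32(data) != checksum
print(corrupted)              # True: the error is detected, so the operation can be retried
```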

Data compression methods make it possible in many cases (such as in a database) to represent a string of bits by a shorter bit string ("compress") and to reconstruct the original string ("decompress") when needed. This uses substantially less storage (by tens of percent) for many types of data, at the cost of more computation (compressing and decompressing when needed). The trade-off between the storage cost saved and the cost of the related computation, plus possible delays in data availability, is analyzed before deciding whether to keep certain data compressed or not.
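A minimal sketch of the trade-off, using Python's zlib: redundant data shrinks substantially when compressed, and decompression reconstructs it bit-for-bit at the cost of extra computation:

```python
import zlib

original = b"ABABABAB" * 1000            # highly redundant data compresses well
compressed = zlib.compress(original)

print(len(original) > len(compressed))          # True: substantially less storage used
print(zlib.decompress(compressed) == original)  # True: lossless reconstruction
```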

For security reasons, certain types of data (e.g. credit card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots.


Various forms of storage, divided according to their distance from the central processing unit. The fundamental components of a general-purpose computer are an arithmetic and logic unit, control circuitry, storage space, and input/output devices. Technology and capacity as in common home computers around 2005.

Generally, the lower a storage is in the hierarchy, the lower its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary, and off-line storage is also guided by cost per bit.

In contemporary usage, memory is usually semiconductor storage read-write random-access memory, typically DRAM (dynamic RAM) or other forms of fast but temporary storage. Storage consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down).[2]

Historically, memory has been called core memory, main memory, real storage, or internal memory. Meanwhile, non-volatile storage devices have been referred to as secondary storage, external memory, or auxiliary/peripheral storage.

Primary storage

Primary storage (also known as main memory, internal memory, or prime memory), often referred to simply as memory, is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.

Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic-core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive.

This led to modern random-access memory (RAM). It is small and light, but quite expensive. The particular types of RAM used for primary storage are volatile, meaning that they lose the information when not powered. Besides storing opened programs, RAM serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it is not needed by running software.[3] Spare memory can be utilized as a RAM drive for temporary high-speed data storage.

As shown in the diagram, traditionally there are two more sub-layers of the primary storage, besides main large-capacity RAM:

  • Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions instruct the arithmetic logic unit to perform various calculations or other operations on this data (or with the help of it). Registers are the fastest of all forms of computer data storage.
  • Processor cache is an intermediate stage between ultra-fast registers and much slower main memory. It was introduced solely to improve the performance of computers. Most actively used information in the main memory is just duplicated in the cache memory, which is faster, but of much lesser capacity. On the other hand, main memory is much slower, but has a much greater storage capacity than processor registers. Multi-level hierarchical cache setup is also commonly used—primary cache being smallest, fastest and located inside the processor; secondary cache being somewhat larger and slower.

Main memory is directly or indirectly connected to the central processing unit via a memory bus. It is actually two buses (not on the diagram): an address bus and a data bus. The CPU first sends a number through the address bus, called the memory address, that indicates the desired location of data. Then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between the CPU and RAM that recalculates the actual memory address, for example to provide an abstraction of virtual memory or for other tasks.
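A toy model of this arrangement (the class and method names are illustrative, not any real hardware interface): the address selects a cell, and the value travels on the data bus:

```python
class Memory:
    """Toy model of RAM: an array of cells selected by a numeric address."""
    def __init__(self, size: int):
        self.cells = [0] * size

    def read(self, address: int) -> int:
        # The CPU drives the address bus, then reads the value off the data bus.
        return self.cells[address]

    def write(self, address: int, value: int) -> None:
        # The CPU drives both the address bus and the data bus.
        self.cells[address] = value

ram = Memory(1024)
ram.write(0x10, 42)
print(ram.read(0x10))  # 42
```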

As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).

Many types of "ROM" are not literally read only, as updates to them are possible; however, updating is slow, and memory must be erased in large portions before it can be rewritten. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM; rather, they use large capacities of secondary storage, which is non-volatile as well, and not as costly.

Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage.[4]

Secondary storage


A hard disk drive (HDD) with protective cover removed

Secondary storage (also known as external memory or auxiliary storage) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile (retaining data when its power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive.

In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. The access time per byte for HDDs or SSDs is typically measured in milliseconds (thousandths of a second), while the access time per byte for primary storage is measured in nanoseconds (billionths of a second). Thus, secondary storage is significantly slower than primary storage. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM disks.

Once the disk read/write head on an HDD reaches the proper placement and the data of interest, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based upon sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel in order to increase the bandwidth between primary and secondary memory.[5]
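The block-access pattern can be sketched in Python: write a test file, then read it back in large contiguous chunks rather than a byte at a time (the sizes here are arbitrary):

```python
import os
import tempfile

# Create a throwaway 1 MiB file.
size = 1024 * 1024
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(size))
    path = f.name

# Read it back in 64 KiB contiguous blocks, the pattern that
# amortizes seek time and rotational latency on a disk.
total = 0
with open(path, "rb") as f:
    while chunk := f.read(64 * 1024):
        total += len(chunk)

os.remove(path)
print(total == size)  # True: all data read back in 16 block transfers
```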

Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information.

Most computer operating systems use the concept of virtual memory, allowing utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded.
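A least-recently-used (LRU) policy is one common way to pick which page to move out; the sketch below (illustrative, not any real operating-system interface) models it with an ordered dictionary:

```python
from collections import OrderedDict

class PageTable:
    """Toy LRU paging: evict the least-recently-used page when frames run out."""
    def __init__(self, frames: int):
        self.frames = frames
        self.pages = OrderedDict()      # page number -> resident page

    def access(self, page: int) -> bool:
        """Return True on a page fault (page had to be brought in from swap)."""
        if page in self.pages:
            self.pages.move_to_end(page)    # mark as recently used
            return False
        if len(self.pages) >= self.frames:
            self.pages.popitem(last=False)  # evict the least-recently-used page
        self.pages[page] = object()
        return True

pt = PageTable(frames=2)
print([pt.access(p) for p in [1, 2, 1, 3, 2]])  # [True, True, False, True, True]
```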

Tertiary storage


A large tape library, with tape cartridges placed on shelves in the front, and a robotic arm moving in the back. The visible height of the library is about 180 cm.

Tertiary storage or tertiary memory[6] is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes.

When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.

Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is:[7]

  • Online storage is immediately available for I/O.
  • Nearline storage is not immediately available, but can be made online quickly without human intervention.
  • Offline storage is not immediately available, and requires some human intervention to become online.

For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage.

Off-line storage

Off-line storage is computer data storage on a medium or device that is not under the control of a processing unit.[8] The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction.

Off-line storage is used to transfer information, since the detached medium can easily be physically transported. Additionally, it is useful in case of disaster: if, for example, a fire destroys the original data, a medium in a remote location will be unaffected, enabling disaster recovery. Off-line storage increases general information security, since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage.

In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are most popular, and, to a much lesser extent, removable hard disk drives. In enterprise uses, magnetic tape is predominant. Older examples include floppy disks, Zip disks, and punched cards.


A 1 GiB module of laptop DDR2 RAM

Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance.

| Characteristic | Hard disk drive | Optical disc | Flash memory | Random-access memory | Linear tape-open |
|---|---|---|---|---|---|
| Technology | Magnetic disk | Laser beam | Semiconductor | Semiconductor | Magnetic tape |
| Volatility | No | No | No | Volatile | No |
| Random access | Yes | Yes | Yes | Yes | No |
| Latency (access time) | ~15 ms (swift) | ~150 ms (moderate) | None (instant) | None (instant) | No random access (very slow) |
| Controller | Internal | External | Internal | Internal | External |
| Failure with imminent data loss | Head crash | — | Circuitry | — | — |
| Error detection | Diagnostics (S.M.A.R.T.) | Error rate measurement | Indicated by downspiking transfer rates | (Short-term storage) | Unknown |
| Price per space | Low | Low | High | Very high | Very low (but expensive drives) |
| Price per unit | Moderate | Low | Moderate | High | Moderate (but expensive drives) |
| Main application | Mid-term archival, server, workstation storage expansion | Long-term archival, hard copy distribution | Portable electronics; operating system | Real-time | Long-term archival |


Volatility

Non-volatile memory retains the stored information even if not constantly supplied with electric power. It is suitable for long-term storage of information. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies are volatile ones, although that is not a universal rule. Since primary storage is required to be very fast, it predominantly uses volatile memory.

Dynamic random-access memory is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed, otherwise it would vanish. Static random-access memory is a form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied; it loses its content when the power supply is lost.

An uninterruptible power supply (UPS) can be used to give a computer a brief window of time to move information from primary volatile storage into non-volatile storage before the batteries are exhausted. Some systems, for example EMC Symmetrix, have integrated batteries that maintain volatile storage for several minutes.


Mutability

  • Read/write storage, or mutable storage — Allows information to be overwritten at any time. A computer without some amount of read/write storage for primary storage purposes would be useless for many tasks. Modern computers typically use read/write storage also for secondary storage.
  • Slow write, fast read storage — Read/write storage which allows information to be overwritten multiple times, but with the write operation being much slower than the read operation. Examples include CD-RW and SSD.
  • Write once storage — Write once read many (WORM) storage allows the information to be written only once at some point after manufacture. Examples include semiconductor programmable read-only memory and CD-R.
  • Read only storage — Retains the information stored at the time of manufacture. Examples include mask ROM ICs and CD-ROM.


Accessibility

  • Random access — Any location in storage can be accessed at any moment in approximately the same amount of time. This characteristic is well suited to primary and secondary storage. Most semiconductor memories and disk drives provide random access, though only flash memory supports random access without latency, as no mechanical parts need to be moved.
  • Sequential access — Pieces of information are accessed in serial order, one after the other; therefore the time to access a particular piece of information depends upon which piece of information was last accessed. This characteristic is typical of off-line storage.


Addressability

  • Location-addressable — Each individually accessible unit of information in storage is selected with its numerical memory address. In modern computers, location-addressable storage is usually limited to primary storage, accessed internally by computer programs, since location-addressability is very efficient, but burdensome for humans.
  • File addressable — Information is divided into files of variable length, and a particular file is selected with human-readable directory and file names. The underlying device is still location-addressable, but the operating system provides the file system abstraction to make the operation more understandable. In modern computers, secondary, tertiary and off-line storage use file systems.
  • Content-addressable — Each individually accessible unit of information is selected on the basis of (part of) the contents stored there. Content-addressable storage can be implemented in software (a computer program) or hardware (a computer device), with hardware being the faster but more expensive option. Hardware content-addressable memory is often used in a computer's CPU cache.
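The content-addressable idea can be sketched in Python: the address of a block is derived from a hash of its contents, so identical content always maps to the same address (a toy software model; real content-addressable hardware works differently):

```python
import hashlib

class ContentStore:
    """Toy content-addressable store: data is located by a hash of its contents."""
    def __init__(self):
        self.blocks = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()  # the address derives from the content
        self.blocks[key] = data
        return key

    def get(self, key: str) -> bytes:
        return self.blocks[key]

store = ContentStore()
addr = store.put(b"hello")
print(store.get(addr))              # b'hello'
print(store.put(b"hello") == addr)  # True: same content, same address
```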


Capacity

  • Raw capacity — The total amount of stored information that a storage device or medium can hold. It is expressed as a quantity of bits or bytes (e.g. 10.4 megabytes).
  • Memory storage density — The compactness of stored information. It is the storage capacity of a medium divided by a unit of length, area or volume (e.g. 1.2 megabytes per square inch).


Performance

  • Latency — The time it takes to access a particular location in storage. The relevant unit of measurement is typically the nanosecond for primary storage, the millisecond for secondary storage, and the second for tertiary storage. It may make sense to separate read latency and write latency (especially for non-volatile memory), and, in the case of sequential access storage, minimum, maximum and average latency.
  • Throughput — The rate at which information can be read from or written to the storage. In computer data storage, throughput is usually expressed in terms of megabytes per second (MB/s), though bit rate may also be used. As with latency, read rate and write rate may need to be differentiated. Accessing media sequentially, as opposed to randomly, typically yields maximum throughput.
  • Granularity — The size of the largest "chunk" of data that can be efficiently accessed as a single unit, e.g. without introducing additional latency.
  • Reliability — The probability of spontaneous bit value change under various conditions, or the overall failure rate.

Utilities such as hdparm and sar can be used to measure I/O performance in Linux.
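As a portable sketch of the same kind of measurement (timings vary by machine, and a freshly written file is likely served from the page cache, so the figure overstates raw disk speed):

```python
import os
import tempfile
import time

# Write a 4 MiB test file, then time a sequential read of it.
size = 4 * 1024 * 1024
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(size))
    path = f.name

start = time.perf_counter()
with open(path, "rb") as f:
    while f.read(1024 * 1024):
        pass
elapsed = time.perf_counter() - start
os.remove(path)

throughput = size / elapsed / 1e6  # MB/s; likely cache-served, so very fast
print(f"{throughput:.0f} MB/s")
```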

Energy use

  • Storage devices that reduce fan usage, automatically shut down during inactivity, and low-power hard drives can reduce energy consumption by 90 percent.[9][10]
  • 2.5-inch hard disk drives often consume less power than larger ones.[11][12] Low capacity solid-state drives have no moving parts and consume less power than hard disks.[13][14][15] Also, memory may use more power than hard disks.[15] Large caches, which are used to avoid hitting the memory wall, may also consume a large amount of power.


Full disk encryption, volume and virtual disk encryption, and/or file/folder encryption is readily available for most storage devices.[16]

Hardware memory encryption is available in Intel architecture, supporting Total Memory Encryption (TME) and page-granular memory encryption with multiple keys (MKTME),[17][18] and in the SPARC M7 generation since October 2015.[19]

Vulnerability and reliability


S.M.A.R.T. software warning suggests impending hard drive failure

Distinct types of data storage have different points of failure and various methods of predictive failure analysis.

Vulnerabilities that can instantly lead to total loss include head crashes on mechanical hard drives and failure of electronic components on flash storage.

Error detection


Error rate measurement on a DVD+R. The minor errors are correctable and within a healthy range.

Impending failure on hard disk drives is estimable using S.M.A.R.T. diagnostic data that includes the hours of operation and the count of spin-ups, though its reliability is disputed.[20]

Flash storage may experience downspiking transfer rates as a result of accumulating errors, which the flash memory controller attempts to correct.

The health of optical media can be determined by measuring correctable minor errors, of which high counts signify deteriorating and/or low-quality media. Too many consecutive minor errors can lead to data corruption. Not all vendors and models of optical drives support error scanning.[21]

As of 2011, the most commonly used data storage media are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies, such as all-flash arrays (AFAs), are proposed for development.


Semiconductor memory uses semiconductor-based integrated circuit (IC) chips to store information. Data are typically stored in metal–oxide–semiconductor (MOS) memory cells. A semiconductor memory chip may contain millions of memory cells, consisting of tiny MOS field-effect transistors (MOSFETs) and/or MOS capacitors. Both volatile and non-volatile forms of semiconductor memory exist, the former using standard MOSFETs and the latter using floating-gate MOSFETs.

In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor random-access memory (RAM), particularly dynamic random-access memory (DRAM). Since the turn of the century, a type of non-volatile floating-gate semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers that are designed for them.

As early as 2006, notebook and desktop computer manufacturers started using flash-based solid-state drives (SSDs) as default configuration options for the secondary storage either in addition to or instead of the more traditional HDD.[22][23][24][25][26]


Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads, which may contain one or more recording transducers. A read/write head covers only a part of the surface, so the head or medium or both must be moved relative to one another in order to access data. In modern computers, magnetic storage takes these forms:

  • Magnetic disk;
    • Floppy disk, used for off-line storage;
    • Hard disk drive, used for secondary storage.
  • Magnetic tape, used for tertiary and off-line storage;
  • Carousel memory (magnetic rolls).

In early computers, magnetic storage was also used as:

  • Primary storage in the form of magnetic memory, such as core memory, core rope memory, thin-film memory and/or twistor memory;
  • Tertiary (e.g. NCR CRAM) or off-line storage in the form of magnetic cards;
  • Magnetic tape was then often used for secondary storage.

Magnetic storage does not have a definite limit of rewriting cycles like flash storage and re-writeable optical media, as altering magnetic fields causes no physical wear. Rather, its life span is limited by its mechanical parts.[27][28]


Optical storage, the typical optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile. The deformities may be permanent (read only media), formed once (write once media) or reversible (recordable or read/write media). The following forms are currently in common use:[29]

  • CD, CD-ROM, DVD, BD-ROM: Read only storage, used for mass distribution of digital information (music, video, computer programs);
  • CD-R, DVD-R, DVD+R, BD-R: Write once storage, used for tertiary and off-line storage;
  • CD-RW, DVD-RW, DVD+RW, DVD-RAM, BD-RE: Slow write, fast read storage, used for tertiary and off-line storage;
  • Ultra Density Optical or UDO is similar in capacity to BD-R or BD-RE and is slow write, fast read storage used for tertiary and off-line storage.

Magneto-optical disc storage is optical disc storage where the magnetic state on a ferromagnetic surface stores information. The information is read optically and written by combining magnetic and optical methods. Magneto-optical disc storage is non-volatile, sequential access, slow write, fast read storage used for tertiary and off-line storage.

3D optical data storage has also been proposed.

Light induced magnetization melting in magnetic photoconductors has also been proposed for high-speed low-energy consumption magneto-optical storage.[30]


Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed. Information was recorded by punching holes into the paper or cardboard medium and was read mechanically (or later optically) to determine whether a particular location on the medium was solid or contained a hole. Barcodes make it possible for objects that are sold or transported to have some computer-readable information securely attached.

Relatively small amounts of digital data (compared to other digital data storage) may be backed up on paper as a matrix barcode for very long-term storage, as the longevity of paper typically exceeds even magnetic data storage.[31][32]

Other storage media or substrates

  • Vacuum-tube memory – A Williams tube used a cathode-ray tube, and a Selectron tube used a large vacuum tube, to store information. These primary storage devices were short-lived in the market, since the Williams tube was unreliable and the Selectron tube was expensive.
  • Electro-acoustic memory – Delay-line memory used sound waves in a substance such as mercury to store information. Delay-line memory was dynamic volatile, cycle-sequential read/write storage, and was used for primary storage.
  • Optical tape is a medium for optical storage, generally consisting of a long and narrow strip of plastic onto which patterns can be written and from which the patterns can be read back. It shares some technologies with cinema film stock and optical discs, but is compatible with neither. The motivation behind developing this technology was the possibility of far greater storage capacities than either magnetic tape or optical discs offer.
  • Phase-change memory uses different mechanical phases of phase-change material to store information in an X–Y addressable matrix, and reads the information by observing the varying electrical resistance of the material. Phase-change memory would be non-volatile, random-access read/write storage, and might be used for primary, secondary and off-line storage. Most rewritable and many write-once optical disks already use phase-change material to store information.
  • Holographic data storage stores information optically inside crystals or photopolymers. Holographic storage can utilize the whole volume of the storage medium, unlike optical disc storage, which is limited to a small number of surface layers. Holographic storage would be non-volatile, sequential-access, and either write-once or read/write storage. It might be used for secondary and off-line storage. See Holographic Versatile Disc (HVD).
  • Molecular memory stores information in polymers that can store electric charge. Molecular memory might be especially suited for primary storage. The theoretical storage capacity of molecular memory is 10 terabits per square inch (16 Gbit/mm2).[33]
  • Magnetic photoconductors store magnetic information, which can be modified by low-light illumination.[30]
  • DNA stores information in DNA nucleotides. It was first done in 2012, when researchers achieved a ratio of 1.28 petabytes per gram of DNA. In March 2017, scientists reported that a new algorithm called a DNA fountain achieved 85% of the theoretical limit, at 215 petabytes per gram of DNA.[34][35][36][37]
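
The DNA storage figures cited above imply some simple arithmetic; the following sketch only rearranges the cited numbers (1.28 PB/g in 2012, 215 PB/g at 85% of the theoretical limit in 2017) and is only as good as those values:

```python
# Rough arithmetic implied by the cited DNA storage figures (illustrative only).
pb_per_gram_2012 = 1.28    # ratio achieved in 2012
pb_per_gram_2017 = 215.0   # DNA fountain result (2017)
fraction_of_limit = 0.85   # stated 85% of the theoretical limit

# The implied theoretical limit is about 253 PB per gram,
# and the 2017 result is roughly a 168-fold improvement over 2012.
theoretical_limit = pb_per_gram_2017 / fraction_of_limit
improvement = pb_per_gram_2017 / pb_per_gram_2012
print(f"limit ≈ {theoretical_limit:.0f} PB/g, improvement ≈ {improvement:.0f}x")
```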

While the malfunction of a group of bits may be resolved by error detection and correction mechanisms (see above), the malfunction of an entire storage device requires different solutions. The following solutions are commonly used and are valid for most storage devices:

  • Device mirroring (replication) – A common solution is to constantly maintain an identical copy of the device's content on another device (typically of the same type). The downside is that this doubles the storage required, and both devices (copies) must be updated simultaneously, with some overhead and possibly some delays. The upside is the possibility of concurrent reads of the same data by two independent processes, which increases performance. When one of the replicated devices is detected to be defective, the other copy remains operational and is used to generate a new copy on another device (usually drawn from a pool of operational stand-by devices kept for this purpose).
  • Redundant array of independent disks (RAID) – This method generalizes device mirroring by allowing one device in a group of n devices to fail and be replaced, with its content restored (device mirroring is RAID with n=2). RAID groups of n=5 or n=6 are common. n>2 saves storage compared with n=2, at the cost of more processing both during regular operation (often with reduced performance) and during defective device replacement.
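
The recovery mechanism behind parity-based RAID can be sketched with XOR: the parity block is the XOR of the data blocks, so any single missing block is the XOR of the survivors. This is a minimal illustration of the principle, not the layout of any particular RAID implementation:

```python
from functools import reduce

def parity(blocks):
    """XOR of equal-length byte blocks, as used for RAID-5-style parity."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data devices plus one parity device (n = 4 in the text's terms).
data = [b"\x01\x02", b"\x10\x20", b"\x0f\xf0"]
p = parity(data)

# If one device fails, XORing the surviving data blocks with the parity
# block reproduces the lost content exactly.
recovered = parity([data[0], data[2], p])
assert recovered == data[1]
```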

Device mirroring and typical RAID are designed to handle a single device failure in the RAID group of devices. However, if a second failure occurs before the RAID group is completely repaired from the first failure, data can be lost. The probability of a single failure is typically small. Thus the probability of two failures in the same RAID group in close time proximity is much smaller (approximately the probability squared, i.e., multiplied by itself). If a database cannot tolerate even that smaller probability of data loss, then the RAID group itself is replicated (mirrored). In many cases such mirroring is done geographically remotely, in a different storage array, so that recovery from disasters is also possible (see disaster recovery above).
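
The "probability squared" argument can be made concrete with an assumed figure (the failure probability below is hypothetical, chosen only to show the scale of the effect):

```python
# If a single device fails during a repair window with probability p,
# the chance of a second, independent failure hitting the same RAID
# group before repair completes is on the order of p squared.
p = 1e-4                 # hypothetical per-window failure probability
p_double = p * p         # "approximately the probability squared"

# A 1-in-10,000 event becomes roughly a 1-in-100,000,000 event.
print(f"single: {p:.0e}, double: {p_double:.0e}")
```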

Network connectivity

Secondary or tertiary storage may connect to a computer via computer networks. This does not pertain to primary storage, which is shared between multiple processors to a much lesser degree.

  • Direct-attached storage (DAS) is traditional mass storage that does not use any network. It is still the most popular approach. This retronym was coined recently, together with NAS and SAN.
  • Network-attached storage (NAS) is mass storage attached to a computer which another computer can access at file level over a local area network, a private wide area network, or in the case of online file storage, over the Internet. NAS is commonly associated with the NFS and CIFS/SMB protocols.
  • Storage area network (SAN) is a specialized network that provides other computers with storage capacity. The crucial difference between NAS and SAN is that NAS presents and manages file systems to client computers, while SAN provides access at the block-addressing (raw) level, leaving it to the attaching systems to manage data or file systems within the provided capacity. SAN is commonly associated with Fibre Channel networks.

Robotic storage

Large quantities of individual magnetic tapes, and optical or magneto-optical discs, may be stored in robotic tertiary storage devices. In the tape storage field these are known as tape libraries, and in the optical storage field as optical jukeboxes or, by analogy, optical disk libraries. The smallest forms of either technology, containing just one drive device, are referred to as autoloaders or autochangers.

Robotic-access storage devices may have a number of slots, each holding individual media, and usually one or more picking robots that traverse the slots and load media to built-in drives. The arrangement of the slots and picking devices affects performance. Important characteristics of such storage are possible expansion options: adding slots, modules, drives, robots. Tape libraries may have from 10 to more than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, up to 1,000 slots.
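
A back-of-the-envelope estimate shows how slot counts translate into near-line capacity; the slot count is from the range given above, and the cartridge size reuses the 160 GB SDLT figure mentioned earlier (real libraries vary widely):

```python
# Rough near-line capacity of a mid-sized tape library (illustrative).
slots = 10_000              # within the 10 to >100,000 range given in the text
tb_per_cartridge = 0.16     # 160 GB SDLT cartridge, as pictured earlier

capacity_pb = slots * tb_per_cartridge / 1000
print(f"{capacity_pb:.1f} PB")   # about 1.6 PB of near-line storage
```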

Robotic storage is used for backups and for high-capacity archives in the imaging, medical, and video industries. Hierarchical storage management is the best-known archiving strategy: long-unused files are automatically migrated from fast hard disk storage to libraries or jukeboxes. If the files are needed, they are retrieved back to disk.

  • Aperture (computer memory)
  • Dynamic random-access memory (DRAM)
  • Memory latency
  • Mass storage
  • Memory cell (disambiguation)
  • Memory management
    • Memory leak
    • Virtual memory
  • Memory protection
  • Page address register
  • Stable storage
  • Static random-access memory (SRAM)
  • Cloud storage
  • Hybrid cloud storage
  • Data deduplication
  • Data proliferation
  • Data storage tag used for capturing research data
  • Disk utility
  • File system
    • List of file formats
    • Global filesystem
  • Flash memory
  • Geoplexing
  • Information repository
  • Noise-predictive maximum-likelihood detection
  • Object(-based) storage
  • Removable media
  • Solid-state drive
  • Spindle
  • Virtual tape library
  • Wait state
  • Write buffer
  • Write protection
  • Storage Networking World
  • Storage World Conference

  This article incorporates public domain material from the General Services Administration document: "Federal Standard 1037C".

  1. ^ a b c Patterson, David A.; Hennessy, John L. (2005). Computer organization and design: The hardware/software interface (3rd ed.). Amsterdam: Morgan Kaufmann Publishers. ISBN 1-55860-604-1. OCLC 56213091.
  2. ^ Storage as defined in Microsoft Computing Dictionary, 4th Ed. (c)1999 or in The Authoritative Dictionary of IEEE Standard Terms, 7th Ed., (c) 2000.
  3. ^ "Documentation for /proc/sys/vm/ — The Linux Kernel documentation".
  4. ^ "Primary storage or storage hardware (shows usage of term "primary storage" meaning "hard disk storage")". Archived from the original on 10 September 2008. Retrieved 18 June 2011.
  5. ^ J. S. Vitter (2008). Algorithms and data structures for external memory (PDF). Series on foundations and trends in theoretical computer science. Hanover, MA: now Publishers. ISBN 978-1-60198-106-6. Archived (PDF) from the original on 4 January 2011.
  6. ^ "A thesis on tertiary storage" (PDF). Archived (PDF) from the original on 27 September 2007. Retrieved 18 June 2011.
  7. ^ Pearson, Tony (2010). "Correct use of the term nearline". IBM developer-works, inside system storage. Archived from the original on 24 November 2015. Retrieved 16 August 2015.
  8. ^ National Communications System (1996). "Federal Standard 1037C – Telecommunications: Glossary of Telecommunication Terms". General Services Administration. FS-1037C. Archived from the original on 2 March 2009. Retrieved 8 October 2007. {{cite journal}}: Cite journal requires |journal= (help) See also article Federal standard 1037C.
  9. ^ "Energy savings calculator". Archived from the original on 21 December 2008.
  10. ^ "How much of the [re]drive is actually eco-friendly?". Simple tech. Archived from the original on 5 August 2008.
  11. ^ Mike Chin (8 March 2004). "IS the Silent PC Future 2.5-inches wide?". Archived from the original on 20 July 2008. Retrieved 2 August 2008.
  12. ^ Mike Chin (18 September 2002). "Recommended hard drives". Archived from the original on 5 September 2008. Retrieved 2 August 2008.
  13. ^ "Super Talent's 2.5" IDE flash hard drive". The tech report. 12 July 2006. p. 13. Archived from the original on 26 January 2012. Retrieved 18 June 2011.
  14. ^ "Power consumption – Tom's hardware : Conventional hard drive obsoletism? Samsung's 32 GB flash drive previewed". 20 September 2006. Retrieved 18 June 2011.
  15. ^ a b Aleksey Meyev (23 April 2008). "SSD, i-RAM and traditional hard disk drives". X-bit labs. Archived from the original on 18 December 2008.
  16. ^ "Guide to storage encryption technologies for end user devices" (PDF). U.S. national institute of standards and technology. November 2007.
  17. ^ "Encryption specs" (PDF). Retrieved 28 December 2019.
  18. ^ "A proposed API for full-memory encryption". Retrieved 28 December 2019.
  19. ^ "Introduction to SPARC M7 and silicon secured memory (SSM)". Archived from the original on 21 January 2019. Retrieved 28 December 2019.
  20. ^ "What S.M.A.R.T. hard disk errors actually tell us". Backblaze. 6 October 2016.
  21. ^ "QPxTool - check the quality".
  22. ^ "New Samsung notebook replaces hard drive with flash". Extreme tech. 23 May 2006. Archived from the original on 30 December 2010. Retrieved 18 June 2011.
  23. ^ "Toshiba tosses hat into notebook flash storage ring". Archived from the original on 18 March 2012. Retrieved 18 June 2011.
  24. ^ "Mac Pro – Storage and RAID options for your Mac Pro". Apple. 27 July 2006. Archived from the original on 6 June 2013. Retrieved 18 June 2011.
  25. ^ "MacBook Air – The best of iPad meets the best of Mac". Apple. Archived from the original on 27 May 2013. Retrieved 18 June 2011.
  26. ^ "MacBook Air replaces the standard notebook hard disk for solid state flash storage". 15 November 2010. Archived from the original on 23 August 2011. Retrieved 18 June 2011.
  27. ^ "Comparing SSD and HDD endurance in the age of QLC SSDs" (PDF). Micron technology.
  28. ^ "Comparing SSD and HDD - A comprehensive comparison of the storage drives".
  29. ^ "The DVD FAQ - A comprehensive reference of DVD technologies". Archived from the original on 22 August 2009.
  30. ^ a b Náfrádi, Bálint (24 November 2016). "Optically switched magnetism in photovoltaic perovskite CH3NH3(Mn:Pb)I3". Nature Communications. 7: 13406. arXiv:1611.08205. Bibcode:2016NatCo...713406N. doi:10.1038/ncomms13406. PMC 5123013. PMID 27882917.
  31. ^ "A paper-based backup solution (not as stupid as it sounds)". 14 August 2012.
  32. ^ Sterling, Bruce (16 August 2012). "PaperBack paper backup". Wired.
  33. ^ "New method of self-assembling nanoscale elements could transform data storage industry". 1 March 2009. Archived from the original on 1 March 2009. Retrieved 18 June 2011.
  34. ^ Yong, Ed. "This speck of DNA contains a movie, a computer virus, and an Amazon gift card". The Atlantic. Archived from the original on 3 March 2017. Retrieved 3 March 2017.
  35. ^ "Researchers store computer operating system and short movie on DNA". Archived from the original on 2 March 2017. Retrieved 3 March 2017.
  36. ^ "DNA could store all of the world's data in one room". Science Magazine. 2 March 2017. Archived from the original on 2 March 2017. Retrieved 3 March 2017.
  37. ^ Erlich, Yaniv; Zielinski, Dina (2 March 2017). "DNA Fountain enables a robust and efficient storage architecture". Science. 355 (6328): 950–954. Bibcode:2017Sci...355..950E. doi:10.1126/science.aaj2038. PMID 28254941. S2CID 13470340.

  • Goda, K.; Kitsuregawa, M. (2012). "The history of storage systems". Proceedings of the IEEE. 100: 1433–1440. doi:10.1109/JPROC.2012.2189787.
  • Memory & storage, Computer history museum

3D optical data storage

3D optical data storage is any form of optical data storage in which information can be recorded or read with three-dimensional resolution (as opposed to the two-dimensional resolution afforded, for example, by CD).[1][2]

This innovation has the potential to provide petabyte-level mass storage on DVD-sized discs (120 mm). Data recording and readback are achieved by focusing lasers within the medium. However, because of the volumetric nature of the data structure, the laser light must travel through other data points before it reaches the point where reading or recording is desired. Therefore, some kind of nonlinearity is required to ensure that these other data points do not interfere with the addressing of the desired point.

No commercial product based on 3D optical data storage has yet arrived on the mass market, although several companies are actively developing the technology and claim that it may become available 'soon'.

Current optical data storage media, such as the CD and DVD, store data as a series of reflective marks on an internal surface of a disc. To increase storage capacity, it is possible for discs to hold two or even more of these data layers, but their number is severely limited, since the addressing laser interacts with every layer that it passes through on the way to and from the addressed layer. These interactions cause noise that limits the technology to approximately 10 layers. 3D optical data storage methods circumvent this issue by using addressing methods in which only the specifically addressed voxel (volumetric pixel) interacts substantially with the addressing light. This necessarily involves nonlinear data reading and writing methods, in particular nonlinear optics.

3D optical data storage is related to (and competes with) holographic data storage. Traditional examples of holographic storage do not address in the third dimension, and are therefore not strictly "3D", but more recently 3D holographic storage has been realized by the use of microholograms. Layer-selection multilayer technology (where a multilayer disc has layers that can be individually activated e.g. electrically) is also closely related.


Schematic representation of a cross-section through a 3D optical storage disc (yellow) along a data track (orange marks). Four data layers are seen, with the laser currently addressing the third from the top. The laser passes through the first two layers and only interacts with the third, since here the light is at a high intensity.

As an example, a prototypical 3D optical data storage system may use a disc that looks much like a transparent DVD. The disc contains many layers of information, each at a different depth in the media and each consisting of a DVD-like spiral track. In order to record information on the disc a laser is brought to a focus at a particular depth in the media that corresponds to a particular information layer. When the laser is turned on it causes a photochemical change in the media. As the disc spins and the read/write head moves along a radius, the layer is written just as a DVD-R is written. The depth of the focus may then be changed and another entirely different layer of information written. The distance between layers may be 5 to 100 micrometers, allowing >100 layers of information to be stored on a single disc.
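
The layer-count claim follows directly from the stated spacing; a sketch with an assumed recording depth (the 1.2 mm figure is borrowed from the disc thickness mentioned later in the article, and the spacing is taken from the 5–100 micrometer range above):

```python
# Rough layer-count estimate for a 3D optical disc (assumed figures).
usable_depth_um = 1200    # e.g., a 1.2 mm recording depth
layer_spacing_um = 10     # within the stated 5-100 micrometer range

layers = usable_depth_um // layer_spacing_um
print(layers)             # 120 layers, consistent with ">100 layers"
```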

In order to read the data back (in this example), a similar procedure is used except this time instead of causing a photochemical change in the media the laser causes fluorescence. This is achieved e.g. by using a lower laser power or a different laser wavelength. The intensity or wavelength of the fluorescence is different depending on whether the media has been written at that point, and so by measuring the emitted light the data is read.

The size of individual chromophore molecules or photoactive color centers is much smaller than the size of the laser focus (which is determined by the diffraction limit). The light therefore addresses a large number (possibly even 109) of molecules at any one time, so the medium acts as a homogeneous mass rather than a matrix structured by the positions of chromophores.

The origins of the field date back to the 1950s, when Yehuda Hirshberg developed the photochromic spiropyrans and suggested their use in data storage.[3] In the 1970s, Valerii Barachevskii demonstrated[4] that this photochromism could be produced by two–photon excitation, and finally at the end of the 1980s Peter M. Rentzepis showed that this could lead to three-dimensional data storage.[5] Most of the developed systems are based to some extent on the original ideas of Rentzepis. A wide range of physical phenomena for data reading and recording have been investigated, large numbers of chemical systems for the medium have been developed and evaluated, and extensive work has been carried out in solving the problems associated with the optical systems required for the reading and recording of data. Currently, several groups remain working on solutions with various levels of development and interest in commercialization.

Data recording in a 3D optical storage medium requires that a change take place in the medium upon excitation. This change is generally a photochemical reaction of some sort, although other possibilities exist. Chemical reactions that have been investigated include photoisomerizations, photodecompositions and photobleaching, and polymerization initiation. Most investigated have been photochromic compounds, which include azobenzenes, spiropyrans, stilbenes, fulgides, and diarylethenes. If the photochemical change is reversible, then rewritable data storage may be achieved, at least in principle. Also, MultiLevel Recording, where data is written in "grayscale" rather than as "on" and "off" signals, is technically feasible.

Writing by nonresonant multiphoton absorption

Although there are many nonlinear optical phenomena, only multiphoton absorption is capable of injecting into the media the significant energy required to electronically excite molecular species and cause chemical reactions. Two-photon absorption is by far the strongest multiphoton absorbance, but it is still a very weak phenomenon, leading to low media sensitivity. Therefore, much research has been directed at providing chromophores with high two-photon absorption cross-sections.[6]

Writing by two-photon absorption can be achieved by focusing the writing laser on the point where the photochemical writing process is required. The wavelength of the writing laser is chosen such that it is not linearly absorbed by the medium, and therefore it does not interact with the medium except at the focal point. At the focal point two-photon absorption becomes significant, because it is a nonlinear process dependent on the square of the laser fluence.
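
The quadratic intensity dependence is what confines writing to the focal point; a toy model in arbitrary units (the rate constant is an assumption, not a measured value):

```python
# Two-photon absorption scales with the square of the laser intensity,
# so the writing rate is sharply concentrated where the beam is focused
# (illustrative model, arbitrary units; k is an assumed rate constant).
def two_photon_rate(intensity, k=1.0):
    return k * intensity ** 2

# Doubling the intensity quadruples the rate at the focus...
assert two_photon_rate(2.0) == 4 * two_photon_rate(1.0)
# ...while at a tenth of the intensity (out of focus) the rate
# drops a hundredfold, leaving other layers effectively untouched.
print(two_photon_rate(1.0), two_photon_rate(0.1))
```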

Writing by two-photon absorption can also be achieved by the action of two lasers in coincidence. This method is typically used to write information at many points in parallel. One laser passes through the media, defining a line or plane. The second laser is then directed at the points on that line or plane where writing is desired. The coincidence of the lasers at these points excites two-photon absorption, leading to writing photochemistry.

Writing by sequential multiphoton absorption

Another approach to improving media sensitivity has been to employ resonant two-photon absorption (also known as "1+1" or "sequential" two–photon absorbance). Nonresonant two-photon absorption (as is generally used) is weak since in order for excitation to take place, the two exciting photons must arrive at the chromophore at almost exactly the same time. This is because the chromophore is unable to interact with a single photon alone. However, if the chromophore has an energy level corresponding to the (weak) absorption of one photon then this may be used as a stepping stone, allowing more freedom in the arrival time of photons and therefore a much higher sensitivity. However, this approach results in a loss of nonlinearity compared to nonresonant two–photon absorbance (since each two-photon absorption step is essentially linear), and therefore risks compromising the 3D resolution of the system.


In microholography, focused beams of light are used to record submicrometre-sized holograms in a photorefractive material, usually by the use of collinear beams. The writing process may use the same kinds of media that are used in other types of holographic data storage, and may use two–photon processes to form the holograms.

Data recording during manufacturing

Data may also be created in the manufacturing of the media, as is the case with most optical disc formats for commercial data distribution. In this case, the user can not write to the disc – it is a ROM format. Data may be written by a nonlinear optical method, but in this case the use of very high power lasers is acceptable so media sensitivity becomes less of an issue.

The fabrication of discs containing data molded or printed into their 3D structure has also been demonstrated. For example, a disc containing data in 3D may be constructed by sandwiching together a large number of wafer-thin discs, each of which is molded or printed with a single layer of information. The resulting ROM disc can then be read using a 3D reading method.

Other approaches to writing

Other techniques for writing data in three-dimensions have also been examined, including:

Persistent spectral hole burning (PSHB), which also allows the possibility of spectral multiplexing to increase data density. However, PSHB media currently requires extremely low temperatures to be maintained in order to avoid data loss.

Void formation, where microscopic bubbles are introduced into a media by high intensity laser irradiation.[7]

Chromophore poling, where the laser-induced reorientation of chromophores in the media structure leads to readable changes.[8]

The reading of data from 3D optical memories has been carried out in many different ways. While some of these rely on the nonlinearity of the light-matter interaction to obtain 3D resolution, others use methods that spatially filter the media's linear response. Reading methods include:

Two photon absorption (resulting in either absorption or fluorescence). This method is essentially two-photon microscopy.

Linear excitation of fluorescence with confocal detection. This method is essentially confocal laser scanning microscopy. It offers excitation with much lower laser powers than does two-photon absorbance, but has some potential problems because the addressing light interacts with many other data points in addition to the one being addressed.

Measurement of small differences in the refractive index between the two data states. This method usually employs a phase-contrast microscope or confocal reflection microscope. No absorption of light is necessary, so there is no risk of damaging data while reading, but the required refractive index mismatch in the disc may limit the thickness (i.e., number of data layers) that the media can reach due to the accumulated random wavefront errors that destroy the focused spot quality.

Second-harmonic generation has been demonstrated as a method to read data written into a poled polymer matrix.[9]

Optical coherence tomography has also been demonstrated as a parallel reading method.[10]

The active part of 3D optical storage media is usually an organic polymer either doped or grafted with the photochemically active species. Alternatively, crystalline and sol-gel materials have been used.

Media form factor

Media for 3D optical data storage have been suggested in several form factors: disk, card and crystal.

Disc media offer a progression from CD/DVD, and allow reading and writing to be carried out by the familiar spinning-disc method.

Credit-card form factor media are attractive from the point of view of portability and convenience, but would be of lower capacity than a disc.

Several science fiction writers have suggested small solids that store massive amounts of information, and at least in principle this could be achieved with 5D optical data storage.

Media manufacturing

The simplest method of manufacturing – the molding of a disk in one piece – is a possibility for some systems. A more complex method of media manufacturing is for the media to be constructed layer by layer. This is required if the data is to be physically created during manufacture. However, layer-by-layer construction need not mean the sandwiching of many layers together. Another alternative is to create the medium in a form analogous to a roll of adhesive tape.[11]

A drive designed to read and write to 3D optical data storage media may have a lot in common with CD/DVD drives, particularly if the form factor and data structure of the media is similar to that of CD or DVD. However, there are a number of notable differences that must be taken into account when designing such a drive.


Particularly when two-photon absorption is utilized, high-powered lasers may be required that can be bulky, difficult to cool, and pose safety concerns. Existing optical drives utilize continuous wave diode lasers operating at 780 nm, 658 nm, or 405 nm. 3D optical storage drives may require solid-state lasers or pulsed lasers, and several examples use wavelengths easily available by these technologies, such as 532 nm (green). These larger lasers can be difficult to integrate into the read/write head of the optical drive.

Variable spherical aberration correction

Because the system must address different depths in the medium, and at different depths the spherical aberration induced in the wavefront is different, a method is required to dynamically account for these differences. Many possible methods exist that include optical elements that swap in and out of the optical path, moving elements, adaptive optics, and immersion lenses.

Optical system

In many examples of 3D optical data storage systems, several wavelengths (colors) of light are used (e.g. reading laser, writing laser, signal; sometimes even two lasers are required just for writing). Therefore, as well as coping with the high laser power and variable spherical aberration, the optical system must combine and separate these different colors of light as required.


In DVD drives, the signal produced from the disc is a reflection of the addressing laser beam, and is therefore very intense. For 3D optical storage however, the signal must be generated within the tiny volume that is addressed, and therefore it is much weaker than the laser light. In addition, fluorescence is radiated in all directions from the addressed point, so special light collection optics must be used to maximize the signal.
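
The collection problem can be quantified with a simple solid-angle estimate; this is a standard textbook formula rather than anything from the article, and the numerical aperture and refractive index values are assumptions for illustration:

```python
import math

# Fraction of isotropically emitted fluorescence captured by collection
# optics of numerical aperture NA in a medium of refractive index n
# (solid-angle sketch; NA and n are assumed values).
def collection_fraction(na, n=1.5):
    theta = math.asin(na / n)           # half-angle of the collection cone
    return (1.0 - math.cos(theta)) / 2  # fraction of the full sphere

# Even a high-NA objective captures only a small fraction of the
# signal, which is why special light collection optics matter.
print(f"{collection_fraction(1.0):.1%}")
```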

Data tracking

Once they are identified along the z-axis, individual layers of DVD-like data may be accessed and tracked in similar ways to DVDs. The possibility of using parallel or page-based addressing has also been demonstrated. This allows much faster data transfer rates, but requires the additional complexity of spatial light modulators, signal imaging, more powerful lasers, and more complex data handling.

Despite the highly attractive nature of 3D optical data storage, the development of commercial products has taken a significant length of time. This results from limited financial backing in the field, as well as technical issues, including:

Destructive reading. Since both the reading and the writing of data are carried out with laser beams, there is a potential for the reading process to cause a small amount of writing. In this case, the repeated reading of data may eventually serve to erase it (this also happens in phase change materials used in some DVDs). This issue has been addressed by many approaches, such as the use of different absorption bands for each process (reading and writing), or the use of a reading method that does not involve the absorption of energy.

Thermodynamic stability. Many chemical reactions that appear not to take place in fact happen very slowly. In addition, many reactions that appear to have happened can slowly reverse themselves. Since most 3D media are based on chemical reactions, there is therefore a risk that either the unwritten points will slowly become written or that the written points will slowly revert to being unwritten. This issue is particularly serious for the spiropyrans, but extensive research was conducted to find more stable chromophores for 3D memories.

Media sensitivity. Two-photon absorption is a weak phenomenon, and therefore high-power lasers are usually required to produce it. Researchers typically use Ti-sapphire lasers or Nd:YAG lasers to achieve excitation, but these instruments are not suitable for use in consumer products.

Much of the development of 3D optical data storage has been carried out in universities. The groups that have provided valuable input include:

  • Peter M. Rentzepis was the originator of this field, and has recently developed materials free from destructive readout.
  • Watt W. Webb codeveloped the two-photon microscope in Bell Labs, and showed 3D recording on photorefractive media.
  • Masahiro Irie developed the diarylethene family of photochromic materials.[12]
  • Yoshimasa Kawata, Satoshi Kawata, and Zouheir Sekkat have developed and worked on several optical data manipulation systems, in particular involving poled polymer systems.[13]
  • Kevin C Belfield is developing photochemical systems for 3D optical data storage by the use of resonance energy transfer between molecules, and also develops high two–photon cross-section materials.[14]
  • Seth Marder performed much of the early work developing logical approaches to the molecular design of high two–photon cross-section chromophores.
  • Tom Milster has made many contributions to the theory of 3D optical data storage.[15]
  • Robert McLeod has examined the use of microholograms for 3D optical data storage.
  • Min Gu has examined confocal readout and methods for its enhancement.[16][17]

In addition to the academic research, several companies have been set up to commercialize 3D optical data storage and some large corporations have also shown an interest in the technology. However, it is not yet clear whether the technology will succeed in the market in the presence of competition from other quarters such as hard drives, flash storage, and holographic storage.


Examples of 3D optical data storage media. Top row – written Call/Recall media; Mempile media. Middle row – FMD; D-Data DMD and drive. Bottom row – Landauer media; Microholas media in action.

  • Call/Recall was founded in 1987 on the basis of Peter Rentzepis' research. Using two–photon recording (at 25 Mbit/s with 6.5 ps, 7 nJ, 532 nm pulses), one–photon readout (with 635 nm), and a high NA (1.0) immersion lens, they have stored 1 TB as 200 layers in a 1.2 mm thick disk.[18] They aim to improve capacity to >5 TB and data rates to up to 250 Mbit/s within a year, by developing new materials as well as high-powered pulsed blue laser diodes.
  • Mempile are developing a commercial system with the name TeraDisc. In March 2007, they demonstrated the recording and readback of 100 layers of information on a 0.6 mm thick disc, as well as low crosstalk, high sensitivity, and thermodynamic stability.[19] They intend to release a red-laser 0.6-1.0 TB consumer product in 2010, and have a roadmap to a 5 TB blue-laser product.[20]
  • Constellation 3D developed the Fluorescent Multilayer Disc at the end of the 1990s, which was a ROM disk, manufactured layer by layer. The company failed in 2002, but the intellectual property (IP) was acquired by D-Data Inc.,[21] who are attempting to introduce it as the Digital Multilayer Disk (DMD).
  • Storex Technologies has been set up to develop 3D media based on fluorescent photosensitive glasses and glass-ceramic materials. The technology derives from the patents of the Romanian scientist Eugen Pavel, who is also the founder and CEO of the company. At the ODS2010 conference, results were presented regarding the readout of a Petabyte Optical Disc by two non-fluorescence methods.
  • Landauer Inc. are developing media based on resonant two-photon absorption in a sapphire single-crystal substrate. In May 2007, they showed the recording of 20 layers of data using 2 nJ of laser energy (405 nm) for each mark. The reading rate is limited to 10 Mbit/s by the fluorescence lifetime.[22]
  • Colossal Storage aims to develop a 3D holographic optical storage technology based on photon-induced electric-field poling, using a far-UV laser to obtain large improvements over current data capacities and transfer rates, but has not yet presented any experimental research or feasibility study.
  • Microholas operates out of the University of Berlin, under the leadership of Prof Susanna Orlic, and has achieved the recording of up to 75 layers of microholographic data, separated by 4.5 micrometres, suggesting a data density of 10 GB per layer.[23][24]
  • 3DCD Technology Pty. Ltd. is a university spin-off set up to develop 3D optical storage technology based on materials identified by Daniel Day and Min Gu.[25]
  • Several large technology companies such as Fuji, Ricoh, and Matsushita have applied for patents on two-photon-responsive materials for applications including 3D optical data storage; however, they have not given any indication that they are developing full data storage solutions.
  • Dual layer
  • 5D optical data storage
  • List of emerging technologies
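The layered-recording figures reported above imply some simple derived quantities. A minimal back-of-envelope sketch (assuming 1 TB = 1000 GB; the derived per-layer capacity, layer pitch, and totals are plain arithmetic on the reported numbers, not manufacturer specifications):

```python
def per_layer_gb(total_tb, layers):
    """Capacity per layer in GB, assuming 1 TB = 1000 GB."""
    return total_tb * 1000 / layers

def layer_pitch_um(thickness_mm, layers):
    """Average layer-to-layer spacing in micrometres."""
    return thickness_mm * 1000 / layers

# Call/Recall: 1 TB stored as 200 layers in a 1.2 mm thick disk
print(per_layer_gb(1, 200))       # 5.0 GB per layer
print(layer_pitch_um(1.2, 200))   # 6.0 um average pitch

# Microholas: 75 layers, 4.5 um apart, ~10 GB per layer
print(75 * 10)                    # 750 GB total
print(75 * 4.5 / 1000)            # 0.3375 mm of recorded depth
```

Note that the 6 µm pitch implied by the Call/Recall demonstration is of the same order as the 4.5 µm layer separation reported by Microholas, which is consistent with the layer spacings typical of two-photon and microholographic approaches.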

  1. ^ Kawata, S.; Kawata, Y. (2000). "Three-Dimensional Optical Data Storage Using Photochromic Materials". Chemical Reviews. 100 (5): 1777–88. doi:10.1021/cr980073p. PMID 11777420.
  2. ^ Burr, G.W. (2003). Three-Dimensional Optical Storage (PDF). SPIE Conference on Nano-and Micro-Optics for Information Systems. pp. 5225–16. Archived from the original (PDF) on March 8, 2008.
  3. ^ Hirshberg, Yehuda (1956). "Reversible Formation and Eradication of Colors by Irradiation at Low Temperatures. A Photochemical Memory Model". Journal of the American Chemical Society. 78 (10): 2304–2312. doi:10.1021/ja01591a075.
  4. ^ Mandzhikov, V. F.; Murin, V. A.; Barachevskii, Valerii A. (1973). "Nonlinear coloration of photochromic spiropyran solutions". Soviet Journal of Quantum Electronics. 3 (2): 128. doi:10.1070/QE1973v003n02ABEH005060.
  5. ^ Parthenopoulos, Dimitri A.; Rentzepis, Peter M. (1989). "Three-Dimensional Optical Storage Memory". Science. 245 (4920): 843–45. Bibcode:1989Sci...245..843P. doi:10.1126/science.245.4920.843. PMID 17773360. S2CID 7494304.
  6. ^ Albota, Marius; Beljonne, David; Brédas, Jean-Luc; Ehrlich, Jeffrey E.; Fu, Jia-Ying; Heikal, Ahmed A.; Hess, Samuel E.; Kogej, Thierry; Levin, Michael D.; Marder, Seth R.; McCord-Maughon, Dianne; Perry, Joseph W.; Röckel, Harald; Rumi, Mariacristina; Subramaniam, Girija; Webb, Watt W.; Wu, Xiang-Li; Xu, Chris (1998). "Design of Organic Molecules with Large Two-Photon Absorption Cross Sections". Science. 281 (5383): 1653–56. Bibcode:1998Sci...281.1653A. doi:10.1126/science.281.5383.1653. PMID 9733507.
  7. ^ Day, Daniel; Gu, Min (2002). "Formation of voids in a doped polymethylmethacrylate polymer". Applied Physics Letters. 80 (13): 2404–2406. Bibcode:2002ApPhL..80.2404D. doi:10.1063/1.1467615. hdl:1959.3/1948.
  8. ^ Gindre, Denis; Boeglin, Alex; Fort, Alain; Mager, Loïc; Dorkenoo, Kokou D. (2006). "Rewritable optical data storage in azobenzene copolymers". Optics Express. 14 (21): 9896–901. Bibcode:2006OExpr..14.9896G. doi:10.1364/OE.14.009896. PMID 19529382.
  9. ^ Fort, A. F.; Barsella, A.; Boeglin, A. J.; Mager, L.; Gindre, D.; Dorkenoo, K. D. (29 August 2007). Optical storage through second harmonic signals in organic films. SPIE Optics+Photonics. San Diego, USA. pp. 6653–10.
  10. ^ Reyes-Esqueda, Jorge-Alejandro; Vabreb, Laurent; Lecaque, Romain; Ramaz, François; Forget, Benoît C.; Dubois, Arnaud; Briat, Bernard; Boccara, Claude; Roger, Gisèle; Canva, Michael; Lévy, Yves; Chaput, Frédéric; Boilot, Jean-Pierre (May 2003). "Optical 3D-storage in sol–gel materials with a reading by optical coherence tomography-technique". Optics Communications. 220 (1–3): 59–66. arXiv:cond-mat/0602531. Bibcode:2003OptCo.220...59R. doi:10.1016/S0030-4018(03)01354-3. S2CID 119092748.
  11. ^ US patent 6386458, Leiber, Jörn; Noehte, Steffen & Gerspach, Matthias, "Optical data storage", issued 2002-05-14, assigned to Tesa SE 
  12. ^ Irie, Masahiro (2000). "Diarylethenes for Memories and Switches". Chemical Reviews. 100 (5): 1685–716. doi:10.1021/cr980069d. PMID 11777416.
  13. ^ Kawata, Y.; Kawata, S. "16: 3D Data Storage and Near-Field Recording". In Sekkat, Z.; Knoll, W. (eds.). Photoreactive Organic Thin Films. USA: Elsevier. ISBN 0-12-635490-1.
  14. ^ Won, Rachel Pei Chin (16 November 2016). "Two photons are better than one". Nature Photonics: 1. doi:10.1038/nphoton.2006.47.
  15. ^ Milster, T. D.; Zhang, Y.; Choi, T. Y.; Park, S. K.; Butz, J.; Bletscher, W. "Potential for Volumetric Bit-Wise Optical Data Storage in Space Applications" (PDF). Archived from the original (PDF) on 4 October 2006.
  16. ^ Amistoso, Jose Omar; Gu, Min; Kawata, Satoshi (2002). "Characterization of a Confocal Microscope Readout System in a Photochromic Polymer under Two-Photon Excitation". Japanese Journal of Applied Physics. 41 (8): 5160–5165. Bibcode:2002JaJAP..41.5160A. doi:10.1143/JJAP.41.5160.
  17. ^ Gu, Min; Amistoso, Jose Omar; Toriumi, Akiko; Irie, Masahiro; Kawata, Satoshi (2001). "Effect of Saturable Response to Two-Photon Absorption on the Readout Signal Level of Three-Dimensional Bit Optical Data Storage in a Photochromic Polymer" (PDF). Applied Physics Letters. 79 (2): 148–150. Bibcode:2001ApPhL..79..148G. doi:10.1063/1.1383999. hdl:1959.3/1798.
  18. ^ Walker, E; Rentzepis, P (2008). "Two Photon Technology: A New Dimension". Nature Photonics. 2 (7): 406–408. Bibcode:2008NaPho...2..406W. doi:10.1038/nphoton.2008.121.
  19. ^ Shipway, Andrew N.; Greenwald, Moshe; Jaber, Nimer; Litwak, Ariel M.; Reisman, Benjamin J. (2006). "A New Medium for Two-Photon Volumetric Data Recording and Playback". Japanese Journal of Applied Physics. 45 (2B): 1229–1234. Bibcode:2006JaJAP..45.1229S. doi:10.1143/JJAP.45.1229.
  20. ^ Genuth, Iddo (27 August 2007). "Mempile - Terabyte on a CD". TFOT. Archived from the original on 15 September 2007.
  21. ^ D-Data corporate website
  22. ^ Akselrod, M. S.; Orlov, S. S.; Sykora, G. J.; Dillin, K. J.; Underwood, T. H. (2007). Progress in Bit-Wise Volumetric Optical Storage Using Alumina-Based Media. Optical Data Storage. The Optical Society of America. doi:10.1364/ODS.2007.MA2.
  23. ^ Criante, L.; Vita, F.; Castagna, R.; Lucchetta, D. E.; Frohmann, S.; Feid, T.; Simoni, F. F.; Orlic, S. (28 August 2007). New composite blue sensitive materials for high resolution optical data storage. SPIE Optics+Photonics. San Diego, USA: SPIE. pp. 6657–03.
  24. ^ Orlic, S.; Markötter, H.; Mueller, C.; Rauch, C.; Schloesser, A. (28 August 2007). 3D nano and micro structurization of polymer nanocomposites for optical sensing and image processing. SPIE Optics+Photonics. San Diego, USA: SPIE. pp. 6657–14.
  25. ^ "Swinburne Ventures". Swinburne University of Technology. Archived from the original on 5 August 2012.