Who invented the retinal display?
The BSD code underlying macOS is open source and developed by the community, so selling a license for it would land Apple in hot water. Instead, you are given the operating system for free when you buy the overpriced hardware. Apple Maps data was purchased from Nokia; once again, nothing that Apple had much of a hand in.

Most people think that Apple invented the Retina Display; in fact, they did not. So what does Apple do? What follows is the background and description from a patent on the virtual retinal display, a technology for scanning an image directly onto the retina. Typically, a virtual display creates only a small physical image using a liquid crystal array, light emitting diodes or a miniature cathode ray tube (CRT), the image being projected by optical lenses and mirrors so that it appears to be a large picture suspended in the world.

A miniature cathode ray tube can produce a medium resolution monochrome picture. However, these devices are heavy and bulky.

For example, a typical miniature CRT with cables weighs more than four ounces, the CRT having a one-inch diameter and a four-inch length. Further, these devices require a high acceleration potential, typically in the kilovolt range, which is undesirably high for a display mounted on a user's head. Creating color using a single miniature CRT is difficult and usually forces significant compromises in image resolution and luminance.

Although the CRT image may be relayed via a coherent fiber-optic bundle to allow the CRT to be located away from the head mounted optics, the hardware to accomplish this is also heavy and causes significant light loss. Field sequential color, using a multiplexed color filter and a CRT with a white phosphor, can create good color hue and saturation, but again at significantly reduced resolution. For example, three color fields must be produced during the same period as a normal 60 Hz field, dividing the video bandwidth available to each color by three.

A liquid crystal array can produce a color image using a low operating voltage, but it provides only a marginal picture element (pixel) density. One commercial device is known that uses a linear array of light emitting diodes viewed via a vibrating mirror and a simple magnifier. Although this is a low cost and low power alternative, the display is monochrome and its line resolution is limited to the number of elements that can be incorporated into the linear array. Both the CRT and the liquid crystal display generate real images which are relayed to the eyes through an infinity optical system.

The simplest optical system allows a user to view the image source through a simple magnifier lens, but such optics are bulky and heavy. Virtual projection optical designs create an aerial image somewhere in the optical path at an image plane, which is then viewed as an erect virtual image via an eyepiece or objective lens. This approach increases the flexibility with which the image from the image source can be folded around the user's head for a head mounted display system, but large fields of view require large and bulky reflective and refractive optical elements.

In addition to resolution limitations, current systems also have bandwidth deficiencies. Bandwidth is a measure of how fast the display system can address, modulate or change the light emissions of the display elements of the image source.

The bandwidth of the display image source is computed on the basis of the number of elements which must be addressed over a given period of time. Addressing elements temporally is needed to refresh or maintain a perceived luminance of each element taking into account the light integration dynamics of retinal receptors and the rate at which information is likely to change.

The minimum refresh rate is a function of the light adaptive state of the eye, the display luminance, and the pixel persistence. Minimum refresh rates of 50 to 60 times a second are typically needed for television type displays. Further, an update rate of at least 30 Hz is needed to perceive continuous movement in a dynamic display, or in a presentation in which the display image is stabilized in response to head movement.

Refreshing sequentially, i.e., addressing each element of the image source in turn, sets the bandwidth requirement. Bandwidth requirements can be reduced by interlacing, which tricks the eye in its perception of flicker, but interlacing still requires that all of the elements of the image source be addressed to achieve a minimum update rate of 30 Hz. Typical television broadcast quality bandwidths are approximately 8 MHz, roughly two orders of magnitude less than what a high resolution, wide field of view display would require. High resolution computer terminals address their picture elements at a 70 Hz non-interlaced rate, which corresponds to a still higher bandwidth.
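As a rough illustration of this arithmetic, the short sketch below estimates the element rate a sequentially refreshed display must sustain and how interlacing halves it; the resolution and refresh figures are illustrative assumptions, not values from the text.

```python
# Sketch: element rate of a sequentially refreshed display.
# Resolution and refresh rate below are illustrative assumptions.

def element_rate_hz(width, height, refresh_hz, interlaced=False):
    """Display elements that must be addressed per second."""
    rate = width * height * refresh_hz
    return rate / 2 if interlaced else rate   # interlacing halves the per-second load

progressive = element_rate_hz(1920, 1080, 60)
interlaced = element_rate_hz(1920, 1080, 60, interlaced=True)
print(f"progressive: {progressive / 1e6:.0f} M elements/s")
print(f"interlaced:  {interlaced / 1e6:.0f} M elements/s")
```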

In accordance with the present invention, the disadvantages of prior virtual image display systems have been overcome. The virtual retinal display of the present invention utilizes photon generation and manipulation to create a panoramic, high resolution, color virtual image that is projected directly onto the retina of the eye. The entrance pupil of the eye and the exit pupil or aperture of the virtual retinal display are coupled so that modulated light from a photon generator is scanned directly onto the retina, producing the perception of an erect virtual image without an image plane outside of the user's eye; there is no real or aerial image viewed via a mirror or optics.

More particularly, the virtual retinal display system of the present invention includes a source of photons modulated with video information, the photons being scanned directly onto the retina of the user's eye.

The photon generator utilized may produce coherent light such as a laser or it may produce non-coherent light. Further, the photon generator may include colored light generators such as red, green and blue light emitting diodes or lasers to provide colored light that is modulated with respective RGB video information.

If a blue light source is not available, a yellow light source such as a yellow light emitting diode or laser may be used. The video modulated colored photons are combined and then scanned onto the retina. The video modulated signals are preferably scanned in both a horizontal and a vertical direction so as to produce a modulated light raster that is projected directly onto the user's eye by projection optics.
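The following toy sketch, which is not the actual hardware path, mimics the signal flow just described: each colour channel of a video frame modulates its own light source, the beams are combined into one spot, and the spot is swept in a fast horizontal and slow vertical raster. The frame shape and the emit callback are assumptions made for illustration.

```python
import numpy as np

def scan_raster(rgb_frame, emit):
    """rgb_frame: (rows, cols, 3) video values in [0, 1].
    emit(x, y, beam): deliver one combined, modulated spot to the optics."""
    rows, cols, _ = rgb_frame.shape
    for y in range(rows):          # slow vertical scan
        for x in range(cols):      # fast horizontal scan
            r, g, b = rgb_frame[y, x]
            # Each colour source is modulated by its own video channel,
            # then the beams are combined into a single scanned spot.
            emit(x, y, {"red": r, "green": g, "blue": b})

# Usage: scan a small test frame and count the emitted spots.
frame = np.random.rand(4, 6, 3)
spots = []
scan_raster(frame, lambda x, y, beam: spots.append((x, y, beam)))
print(len(spots))   # 24 spots, one per pixel
```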

The projection optics may include a toroidal or spherical optical element such as a refractive lens, mirror, holographic element, etc. Further, this optical element may be a light occluding element or it may be light transmissive.

A light transmissive optical element allows the virtual retinal display of the present invention to be a see through display wherein the displayed virtual image is perceived by the user to be superimposed on the real world. Further, the light transmissiveness of the optical element may be actively or passively variable.

The virtual retinal display system of the present invention further includes a depth cue for 3-D imaging so as to reduce the "simulator sickness" that may occur with known stereoscopic display systems. More particularly, the depth cue varies the focus of the light scanned onto the retina in accordance with stored depth information. Depth information may be stored in a Z axis buffer or the like in a video memory, in addition to the horizontal and vertical information typically stored in a video frame buffer.
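A minimal sketch of the kind of buffer this implies is shown below: each pixel carries a Z value alongside RGB, and the Z value is mapped to a per-pixel focus setting during scan-out. The frame size and the depth-to-focus mapping are illustrative assumptions.

```python
import numpy as np

# Assumed frame size; each pixel stores R, G, B and a depth value Z.
HEIGHT, WIDTH = 480, 640
frame = np.zeros((HEIGHT, WIDTH, 4))

def focus_for_depth(z_metres, near=0.25, far=10.0):
    """Map a stored depth value to a per-pixel focus setting in dioptres
    (closer pixels need stronger focus). The mapping is an assumption."""
    z_metres = min(max(z_metres, near), far)
    return 1.0 / z_metres

# During scan-out, the Z entry would drive the variable-focus element
# while the RGB entries modulate the light sources.
r, g, b, z = frame[100, 200]
print(focus_for_depth(z if z > 0 else 10.0))
```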

A pupil tracking system may be employed to move the position of the light raster projected onto the eye so that it approximately coincides with the entrance pupil of the user's eye. This feature increases the resolution of the virtual retinal display and further increases the field of view to provide a fully immersive environment such that as the eye moves to one side, a view corresponding to that direction may be presented.

This is accomplished by utilizing the detected pupil position to position a "visible window" on the video information stored in the frame buffer.

The frame buffer may for example store video information representing a panoramic view and the position of the visible window determines which part of the view the user is to perceive, the video information falling within the visible window being used to modulate the light from the photon generator. The virtual display system of the present invention may also divide the video information into sectors or regions and use parallel photon generation and modulation to obtain ideal pixel density resolution across very wide fields of view.
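Below is a minimal sketch, under assumed buffer and window sizes, of how a tracked pupil position could select the visible window from a panoramic frame buffer; only the slice it returns would be used to modulate the photon generator.

```python
import numpy as np

# Assumed sizes: a panoramic frame buffer and a smaller displayed window.
panorama = np.zeros((1024, 4096, 3), dtype=np.uint8)
WIN_H, WIN_W = 480, 640

def visible_window(gaze_x, gaze_y):
    """gaze_x, gaze_y in [0, 1]: normalised pupil position from the tracker.
    Returns the slice of the panorama used to modulate the photon generator."""
    pan_h, pan_w, _ = panorama.shape
    top = int(gaze_y * (pan_h - WIN_H))
    left = int(gaze_x * (pan_w - WIN_W))
    return panorama[top:top + WIN_H, left:left + WIN_W]

view = visible_window(0.75, 0.5)   # eye turned to the right of centre
print(view.shape)                  # (480, 640, 3)
```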

Further, by allowing the overall pixel density to be divided into separately scanned regions the bandwidth is reduced by the number of regions so as to overcome the bandwidth problems of prior systems.
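The arithmetic is simple, as the sketch below shows with assumed, illustrative resolution and refresh values: splitting the frame into N separately scanned regions divides the element rate each scanner must handle by N.

```python
# Sketch: per-scanner element rate when the image is split into regions
# scanned in parallel. Resolution and refresh rate are illustrative.

def per_region_rate_hz(width, height, refresh_hz, regions=1):
    """Element rate each parallel scanner must handle."""
    return width * height * refresh_hz / regions

print(per_region_rate_hz(4000, 3000, 60))              # single scanner
print(per_region_rate_hz(4000, 3000, 60, regions=16))  # 16 parallel regions
```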

Further, the virtual retinal display of the present invention is very small in size, weight and bulk since it is not necessary to produce either a real or an aerial image. Because of its small size, weight and compactness, the virtual retinal display is ideally suited for mounting on a user's head. These and other objects, advantages and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and the drawing.

The virtual retinal display 10 of the present invention, as shown in the drawing, does not require a real image display. Nor does the virtual retinal display 10 need the mirrors or optics used in prior virtual image displays to generate an aerial image. Instead, photons modulated with video information are scanned directly onto the retina 22 of a user's eye 20 to produce the perception of an erect virtual image. Because the virtual retinal display 10 does not utilize a real image display or the mirrors or optics necessary to generate an aerial image, it is small in size and weight and is therefore easily mounted on the user's head as a head mounted display.

More particularly, as shown in the drawing, a photon generator 12 produces light that is modulated with video information by a modulator 14. The modulated photons are scanned in a first direction, and in a second direction generally perpendicular to the first, by a scanner 16 to create a raster of photons that is projected directly onto the retina 22 of the eye 20 by projection optics 18, producing the perception of an erect virtual image without an aerial image or image plane outside of the eye.

Although not necessary, it is desirable to employ an eye tracking system 24 to reposition the scanned raster of light as the pupil 26 of the eye 20 moves so that the light ray bundles are coincident with the entrance pupil of the eye.

The eye tracking system 24 can also be used as feedback to change the image or the focus of the image scanned onto the retina as the eye moves so that the user perceives that he is focusing on a different portion of a panoramic scene as he shifts his eye.

The photon generator 12 may generate coherent light, as with a laser, or it may generate noncoherent light, as with one or more LEDs. Further, beams of red, green and yellow or blue light may be modulated by RGY or RGB video signals to scan colored photons directly onto the user's eye. In order to reduce the bandwidth of the virtual retinal display, multiple monochromatic beams or multiple groups of colored beams can be modulated and scanned in parallel onto the retina, where the video information used to modulate the photons is divided into different sectors or regions and each beam or group of colored beams is associated with a different sector of video information, as described below.

It is further noted that the functions performed by one or more of the photon generator 12, modulator 14, scanner 16 and projection optics 18 can be combined to be performed by fewer elements depending upon the actual components used in the system.

For example, an acousto-optic deflector may be used to both modulate the light from the photon generator 12 and to scan the modulated light in at least one direction. Further, a laser phased array may be utilized to perform the functions of the photon generator, modulator and one or possibly two scanners as discussed below. The components of the virtual retinal display 10 can be made small, compact and lightweight so that the virtual retinal display 10 can easily be mounted on the head of a user without requiring a helmet or an elaborate head mounting for structural support.

Further, the photon generator 12 and modulator 14 can be separated from the scanner 16 and projection optics 18 so that only the scanner 16 and optics 18 need be mounted on the head of a user, the modulated photons being coupled to the scanner via one or more monofilament optical fibers.

In a preferred embodiment, microscanners are utilized to scan the photons, such microscanners being small, thin and deflected to scan the photons in response to an electrical drive or deflection signal. In accordance with one embodiment of the present invention, collimated, modulated light beams are scanned directly toward the eye.

No lens is used to focus the beam to form a real image in front of the eye. Instead, the lens 29 of the eye focuses the beam to a point on the back of the retina, the position of the beam point scanning the retina as the scanner 16 scans the modulated photons. The angle of deflection of the collimated light beams corresponds to the position of the focused spot on the retina for any given eye position just as if an image were scanned at an infinite distance away from the viewer.

The intensity of the light is modulated by the video signal in order to create an image of the desired contrast. Therefore, when the user's eye moves, the user perceives a stationary image while looking at different parts of the scene. The lateral extent of the image is proportional to the angle of the scan. Anamorphic optics are used as necessary to align the scanned photons and to scale the perceived image. Forming a reduced image of the scanner aperture yields a proportionately larger scanning angle.

Other than this, the size of the scanner image is irrelevant as long as the light enters the eye. The cylindrical lens spreads the light beam from the photon generator 12 horizontally so that it fills the aperture of the acousto-optical deflector 34. The spherical lens 32 horizontally collimates the light which impinges onto the acousto-optical deflector 34. The acousto-optical deflector 34 is responsive to a video signal on a line 36 that is applied as a drive signal to a transducer of the acousto-optic deflector 34 to modulate the intensity of the light from the photon generator 12 and to scan the modulated light in a first direction, i.e., horizontally.

The video signal on line 36 is provided by a video drive system generally designated 38 that includes a video controller 42. The video controller 42 may include a video generator such as a frame buffer 40 that provides video signals on a line 56 and respective horizontal sync and vertical sync signals. The video controller 42 may also include a microprocessor that operates in accordance with software stored in a ROM 46 or the like and utilizes a RAM 48 for scratch pad memory.

The horizontal sync signal from the video generator 40 is converted to a ramp waveform by a ramp generator 50; the ramp waveform is applied to a voltage controlled oscillator 52 which, in response to the ramp input, provides an output whose frequency varies with the ramp, i.e., a chirped signal.

The output from the voltage controlled oscillator 52 is applied to an amplifier 54, the gain of which is varied by the video data signal 56 output from the video generator 40, so that the video signal 36 output from the amplifier 54 has an amplitude that varies in accordance with the video information on line 56 and a frequency that varies in a chirped manner. The video signal on line 36 is applied to a drive transducer of the acousto-optical deflector 34. Varying the amplitude of the drive signal on line 36 with the video information causes the acousto-optical deflector 34 to modulate the intensity of the light from the photon generator 12 with the video information.

Varying the frequency of the drive signal on line 36 in a chirped manner causes the acousto-optical deflector 34 to vary the angle at which the light is deflected thereby, so as to scan the light in a first or horizontal direction. A spherical lens pair 64 and 68 images the horizontally scanned light onto a vertical scanner 62, and a cylindrical lens 68 spreads the light vertically to fill the aperture of the vertical scanner 62. The vertical scanner 62 may, for example, be a galvanometer.
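A numerical sketch of the horizontal drive signal on line 36 described above is given below: a ramp derived from the horizontal sync chirps the carrier frequency while the per-pixel video value sets the amplitude. The sample rate, line period and chirp range are illustrative assumptions, not values from the text.

```python
import numpy as np

# Assumed values for illustration only.
FS = 500e6                 # simulation sample rate, Hz
LINE_PERIOD = 63.5e-6      # horizontal line period, s
F_LO, F_HI = 60e6, 100e6   # chirp range of the deflector drive carrier, Hz

def drive_signal(video_line):
    """video_line: per-pixel intensities in [0, 1] for one scan line.
    Returns the amplitude-modulated, frequency-chirped drive waveform."""
    n = int(FS * LINE_PERIOD)
    t = np.arange(n) / FS
    ramp = t / LINE_PERIOD                       # ramp derived from horizontal sync
    freq = F_LO + (F_HI - F_LO) * ramp           # chirped carrier frequency
    phase = 2 * np.pi * np.cumsum(freq) / FS     # integrate frequency to phase
    amplitude = np.interp(ramp, np.linspace(0, 1, len(video_line)), video_line)
    return amplitude * np.sin(phase)             # amplitude carries the video

line = drive_signal(np.linspace(0.0, 1.0, 640))  # one ramp-brightness scan line
```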

The vertical sync signal output from the video generator 40 is converted to a ramp waveform by a ramp generator 58 and amplified by an amplifier 60 to drive the vertical scanner 62. The speed of scanning of the vertical scanner 62 is slower than the scanning of the horizontal scanner 34, so that the output of the vertical scanner 62 is a raster of photons.

This raster of photons is projected directly onto the eye 20 of the user by projection optics taking the form of a toroidal or spherical optical element 72 such as a refractive lens, mirror, holographic element, etc. The toroidal or spherical optical element 72 provides the final imaging and reduction of the scanned photons.

More particularly, the toroidal or spherical optical element 72 relays the scanned photons so that they are coincident near the entrance pupil 26 of the eye 20. Because a reduced image of the scanner aperture is formed, the deflection angles are multiplied in accordance with the Lagrange invariant, wherein the field of view and image size are inversely proportional: as the size of the scanned image of the scanner aperture is reduced, the deflection angles, and hence the field of view, increase. The optical element 72 can be an occluding element that does not transmit light from outside of the display system.
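As a back-of-the-envelope illustration of that trade-off, with assumed numbers and the small-angle approximation, demagnifying the image of the scanner aperture by a factor M multiplies the scan angle, and hence the field of view, by roughly M:

```python
def field_half_angle_deg(scanner_half_angle_deg, demagnification):
    """Lagrange invariant (small angles): shrinking the aperture image by a
    factor M multiplies the deflection angle by M."""
    return scanner_half_angle_deg * demagnification

# e.g. a scanner deflecting +/-3 degrees, imaged at 10x reduction, gives
# roughly +/-30 degrees at the eye (assumed numbers).
print(field_half_angle_deg(3.0, 10.0))
```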

Alternatively, the optical element 72 can be made light transmissive to allow the user to view the real world through the element 72, with the user perceiving the scanned virtual image generated by the display 10 superimposed on the real world. Further, the optical element 72 can be made variably transmissive to maintain the contrast between the outside world and the displayed virtual image. A passively variable light transmissive element 72 may be formed by sandwiching within the element a photochromic material that changes the light transmissiveness of the element as a function of the ambient light.

An actively variable light transmissive element 72 may include a liquid crystal material. A photosensor can be used with such an element to detect the amount of ambient light, and a bias voltage across the liquid crystal material is varied in accordance with the detected light to actively vary the light transmissiveness of the element 72.
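Below is a minimal control sketch, under assumed sensor and voltage ranges, of how such an element might be driven: the photosensor reading is mapped to a liquid-crystal bias voltage so that brighter surroundings give lower transmissiveness.

```python
def lc_bias_voltage(ambient_lux, lux_min=10.0, lux_max=10_000.0,
                    v_min=0.0, v_max=5.0):
    """Map a photosensor reading to a liquid-crystal bias voltage:
    brighter surroundings -> higher bias -> lower transmissiveness.
    All ranges and the linear mapping are illustrative assumptions."""
    ambient_lux = min(max(ambient_lux, lux_min), lux_max)
    frac = (ambient_lux - lux_min) / (lux_max - lux_min)
    return v_min + frac * (v_max - v_min)

print(lc_bias_voltage(500.0))   # e.g. indoor lighting -> modest bias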

In testing the VRD with low vision users, there were two groups of subjects: patients with macular degeneration, a degenerative disease of the retina, and patients with keratoconus. Typical VRD images use light power on the order of nanowatts, and they are readily viewed superimposed on ambient room light. Among the low vision test subjects, 5 out of 8 subjects with macular degeneration felt the VRD images were better and brighter than the CRT or paper images, and they were able to reach the same or a better level of resolution.

All patients with keratoconus were able to resolve lines of text several lines smaller with the VRD than with their own correction. Further, they all felt that the VRD images were sharper and easier to view.


