
History of Genetic Testing – From Counting Chromosomes to NGS

In the 1950s, scientists using early chromosome-analysis techniques discovered that Down’s Syndrome was caused by the presence of an additional copy of chromosome 21. This was the very beginning of genetic testing as we know it today.

Over the years, prenatal DNA testing and screening have progressed from these earliest methods of chromosome counting and sorting to become some of the most advanced testing and analysis processes in medical science.

Here we look briefly at the development of this important branch of medicine.

Karyotyping

A karyotype is an individual’s complete set of chromosomes, which can be arranged and examined in a standard format.

Karyotyping is the name given to the laboratory process which first enabled an image of a person’s chromosomes to be created. The process, developed in the mid-1950s, used microscopes and chemical stains to enable the isolation and photography of individual chromosomes. Scientists could then arrange these images in numerical order and look for abnormalities and mutations.
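
The counting logic at the heart of karyotype analysis can be illustrated with a short Python sketch: group the observed chromosomes and flag any group that does not contain exactly two copies. This is purely illustrative, with made-up data and a hypothetical function name; in reality the analysis is performed visually by trained cytogeneticists.

    from collections import Counter

    def find_aneuploidies(observed: list[str]) -> dict[str, int]:
        """Return any chromosome group whose copy count is not the expected two.

        Sex chromosomes are counted together, since one X plus one Y is a
        normal pair.
        """
        counts = Counter("XY" if c in ("X", "Y") else c for c in observed)
        return {chrom: n for chrom, n in counts.items() if n != 2}

    # A made-up 47,XY,+21 karyotype: two of each autosome, X and Y,
    # plus an extra chromosome 21 (the pattern Lejeune observed).
    karyotype = [str(n) for n in range(1, 23) for _ in range(2)] + ["X", "Y", "21"]
    print(find_aneuploidies(karyotype))   # -> {'21': 3}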

In 1958, Dr Jerome Lejeune identified the cause of Trisomy 21 (then known as Mongolism, now known as Down’s Syndrome) through karyotyping, when he observed an extra chromosome in pair 21.

His findings were published in 1959 and, for the first time, doctors across the world became aware of the link between an intellectual disability and a chromosomal abnormality. Perhaps most importantly, the discovery established that most cases of Trisomy 21 are not hereditary.

Dr Lejeune became the first Professor of Fundamental Genetics at the Faculty of Medicine in Paris and is often described as the father of modern genetics.

Sanger sequencing

DNA molecules are made up of four nucleotides linked together in what can be described as a DNA sentence. These “sentences” within each cell contain the instructions for building the proteins and other molecules that the cell needs to carry out its daily work.

The first DNA sequencing technology was developed by Frederick Sanger in the 1970s. Also known as the “chain termination method”, it enables the identification of the nucleotide sequence in DNA (the order in which the nucleotides appear in the chain of DNA molecules), allowing scientists to “read” the gene sequence of an individual.

In Sanger sequencing, the process of DNA replication is recreated in the laboratory. First, many copies are made of the DNA strand to be examined. Special chain-terminating nucleotides halt each copy at a random position, producing fragments of every possible length; by sorting the fragments by size and noting the final base of each, scientists can read off the order in which the nucleotides appear.
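
This chain-termination idea can be sketched as a toy simulation in Python. It is a deliberately simplified model, not a description of the laboratory workflow: real Sanger sequencing uses dideoxynucleotides to terminate replication and electrophoresis to sort the fragments by size.

    import random

    def simulate_chain_termination(template: str, copies: int = 10_000) -> str:
        """Toy model: replicate the template many times, terminating each copy
        at a random position, then read the bases off the size-sorted fragments.
        """
        last_base_by_length = {}
        for _ in range(copies):
            stop = random.randint(1, len(template))   # random termination point
            fragment = template[:stop]                # the terminated copy
            last_base_by_length[len(fragment)] = fragment[-1]
        # With enough copies, every fragment length is represented.
        assert len(last_base_by_length) == len(template)
        return "".join(last_base_by_length[n] for n in sorted(last_base_by_length))

    template = "GATTACAGGCT"
    print(simulate_chain_termination(template))   # prints GATTACAGGCT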

Due to its overall efficiency and its reduced reliance on radioactive and toxic chemicals compared with rival methods, Sanger sequencing became the primary technology used in the early days of commercial and laboratory genetic sequencing.

However, the first format of Sanger sequencing was a very lengthy process. New methods have since been developed to advance Sanger’s original technique and reduce the time needed to sequence DNA. By using fluorophores (fluorescent dye labels) and computer-based analysis, an individual’s entire genome can now be sequenced in a few days rather than the 13 years it took during the original Human Genome Project.

Fluorescence in situ hybridisation (FISH)

The first in situ hybridisation (ISH) techniques were developed in the 1960s by Joseph Gall and Mary Lou Pardue, who used radioactively labelled DNA probes (a single strand of DNA that scientists use to search for the complementary sequence in a sample) to determine the chromosomal location of hybridised nucleic acid.
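
Computationally, the probe idea amounts to searching a sample for the reverse complement of the probe sequence. The sketch below, with made-up sequences and a hypothetical function name, illustrates the matching logic; in the laboratory, of course, the binding is chemical rather than computational.

    COMPLEMENT = str.maketrans("ACGT", "TGCA")

    def probe_binding_sites(sample: str, probe: str) -> list[int]:
        """Return the positions where the probe would hybridise to the sample.

        The probe binds the complementary strand, so we search the sample for
        the reverse complement of the probe sequence.
        """
        target = probe.translate(COMPLEMENT)[::-1]    # reverse complement
        return [i for i in range(len(sample) - len(target) + 1)
                if sample[i:i + len(target)] == target]

    sample = "AAGCTTGGCCAATTGGCC"
    probe = "GGCCAA"                    # would bind the TTGGCC stretches
    print(probe_binding_sites(sample, probe))   # -> [4, 12]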

In the 1980s, scientists began using fluorescent dyes to label the probes and FISH became a standard tool to detect the presence or absence of specific DNA sequences on chromosomes.

This laboratory-based method is used in prenatal and postnatal investigations to identify specific chromosomal abnormalities which could indicate a genetic condition.

Comparative genomic hybridisation (CGH)

CGH is a laboratory-based process that enables the detection of chromosomal copy number variations (CNVs). A test sample of genomic DNA and a reference DNA sample are labelled with different fluorescent dyes and compared microscopically. The first recorded instance of CGH analysis was in 1992 at the University of California (Kallioniemi et al.).

CGH enables the detection of smaller genetic changes than is possible via conventional karyotyping, and provides accurate data on the size of any duplications and deletions identified and on their possible consequences.
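
The underlying comparison reduces to a ratio test: for each genomic region, the log ratio of test-sample signal to reference signal points to a gain, a loss, or a normal copy number. A minimal sketch, assuming hypothetical region names, intensities and a simple threshold:

    import math

    def classify_regions(intensities: dict[str, tuple[float, float]],
                         threshold: float = 0.3) -> dict[str, str]:
        """Label each genomic region by its log2(test/reference) signal ratio."""
        calls = {}
        for region, (test, reference) in intensities.items():
            log_ratio = math.log2(test / reference)
            if log_ratio > threshold:
                calls[region] = "gain (possible duplication)"
            elif log_ratio < -threshold:
                calls[region] = "loss (possible deletion)"
            else:
                calls[region] = "normal"
        return calls

    # Hypothetical normalised intensities (test, reference) per region:
    probes = {
        "chr21q22.1": (1.5, 1.0),   # ~3 copies vs 2 -> log2(1.5) ≈ +0.58
        "chr7p14.3":  (1.02, 1.0),  # balanced -> ratio near 0
        "chr5q35.2":  (0.5, 1.0),   # ~1 copy vs 2 -> log2(0.5) = -1.0
    }
    print(classify_regions(probes))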

Array CGH (aCGH) and chromosomal microarray analysis (CMA) have been developed to enhance sensitivity and increase resolution during sample analysis.

CGH is mainly used to detect genomic abnormalities in cancer tumours, whereas aCGH and CMA are used in the analysis of DNA CNVs known to cause human genetic disorders.

MLPA

Multiplex Ligation-dependent Probe Amplification (MLPA) is a lab-based technique which detects deletions or duplications in one or more parts of a gene. It is used to detect copy number variations (CNVs) in specific regions of the genome (the entire set of DNA instructions found in a cell) that are of interest or associated with a particular genetic abnormality or condition.
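
The read-out of an MLPA run can be thought of as a dosage calculation: each probe’s amplification peak is normalised against a control probe and compared with a normal reference run. The sketch below uses made-up peak heights and a single control probe; real analysis relies on multiple reference probes, reference samples and quality controls.

    def dosage_quotient(sample_peak: float, sample_control: float,
                        ref_peak: float, ref_control: float) -> float:
        """Normalise a probe's peak against a control probe in the same run,
        then compare sample to reference. Roughly: ~1.0 suggests two copies,
        ~0.5 a heterozygous deletion, ~1.5 a duplication.
        """
        return (sample_peak / sample_control) / (ref_peak / ref_control)

    # Hypothetical peak heights from a test sample and a normal reference run:
    sample    = {"BRCA1_exon13": 510.0, "BRCA1_exon14": 980.0, "control": 1000.0}
    reference = {"BRCA1_exon13": 1000.0, "BRCA1_exon14": 1000.0, "control": 1000.0}

    for probe in ("BRCA1_exon13", "BRCA1_exon14"):
        dq = dosage_quotient(sample[probe], sample["control"],
                             reference[probe], reference["control"])
        print(f"{probe}: {dq:.2f}")   # ~0.51 would suggest a deleted exon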

The technique was developed by Dutch scientist Jan Schouten, whose ground-breaking paper on MLPA testing was published in 2002 in the journal Nucleic Acids Research. Some of the first applications of the testing were in the detection of exon deletions linked to specific types of hereditary cancer.

MLPA has become a reliable and cost-effective method for detecting known hereditary conditions and for tumour profiling. It is also a useful tool in prenatal and postnatal diagnosis of genetic disorders.

Next Generation Sequencing

A sequence-based approach in genetic testing directly determines the nucleic acid sequence of DNA molecules. The Human Genome Project used the Sanger sequencing method; however, the approach was slow and incredibly costly.

Although groundbreaking in its effect on the study of DNA, the Sanger method is only able to sequence a single DNA fragment at a time, whereas next-generation sequencing (NGS) (also known as massively parallel sequencing) is able to sequence millions of fragments in each testing run, enabling far higher testing throughput at a lower cost.
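
The contrast can be expressed in code: where a Sanger-style workflow handles fragments one at a time, an NGS run reads huge numbers of fragments concurrently. The sketch below is a loose analogy only, with a stand-in “sequencing” function and hypothetical fragments; it illustrates the parallelism, not any real instrument’s operation.

    from concurrent.futures import ThreadPoolExecutor

    def sequence_fragment(fragment: str) -> str:
        """Stand-in for reading one DNA fragment (trivial here)."""
        return fragment

    fragments = ["GATTACA", "CCGGTTA", "TTAGGCA", "AACCGGT"] * 1000

    # Sanger-style: one fragment per run, handled serially.
    serial_reads = [sequence_fragment(f) for f in fragments]

    # NGS-style: many fragments read in the same run, in parallel.
    with ThreadPoolExecutor(max_workers=8) as pool:
        parallel_reads = list(pool.map(sequence_fragment, fragments))

    assert serial_reads == parallel_reads
    print(f"{len(parallel_reads)} fragments sequenced in one parallel run")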

Second-generation sequencing technologies began to emerge between 1994 and 1998, with the first commercially available NGS instruments appearing in 2005. NGS is now used as an umbrella term for a number of modern sequencing technologies. It has revolutionised the fields of genomics and molecular biology.
