July 17, 2017 § Leave a comment
In 1997, a couple of research groups discovered a gene now known as PTEN. The amino acid sequence of the protein it encoded resembled that of a protein tyrosine phosphatase, a similarity that implied PTEN might have a tumor-suppressing function. But the way the protein operated was not clear.
Jack E. Dixon, whose lab at the University of Michigan worked on protein tyrosine phosphatases, and his postdoc Tomohiko Maehama set out to determine PTEN’s substrate, which was integral to understanding how PTEN functioned. Other labs had been trying to determine what protein PTEN acted on, but with no luck. “It occurred to us, and this was I think one of the great insights, that maybe this protein tyrosine phosphatase isn’t really a protein phosphatase at all!” Dixon says. “Maybe it works on something else. And that turned out to be exactly correct.”
Dixon and Maehama discovered that PTEN in fact regulates the phosphorylation of a lipid, phosphatidylinositol (3,4,5)-trisphosphate (called PIP3 for short), which stimulates cell growth. PTEN was the first protein tyrosine phosphatase found to act on a lipid instead of a protein.
The pair showed in a test tube that PTEN removes the phosphate from the 3 position on PIP3 to convert it to the nongrowth-stimulating PIP2. The resulting 1998 paper “was a particularly important one for shaping (scientists’) understanding of a key signaling pathway in normal cells,” says Eric Fearon, director of the University of Michigan Comprehensive Cancer Center. That pathway is the AKT signaling pathway, which is frequently disrupted in cancer.
Dixon and Maehama used thin-layer chromatography to show that PIP3 levels drop when cells are transfected with PTEN. “When we developed that TLC, down at the bottom of the gel, we could see PIP3 behave like we thought it would behave,” says Dixon. “That was a spectacular moment.”
By regulating PIP3, whose production is stimulated by insulin signaling, PTEN controls an important second messenger in the cell. “It’s like the brakes and the accelerator in a car,” Dixon says. “The accelerator is in this case insulin, and the brakes are PTEN. If you lose your brakes, you become tumorigenic.” PTEN downregulates PIP3, counteracting the growth-promoting enzyme phosphoinositide 3-kinase, which produces it. PTEN is commonly missing in cancers, including prostate and endometrial cancers, and its absence is a common indicator that a tumor will grow quickly.
“Different scientific labs report different results, and when science is working well, one scientific publication is then the foundation for a subsequent one,” says Ramon Parsons, chairman of the department of oncological sciences at the Icahn School of Medicine at Mt. Sinai, who led one of the groups that discovered PTEN. Within six months, he says, a whole range of groups were able to confirm and build upon Dixon and Maehama’s findings.
Since whole-exome sequencing became possible in the past decade, Parsons says, it has become clear that PTEN is likely the most frequently mutated tumor suppressor in all cancers besides p53.
Dixon and Maehama’s JBC paper is “certainly viewed as one of the seminal papers in the whole study of PTEN as a tumor-suppressor gene,” Fearon says.
It is now known that PTEN plays an important role outside of cancer in processes such as brain development. In fact, PTEN mutations have been tied to a subset of autism resulting from the uncontrolled growth of nerve fibers in the brain.
Dixon and Maehama’s 1998 paper “was the first paper to definitively establish the function of PTEN as a tumor-suppressor gene in cell signaling,” a critical step in exploring therapeutic effects in cancers, Fearon says. Small molecules are being studied to target defects in the AKT pathway.
“Even though it was a very short paper, I think it was the pivotal paper in highlighting the function of PTEN,” Fearon says. “It’s beautiful work, and extremely well done, which is why I think it’s stood the test of time.”
This post was written by Alexandra Taylor (alexandraataylor[at]gmail.com), a master’s candidate in science and medical writing at Johns Hopkins University. She writes about “Classic” articles in the Journal of Biological Chemistry. See more of her work in JBC here.
March 22, 2017 § Leave a comment
Researchers at Australia’s University of Queensland have identified a peptide from spider venom that can protect mice from brain damage if it’s given up to eight hours after an ischemic stroke. The researchers presented their work this week in the journal Proceedings of the National Academy of Sciences.
The only drug available to treat strokes is tissue plasminogen activator, or tPA, which works by breaking up the clots that cause ischemic strokes. At too high a dose, however, tPA can induce hemorrhaging. Because of this risk, the drug is used in only about three percent of stroke cases worldwide.
Stroke is “the second-biggest cause of mortality in the world, and we don’t really have a drug to treat these patients,” says senior author Glenn F. King of the University of Queensland’s Institute for Molecular Bioscience.
Ischemic strokes, which are more common than blood-vessel-bursting hemorrhagic strokes, occur when an obstruction in the brain’s blood vessels prevents oxygen from reaching neurons. In the absence of oxygen, the neurons begin to break down glucose by anaerobic glycolysis, which produces lactic acid as a byproduct. The resulting drop in local pH leads to toxic acidosis and cell death.
King and colleagues had previously shown that PcTx1, a peptide found in the venom of the South American tarantula Psalmopoeus cambridgei, could prevent cell death in mice if given up to two hours after a stroke. According to King, a Ph.D. student in his lab who was performing genetic sequencing on the venom gland of the spider Hadronyche infensa happened upon a molecule, Hi1a, that was strikingly similar to PcTx1. H. infensa is native to Australia and a relative of the deadly Sydney funnel-web spider, but its venom is much less lethal.
Hi1a has a structure similar to two PcTx1 molecules joined together but has a different mechanism of action that makes its binding much harder to reverse. When Hi1a binds to an ASIC1a channel, it prevents the channel from activating, which averts the neurotoxic death cascade.
To examine Hi1a’s ability to protect neurons from stroke damage, the researchers first synthesized the peptide in bacterial cultures. They then injected it into mice at two, four or eight hours after an ischemic stroke had been induced.
“What surprised me the most was how well it worked at eight hours,” says King. Even at four hours, he says, they were able to protect the area directly surrounding the clot, which had been believed to die “very quickly and very irreversibly. That’s never really been seen before.”
Jorge Ghiso at New York University’s Langone Medical Center, who was not involved in the study, notes the peptide’s long-acting ability to protect neurons. “It’s very promising in the sense that the molecule provides a wider therapeutic window than tissue plasminogen activator to efficiently reverse the damage produced by the ischemic stroke,” he says. The peptide “has been already tested up to eight hours after stroke onset, and it works in a very low dose, which are both encouraging findings for future preclinical studies.”
King plans to examine the peptide’s activity over longer periods of time. Once the peptide’s ability to treat hemorrhagic strokes has been examined, he hopes it could move into clinical trials within the next 18 months to two years. He envisions it eventually being developed into a medication that would be a boon to rural patients who live far from medical centers.
“They’re going to get moved into a city hospital, and during that time, the brain is just dying,” he says. A drug that could treat both ischemic and hemorrhagic strokes “gives the first responders the opportunity to give the drug without any triage, and that’s going to really save a lot of neurons.”
This post was written by John Arnst, ASBMB Today’s science writer.
March 16, 2017 § Leave a comment
The humble tardigrade, an organism whose name means “slow stepper,” has long been known to survive bursts of ultraviolet radiation, freezing temperatures, the vacuum of space and extreme droughts. But, until now, the mechanisms by which these creatures do so have remained unclear. In a paper published today in the journal Molecular Cell, researchers at the University of North Carolina, Chapel Hill, report that intrinsically disordered proteins unique to tardigrades, also known as “water bears,” are responsible for the organisms’ ability to survive extreme desiccation.
As tardigrades dry out, they crank up their production of intrinsically disordered proteins, which lack three-dimensional structures. As the drying progresses, these proteins vitrify around internal cellular components, forming an amorphous glasslike solid.
“It’s a lot more gentle on the cell,” says lead author Thomas Boothby. The solid prevents proteins that are sensitive to desiccation from denaturing and aggregating; otherwise, these proteins would form crystals that would shred DNA and cell components once water is added back to the system. “What we envision is happening is that membranes and proteins are basically being coated in these disordered proteins that form a glassy matrix around them.”
According to Boothby, one of the competing theories has been that tardigrades use the sugar trehalose to form the glassy matrices that protect their cells. In animals that use trehalose to survive desiccation, such as brine shrimp, the sugar makes up around 20 percent of body weight; the concentration in tardigrades has been observed at about 2 percent. “When you couple that with genetic evidence that tardigrades don’t have the enzyme to make trehalose, it makes us think that they’re probably not producing the sugar themselves. They’re probably getting a little bit of it from their food source,” says Boothby.
When the researchers ran a differential gene analysis on tardigrades that had been subjected to gradual drying, they noticed 11 cytosolic heat-soluble protein transcripts, 19 secreted heat-soluble protein transcripts and two mitochondrial heat-soluble transcripts that were significantly enriched compared with hydrated conditions. All three of these transcript families are believed to encode intrinsically disordered proteins in tardigrades.
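A differential analysis of this kind boils down to comparing transcript abundance between conditions. The sketch below flags transcripts enriched in dried versus hydrated animals using a simple fold-change cutoff; the counts, threshold and transcript IDs are invented for illustration, and the authors’ actual analysis would have used a proper statistical test:

```python
# Toy differential-enrichment sketch: flag transcripts whose abundance
# rises at least `min_fold` in dried versus hydrated animals.
# All counts, names and the cutoff are hypothetical.

def enriched(dried: dict, hydrated: dict, min_fold: float = 2.0) -> list:
    """Return transcript IDs enriched at least min_fold in the dried state."""
    hits = []
    for tid, dry_count in dried.items():
        wet_count = hydrated.get(tid, 1)  # pseudocount avoids division by zero
        if dry_count / wet_count >= min_fold:
            hits.append(tid)
    return hits

dried_counts    = {"CAHS1": 500, "SAHS1": 300, "actin": 100}
hydrated_counts = {"CAHS1": 40,  "SAHS1": 25,  "actin": 95}

print(enriched(dried_counts, hydrated_counts))  # ['CAHS1', 'SAHS1']
```

In a real pipeline the fold change would be paired with replicate counts and a significance test, which is what “significantly enriched” refers to above.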
This is the first observation that intrinsically disordered proteins confer protection against desiccation in tardigrades, though nearly all organisms contain intrinsically disordered proteins. When the researchers expressed the genes that code for the tardigrade-specific intrinsically disordered proteins in Escherichia coli and Saccharomyces cerevisiae, they found that the organisms exhibited a hundredfold increase in their ability to tolerate desiccation.
“The finding that tardigrade disordered proteins are crucial for the ability of the members of the animal kingdom to survive during extreme desiccation concurs with previous work on the plant desiccation resistance that was shown to be critically dependent on several specific intrinsically disordered proteins,” says Vladimir Uversky at the University of South Florida. “The ability of tardigrade disordered proteins to vitrify represents a novel intrinsic-disorder-based molecular mechanism of protection of biological material from desiccation.”
Boothby and colleagues also noted that when tardigrades were subjected to freezing conditions instead of desiccation, the organisms activated an entirely different set of genes.
Boothby and colleagues are currently exploring the differences between which genes tardigrades activate for different harsh conditions. “Figuring out if they have just general tricks for surviving all these different stresses or if they use specific mechanisms to survive each individual stress is a really interesting question,” he says. “(It) can help us to understand how these different stress tolerances evolved as well as how the animals do them.”
This post was written by John Arnst, ASBMB Today’s science writer.
January 26, 2017 § Leave a comment
In mammals, birds and a subset of fish, sex is determined primarily by a pair of chromosomes known as the sex chromosomes. Wild-type zebrafish have sex chromosomes, but their domesticated counterparts depend on polygenic sex determination, in which the genetic factors responsible for sex are distributed across the whole genome. Polygenic sex determination makes sexual differentiation less stable because it permits environmental cues to play a greater role in sexual development. However, polygenic sex determination is less well understood than chromosomal sex determination.
In a paper published in Proceedings of the National Academy of Sciences on Jan. 23, researchers at the Temasek Life Sciences Laboratory in Singapore and the Institute of Marine Sciences in Spain examined the transcriptomic changes that occur when domesticated female zebrafish transition into males in response to warm water. A transcriptome consists of the total mRNA in a cell that codes for proteins.
Timothy Karr, a developmental biologist at Arizona State University who was not involved in the study, describes it as “one of the first studies of its kind.”
Zebrafish are native to the Indian subcontinent and, for more than 40 years, have been used as a model organism for biological research. While many fish display sexual plasticity due to environmental cues, the domesticated zebrafish in this study are the first observed to retain female gonads while displaying male reproductive genes and proteins, rather than transitioning fully into what is known as a neomale. “A neomale is an individual that’s genetically programmed to become a female, but as a result of the temperature treatment, becomes male,” says László Orbán at the Temasek Life Sciences Laboratory.
The instability of polygenic sex determination in zebrafish emerged as an unintended side effect of cultivating distinct familial lines for research over the past four decades.
“Somehow, the sex chromosomes have been lost during the domestication process,” says Orbán. While there had been controversy as to whether zebrafish sex was more strongly determined by inherited chromosomes or polygenic cues, the split between wild and domestic families was confirmed about two years ago. Researchers examined zebrafish they had retrieved from northern India and found that the wild fish still displayed sex-chromosomal determination.
Within domesticated zebrafish, family lines develop different ratios of males to females. Francesc Piferrer’s lab at the Institute of Marine Sciences, which had previously examined the effects of temperature on fish sex ratios and helped design the study, subjected a variety of zebrafish families to water at 36 degrees Celsius between 18 and 32 days post-fertilization. The Orbán lab members then used microarrays to identify the differences in transcriptomes between male and female zebrafish that had experienced control or heated conditions.
Examining the transcriptomes of these fish allowed the researchers to identify which had become neomales and which had become pseudofemales, which have ovaries but a male-like transcriptome. The researchers found that the pseudofemales’ gonadal transcriptomes differed from genuine male transcriptomes by only a few thousand genes. “It looks like a reprogramming process that doesn’t complete,” says Orbán. “The details are not known to us, so there’s a whole area of science opening up here.”
“If it can be replicated, the authors’ claim to have discovered ‘male-like’ transcriptomes in females with morphologically developed ovaries, would be an extraordinary finding, but perhaps not for the reasons the authors envision in this study,” Karr notes. “It would be one of the most persuasive arguments” against the dominance of chromosome-based sex determination in developmental and evolutionary biology.
This post was written by John Arnst, ASBMB Today’s science writer.
December 8, 2016 § Leave a comment
Type I diabetes occurs when the body’s immune system destroys the beta cells that produce insulin in the pancreas. While insulin pumps and blood-monitoring systems have come a long way since B.B. King was touting new devices that didn’t hurt his fingers, the disease, which affects more than 40 million people worldwide, is still managed almost entirely with injections of insulin, and an improper dose can cause health problems and lower quality of life.
In efforts to replace the destroyed beta cells, researchers report in a paper just published in the journal Science that they have transformed cultured human embryonic kidney (HEK-293) cells into functional mimics of human pancreatic beta cells.
Human pancreatic islets are currently the gold standard in beta-cell replacement therapy, but are difficult to maintain in cell culture and often in short supply. The researchers wanted to explore alternatives to the replacement therapy.
The researchers noticed that beta cells measure blood glucose levels metabolically rather than relying on a dedicated receptor that counts glucose molecules near the plasma membrane. The cells use transport proteins to draw glucose in and metabolize it, which raises the ATP level. The rise in ATP closes potassium channels, depolarizing the membrane, and the depolarization in turn opens voltage-gated calcium channels. The subsequent calcium influx sets off a calcium-dependent signaling cascade that expels the granules containing insulin.
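The chain of events above can be sketched as a toy decision model. Everything here, including the linear glucose-to-ATP proxy, the threshold value and the function name, is an illustrative assumption rather than the authors’ model or real physiology:

```python
# Toy model of the beta cell's metabolic glucose sensing described above.
# All numbers and thresholds are illustrative, not physiological.

def beta_cell_response(glucose_mM: float) -> dict:
    """Trace the signaling cascade for a given extracellular glucose level."""
    # Step 1: glucose uptake and metabolism raise intracellular ATP.
    atp = glucose_mM * 0.5  # hypothetical linear proxy

    # Step 2: high ATP closes ATP-sensitive potassium channels,
    # which depolarizes the membrane.
    k_channels_open = atp < 3.0

    # Step 3: depolarization opens voltage-gated calcium channels.
    ca_channels_open = not k_channels_open

    # Step 4: calcium influx triggers exocytosis of insulin granules.
    insulin_released = ca_channels_open

    return {
        "k_channels_open": k_channels_open,
        "ca_channels_open": ca_channels_open,
        "insulin_released": insulin_released,
    }

print(beta_cell_response(4.0))   # low glucose: no insulin release
print(beta_cell_response(10.0))  # high glucose: insulin release
```

The point of the sketch is the logic of the cascade: the missing piece in HEK-293 cells was step 3, which is why expressing the voltage-gated calcium channel was sufficient to complete the circuit.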
“We found that all it takes to turn a HEK cell into a beta cell is expressing the voltage-gated calcium channel,” says Martin Fussenegger at the Swiss Federal Institute of Technology. Because the HEK-293 cells already have channels for glucose and potassium, Fussenegger and colleagues modified them to express the voltage-gated calcium channel as well as produce insulin in response to it.
To test the human-derived artificial beta cells, the researchers encapsulated them in alginate beads to protect them from the mouse immune system. “We put them in kind of a teabag,” says Fussenegger.
They then injected the artificial cells into the body cavities of mice with type I diabetes, where the cells joined up to the bloodstream. Over a three-week period, the researchers saw that the artificial cells restored glucose homeostasis more reliably than encapsulated beta-cell islets from organ donors and more efficiently than encapsulated cells from a human beta-cell line called 1.1E7. They also noted that these artificial beta cells showed higher insulin secretion capacity in cell culture than both the 1.1E7 beta cells and the human pancreatic islets.
While this particular replacement therapy would be several years off because it has to undergo clinical trials, Fussenegger is optimistic about how it would work for patients. “Every four months, you would need to replace these cell-based, self-contained teabags with new implants,” says Fussenegger. The procedure, which would consist of a small incision, could be done by a primary care physician. “As a diabetic, either type I or type II, you could have a pretty normal life during the four months, then you have a little replacement of your implant,” he says. “These kind of cells could take over from the pancreas and could control your insulin in response to the glucose levels in your blood.”
This post was written by John Arnst, ASBMB Today’s science writer.
December 5, 2016 § Leave a comment
When you think “oscillations in your gut,” you might think of motion sickness or food poisoning. But there is another type of oscillation in the gut and other organs: an interplay of genetic switches for protein expression that turn on and off throughout the day as we transition through eating, working, exercising and sleeping. In a paper published in Cell on Dec. 1, researchers in Israel have investigated the links between the day-and-night circadian rhythm in mice and the microbes that thrive in their gastrointestinal tracts. They’ve found that the daily fluctuations of the two systems are more intimately linked than previously expected.
The two systems are distinct but tightly coordinated, like a “tango between two partners,” says Eran Elinav at the Weizmann Institute of Science in Rehovot, Israel. Elinav and colleagues had previously linked shifts in our day-and-night circadian rhythm, such as jet lag, to disruptions of microbial gut communities in humans and mice that can lead to metabolic conditions, such as obesity and diabetes.
In their Cell paper, Elinav and colleagues, including Eran Segal from the same institute, used imaging and sequencing to home in on the microbes that adhere to the epithelial cells of the mouse gastrointestinal tract. By targeting the microbes’ entire genomes, the researchers could determine both the composition and the function of the microbial community.
They found that the bacteria residing in close proximity to the host’s epithelial cells displayed highly circadian behavior, with composition, function and microbial numbers differing throughout a 24-hour cycle. Moreover, the thickness of the mucosal layer separating the gut bacteria from the mice’s epithelial layers fluctuated with the mice’s circadian rhythm and feeding patterns.
“Our findings also add to the increasing body of evidence that strongly suggests that disruption of proper circadian activity, such as that present in shift workers and frequent travelers, may drive metabolic derangements through a mechanism that is partly mediated by disruption of proper diurnal microbiome activity,” says Segal.
The investigators then wiped out these microbial communities with antibiotics to see how the mice’s transcriptome, the aggregate of genes being expressed via messenger RNA, adapted to the loss.
“There were a few hundred genes — the genes encoding the host clock itself — which did not care about the disruption to the microbiome,” says Elinav. “But there was another group of genes which normally oscillate in the host. Once we disrupted the gut microbes, these oscillations were completely lost.”
The investigators also noted that a subset of mouse genes that normally operate independently of the circadian oscillations began to follow them after the microbial communities were wiped out. These genes were picking up functions that had previously been performed by genes expressed by the microbiome. “This raises the possibility that this ‘superorganism’ shifts the tasks from one partner to the other once it is disrupted,” says Elinav.
The researchers were most shocked when they checked up on the mice’s livers. Despite the liver’s relative distance from the gastrointestinal tract, about 15 to 20 percent of its genes displayed circadian activity. “Surprisingly, when we disrupted the gut microbiome, the genetic program in the liver was severely disrupted,” says Elinav.
The researchers found that the metabolites, small molecules that are extensively modified by the microbiome and make up 80 percent of all the small molecules in peripheral blood, also displayed strong circadian activity. These molecules allow the gut microbiome to regulate the circadian activity in the liver.
When this was brought into the context of drug metabolism, the researchers found that liver toxicity induced by administration of high doses of the painkiller acetaminophen also displayed circadian activity. Interestingly, the researchers found that disrupting the gut microbiome reduced the toxicity of acetaminophen and stabilized it throughout a 24-hour period.
Elinav and colleagues are currently planning to continue investigating the intimacy of the gut microbiota and the effects of its diurnal activity in humans in order to elucidate potential systemic effects of antibiotics. They want to develop rational and safe intervention methods in the microbiome, potentially impacting human disease and drug metabolism.
This post was written by John Arnst, ASBMB Today’s science writer.
November 22, 2016 § Leave a comment
Swabbing a phone for chemical signatures.
Credit: Amina Bouslimani and Neha Garg, UCSD
It used to be that the most troubling information you could get from swabbing someone’s phone case was an abundance of E. coli indicating a lack of good hygiene. In a paper published in Proceedings of the National Academy of Sciences on Nov. 14, researchers at the University of California, San Diego, expanded the scope of interrogation to a number of trace chemical signatures that can give a picture of someone’s lifestyle.
“The number of molecules detected on every object will vary depending on the surface of the object and the lifestyle of these people,” says Amina Bouslimani at UCSD. Bouslimani is a postdoctoral researcher in the laboratory of Pieter Dorrestein and the first author on the PNAS study, which was funded by the National Institute of Justice, the research arm of the U.S. Department of Justice. “For every phone, we were able to detect between hundreds and thousands of molecules or compounds,” she continues.
Bouslimani and colleagues swabbed the phones and hands of 39 volunteers, then paired mass spectrometry with a visualization process known as molecular networking. This allowed the researchers to group similar molecules and to identify unknown molecules absent from a reference database based on their similarity to known compounds.
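Molecular networking, as commonly practiced, scores pairs of fragmentation spectra by cosine similarity and links molecules whose spectra are close, so that an unknown compound can be placed near its known relatives. Below is a minimal sketch of that scoring step; the spectra, m/z values and intensities are invented for illustration and are not data from the study:

```python
import math

def cosine_similarity(spec_a: dict, spec_b: dict) -> float:
    """Cosine similarity between two spectra given as {m/z bin: intensity}."""
    shared = set(spec_a) & set(spec_b)
    dot = sum(spec_a[mz] * spec_b[mz] for mz in shared)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b)

# Hypothetical fragment spectra: two related molecules sharing most
# peaks (e.g., a compound and a close analog) versus an unrelated one.
compound  = {138: 1.0, 110: 0.6, 83: 0.3}
analog    = {138: 0.9, 110: 0.7, 56: 0.2}
unrelated = {212: 1.0, 91: 0.5}

print(cosine_similarity(compound, analog))     # high: likely same family
print(cosine_similarity(compound, unrelated))  # 0.0: no shared peaks
```

In practice, each pair scoring above a threshold becomes an edge in a network, and unknown molecules inherit tentative identities from the annotated compounds they cluster with.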
The researchers detected a 69 percent overlap between the samples taken from participants’ hands and the backs of their phones, which demonstrated a high transferability of chemicals between the two surfaces. Among many other food items, pharmaceuticals and hygiene products, the compounds detected corresponded to citrus fruits, caffeine, antidepressants, antifungal creams, hair-loss treatments, sunscreen and mosquito-repelling DEET.
The researchers also evaluated each participant’s potential exposure to flame-retardant plasticizing agents. They posited that this analysis could be used to monitor exposure to additional environmental hazards.
While the approach is not a replacement for DNA or fingerprint analyses, Bouslimani and colleagues hope that it might fill in gaps when DNA samples are contaminated or fingerprints recovered are only partials or not in a database.
“This work is exciting and very thought-provoking,” says Glen Jackson at West Virginia University, an expert in forensic analyses by mass spectrometry.
Jackson is cautious, however, about the accuracy of linking predicted activities with mass spectrometry-confirmed exposure to chemicals.
For example, while the presence of DEET based on data analysis may be very reliable information, he says, “proving that the lifestyle, or activity level, of the suspect is camping versus gardening is a different proposition altogether.” He added that there’s more work to be done to make sure that the results of such testing aren’t misconstrued.
The strength of the approach, according to Bouslimani, is the aggregate of the individual chemical signatures. “Our work flow doesn’t just detect one unique compound on this phone,” she says. “It is the combination of many such lifestyle chemistries that will help us to understand the personal habit and lifestyle.”
Bouslimani and colleagues hope to expand the breadth of their database, which would require the efforts of outside collaborators. “It has to be now a community effort,” she says. “We really hope that other people will start to apply this technology, to take this kind of development to the next level in forensic application.”
In the meantime, Bouslimani and colleagues plan to expand the study to include 80 people and each subject’s keys, computers and wallets.
This post was written by John Arnst, ASBMB Today’s science writer.
October 4, 2016 § Leave a comment
What do skunks, decomposing cadavers and garlic have in common? Their odors contain sulfur, and humans are so sensitive to these odors that we can pick up mere whiffs of them.
A team of researchers now reports finding the olfactory receptor that gives us this exquisite sensitivity to sulfur-containing compounds. The receptor, known as OR2T11, requires metals for its activation. The finding, published in the Journal of the American Chemical Society, is the first report of a human olfactory receptor activated solely by metals.
Genetically speaking, olfactory receptors come in large numbers. There are about 400 genes for olfactory receptors in humans and 1,200 in mice. Figuring out which olfactory receptor picks up which scent is a challenge.
Finding OR2T11 demanded a multifaceted team. Victor Batista at Yale University, Lucky Ahmed and their group brought computational modeling expertise to the project. Eric Block at the University at Albany, State University of New York, is a chemist interested in organosulfur compounds and their smells. Jessica Burger at the National Institute of Standards and Technology in Boulder, Colorado, is a specialist in nuclear magnetic resonance spectroscopy. Hanyi Zhuang at the Shanghai Jiaotong University School of Medicine in China and colleagues are neuroscientists with expertise in olfactory receptors.
Zhuang’s laboratory has a cell-based system that can effectively express olfactory receptors. She says, “In this study, this platform enabled high-throughput screening of a human odorant receptor library and led to the discovery of the highly responsive thiol receptor OR2T11.”
The investigators discovered that OR2T11 was particularly sensitive to tertiary-butyl mercaptan, also named 2-methyl-2-propanethiol, as well as ethanethiol. These two compounds are interesting because they are used in the fuel industry, where a serious problem called odor masking occasionally arises. “Fuels sometimes cannot be smelled because of a combination of intermolecular and physiological interactions,” says Burger. “Utilities purchase fuel gas but, upon delivery, sometimes notice that there is little detectable or recognizable odor.”
Tertiary-butyl mercaptan is the main odorizing agent added to highly flammable natural gas, which is itself odorless. Ethanethiol gets added to liquified petroleum gas, which is also odorless and flammable. Both compounds give humans a warning smell if the fuels escape from containers.
The investigators also discovered that the receptor is activated by copper or silver. Although Zhuang says that the possible involvement of metal in olfaction has been proposed by chemists even before the cloning of the odorant receptors, Block adds, “there was only very limited experimental evidence for the role of metals in olfaction.”
The Batista group’s computations backed up the investigators’ experimental findings. The computations, Batista says, “enabled us to build a fully atomistic molecular model of the human odorant receptor with copper or silver binding sites for organosulfur compounds” that were consistent with experimental observations.
The investigators also were struck by how OR2T11 responds only to low-molecular-weight thiols — those with five or fewer carbon atoms — even though thiols with six or more carbons also have strong odors. Block says the finding indicates that size matters when dealing with particular classes of odorants interacting with their most responsive olfactory receptors.
Given that ionic and nanoparticulate silver can activate the receptor and enhance its sensitivity to sulfur-based compounds, Block notes there can be environmental issues. The finding “is a potentially important observation given that there are concerns about nanoparticulate metals in the environment, for example in bodies of water, which could impact olfaction in fish,” he says.
August 25, 2016 § Leave a comment
In healthy people, the adrenal glands putter away atop the kidneys, releasing hormones as needed. For most people with congenital adrenal hyperplasia, the adrenal glands produce the glucocorticoid class of hormones, such as cortisol and corticosterone, in greatly diminished quantities.
The standard treatment for the disorder is hormone replacement therapy with hydrocortisone, the pharmaceutical version of cortisol. However, this tends to cause unpleasant side effects, such as obesity, hypertension and cardiovascular disease. In a paper just out in the journal Science Translational Medicine, researchers at the University of Edinburgh have found that using corticosterone can be just as effective as standard hydrocortisone, with fewer side effects.
Congenital adrenal hyperplasia affects about 1 out of every 10,000 people. When left unchecked, it can manifest as adrenal insufficiency, which can cause fatigue, depression, vomiting, severe abdominal pains and mood disorders. It can also lead to increased synthesis of adrenal androgens because adrenal androgens and glucocorticoids share precursor building blocks, which can have adverse effects on the development of primary or secondary sex characteristics in women.
Most cases of congenital adrenal hyperplasia are the result of a deficiency in 21-hydroxylase, an enzyme in the pathways that produce cortisol. Under stressful conditions, the anterior pituitary gland releases a molecule called adrenocorticotropic hormone, or ACTH. The ACTH travels to the adrenal glands atop the kidneys and stimulates the production of 17-hydroxyprogesterone, which is modified by 21-hydroxylase to ultimately become the stress-response hormone cortisol.
When glucocorticoid production is insufficient, as it is in congenital adrenal hyperplasia, 17-hydroxyprogesterone accumulates and is diverted to the synthesis of adrenal androgens, such as testosterone. Because cortisol normally feeds back to suppress ACTH release, the deficiency also causes a buildup of ACTH, which can cause unnatural darkening of the skin.
Treatment for these glucocorticoid deficiencies involves capsules of hydrocortisone, the pharmaceutical version of cortisol. But the treatment treads a fine line with cortisol toxicity, according to Brian Walker, a professor at the University of Edinburgh and the principal investigator on the Science Translational Medicine paper.
The best way to suppress the overproduction of androgens “is to suppress this ACTH, but if you suppress the ACTH you almost always end up giving a dose that produces adverse side effects,” says Walker.
Because those side effects are mediated in the adipose tissue, the researchers decided to scrutinize the presence of cortisol and corticosterone in human adipocytes, the cells that store energy as fat.
After examining the adipocytes’ expression of the ATP-binding cassette transporters ABCB1 and ABCC1, which were known to export cortisol and corticosterone, respectively, the researchers found that ABCC1, the transporter that clears cells of corticosterone, was dominant in the adipocytes. This suggested that corticosterone, not cortisol, would be the better choice for replacement therapy: because adipocytes pump it out, corticosterone wouldn’t stimulate activity in those cells and cause side effects.
To test this, Walker and his colleagues recruited two groups of six individuals with Addison’s disease, a condition similar to congenital adrenal hyperplasia. The investigators chose Addison’s disease patients over those with congenital adrenal hyperplasia to avoid any confounding effects of high androgen concentrations in the adipose tissues. They gave the patients either cortisol or corticosterone via short-term infusions.
Walker and colleagues found that while corticosterone treatments weren’t any more effective at suppressing circulating ACTH than cortisol treatments, they did reduce the presence of biomarkers for pathways in adipose tissues that lead to fat accumulation and hypertension.
Based on these results, Walker says, “We think it would be worth developing a treatment using corticosterone rather than cortisol or hydrocortisone. We are anticipating that that would have fewer side effects mediated in the adipose tissue for a dose that is equally efficacious in other tissues.”
At present, the researchers are working on developing such a therapy, Walker says, “because it doesn’t exist at present. We only have hydrocortisone tablets.”
This post was written by John Arnst, ASBMB Today’s science writer.
June 30, 2016 § Leave a comment
Seven milliliters of a king cobra’s venom can kill 20 people. But what exactly is in the snake’s venom? Researchers have pursued that question for decades.
Now, in a paper published in the journal Molecular & Cellular Proteomics, a team of researchers reveals a detailed account of the proteins in the venom of king cobras. “I believe this study to be one of the most complete and precise catalogues of proteins in a venom yet obtained,” says Neil Kelleher at Northwestern University, one of the study’s senior investigators.
Snake venoms have always intrigued scientists, because they “have a rich diversity of biological activities,” says Kelleher’s collaborator Gilberto Domont at Universidade Federal do Rio de Janeiro in Brazil. Among other things, venoms contain various proteases, lipases, nerve growth factors and enzyme inhibitors. Besides understanding how venoms function, researchers want to develop better antidotes to snake venom and identify molecules from venom that can be exploited as drugs, such as painkillers, anticlotting medications and blood pressure treatments. Domont points to captopril, a drug now commonly used to treat high blood pressure and heart failure. It was derived from a molecule found in the venom of a venomous Brazilian viper.
The king cobra is the largest venomous snake in the world, stretching up to 13 feet. Although its venom has been analyzed previously, questions persist. How do the sequences of the toxins vary evolutionarily? How do post-translational modifications on the proteins make the venom lethal? To answer these questions, researchers first need a proper count of the proteins in king cobra venom.
The advent of proteomics has allowed scientists to survey the rich diversity of proteins in a given sample. There are different approaches that rely on mass spectrometry to carry out proteomic analyses. One approach is called top-down proteomics. It allows researchers to look at proteins as whole, intact entities. In the more conventional approach, called bottom-up proteomics, proteins are cut into bite-sized fragments for analysis.
In bottom-up proteomics, researchers have to use computer algorithms to stitch back together protein fragments identified by mass spectrometry. Top-down proteomics avoids this problem. Its biggest advantage is that it can capture variations within the proteins as well as post-translational modifications.
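The "stitching" step in bottom-up proteomics can be pictured as mapping each identified peptide back to every candidate protein whose sequence contains it. The toy Python sketch below illustrates the idea only; the protein names, sequences and peptides are invented for illustration, and real protein-inference algorithms must additionally resolve peptides shared among multiple proteins.

```python
# Toy illustration of peptide-to-protein "stitching" in bottom-up
# proteomics. Sequences and names below are hypothetical examples,
# not real king cobra venom proteins.

def infer_proteins(peptides, protein_db):
    """Map each candidate protein to the peptides found within its sequence."""
    matches = {}
    for name, sequence in protein_db.items():
        hits = [p for p in peptides if p in sequence]
        if hits:  # keep only proteins supported by at least one peptide
            matches[name] = hits
    return matches

# Hypothetical protein database (name -> amino acid sequence)
protein_db = {
    "toxin_A": "MKTLLLTLVVVTIVCLDLGYTLK",
    "phospholipase_B": "NLYQFKNMIKCTVPSRSWWDFADYGCYCGRGGSG",
}

# Peptides "identified" by the mass spectrometer
peptides = ["GYTLK", "ADYGCYCG", "QQQQQ"]

print(infer_proteins(peptides, protein_db))
# -> {'toxin_A': ['GYTLK'], 'phospholipase_B': ['ADYGCYCG']}
```

In top-down proteomics this reassembly step is unnecessary, because the intact protein, with its sequence variants and modifications still attached, is measured directly.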
Kelleher’s group is one of the leaders in developing top-down proteomics, so that’s what the investigators decided to use to analyze king cobra venom. Domont, Kelleher, Domont’s graduate student Rafael Melani and colleagues obtained venom from two Malaysian king cobras held at the Kentucky Reptile Zoo. They analyzed the venom by top-down proteomics in two modes, denatured and native. In the denatured mode, the protein complexes were taken apart; in the native mode, the venom was kept as is so the protein complexes remained intact.
The investigators identified 113 proteins in king cobra venom as well as their post-translational modifications. Previously, only 17 proteins had been identified in king cobra venom.