Singularity

www.designnews.com

One key objective for scientists developing robots is to give them a sense of touch similar to that of humans, so they can grasp and manipulate objects in a way that suits each object's composition. Researchers at Queen Mary University of London have developed a new low-cost sensor that directly measures parameters other sensors often neglect, achieving higher measurement accuracy, they said. "The L3 F-TOUCH measures interaction forces directly through an integrated mechanical suspension structure with a mirror system, achieving higher measurement accuracy and a wider measurement range," the researchers said. "The sensor is physically designed to decouple force measurements from geometry information. Therefore, the sensed three-axis force is immune to contact geometry compared to its competitors." **Paper** [L3 F-TOUCH: A Wireless GelSight With Decoupled Tactile and Three-Axis Force Sensing](https://ieeexplore.ieee.org/document/10173594?source=authoralert) **Abstract** GelSight sensors that estimate contact geometry and force by reconstructing the deformation of their soft elastomer from images yield poor force measurements when the elastomer deforms uniformly or reaches deformation saturation. Here we present an L3 F-TOUCH sensor that considerably enhances the three-axis force sensing capability of typical GelSight sensors. Specifically, the L3 F-TOUCH sensor comprises: (i) an elastomer structure resembling the classic GelSight sensor design for fine-grained contact geometry sensing; and (ii) a mechanically simple suspension structure to enable three-dimensional elastic displacement of the elastomer structure upon contact. Such displacement is tracked by detecting the displacement of an ARTag and is transformed to three-axis contact force via calibration. We further revamp the sensor's optical system by fixing the ARTag on the base and reflecting it to the same camera viewing the elastomer through a mirror. As a result, the tactile and force sensing modes can operate independently, and the entire L3 F-TOUCH remains Light-weight and Low-cost while facilitating wireLess deployment. Evaluations and experiment results demonstrate that the proposed L3 F-TOUCH sensor overcomes GelSight's limitation in force sensing and is more practical than equipping commercial three-axis force sensors. Thus, the L3 F-TOUCH could further empower existing Vision-based Tactile Sensors (VBTSs) in replication and deployment.
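
The calibration step in that pipeline is easy to picture in code. Below is a minimal sketch, assuming a purely linear displacement-to-force relationship with made-up gains: the tracked ARTag displacement is mapped to a three-axis force through a calibration matrix fitted by least squares against a reference sensor. The paper's actual calibration procedure may differ.

```python
import numpy as np

# Hypothetical illustration: map the elastomer carrier's 3D displacement
# (tracked via an ARTag) to a three-axis contact force through a linear
# calibration F = K @ d, with K fitted by least squares.
rng = np.random.default_rng(0)

# Synthetic calibration data: displacements (m) and reference forces (N)
# that a real setup would collect against a commercial force sensor.
true_K = np.array([[120.0,   0.0,   0.0],
                   [  0.0, 118.0,   0.0],
                   [  0.0,   0.0, 310.0]])          # stiffness-like gains (N/m), made up
d_calib = rng.uniform(-2e-3, 2e-3, size=(200, 3))    # tracked ARTag displacements
f_calib = d_calib @ true_K.T + rng.normal(0, 0.01, size=(200, 3))

# Fit the calibration matrix: solve d @ K^T ~= f for K.
K_fit, *_ = np.linalg.lstsq(d_calib, f_calib, rcond=None)

# At run time, a newly tracked displacement converts directly to force.
d_new = np.array([1e-3, -0.5e-3, 0.8e-3])
force = d_new @ K_fit
print("estimated force (N):", force)
```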

https://www.marktechpost.com/2023/08/20/researchers-at-stanford-crack-the-code-of-natural-vision-as-new-model-reveals-how-eyes-decode-visual-scene/

In a recent research paper, a group of researchers has made a significant advance by showing that a three-layer network model can predict retinal responses to natural scenes with precision that approaches the limits set by experimental data. The researchers wanted to understand how the brain processes natural visual scenes, so they focused on the retina, the part of the eye that sends signals to the brain. **Paper** [Interpreting the retinal neural code for natural scenes: From computations to neurons](https://www.sciencedirect.com/science/article/pii/S0896627323004671) **Abstract** Understanding the circuit mechanisms of the visual code for natural scenes is a central goal of sensory neuroscience. We show that a three-layer network model predicts retinal natural scene responses with an accuracy nearing experimental limits. The model’s internal structure is interpretable, as interneurons recorded separately and not modeled directly are highly correlated with model interneurons. Models fitted only to natural scenes reproduce a diverse set of phenomena related to motion encoding, adaptation, and predictive coding, establishing their ethological relevance to natural visual computation. A new approach decomposes the computations of model ganglion cells into the contributions of model interneurons, allowing automatic generation of new hypotheses for how interneurons with different spatiotemporal responses are combined to generate retinal computations, including predictive phenomena currently lacking an explanation. Our results demonstrate a unified and general approach to study the circuit mechanisms of ethological retinal computations under natural visual scenes.
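
For readers who want the flavor of such a model, here is a minimal sketch (not the authors' code, and with arbitrary layer sizes): a three-layer convolutional network that maps a short movie clip to non-negative firing rates, with the two hidden layers playing the role of model interneurons.

```python
import torch
import torch.nn as nn

# A toy three-layer CNN of the kind used to predict retinal ganglion cell
# responses from natural-scene movies: two convolutional "interneuron" layers
# followed by a readout per cell. Softplus keeps predicted rates non-negative.
class RetinaCNN(nn.Module):
    def __init__(self, n_frames=40, n_cells=8):
        super().__init__()
        self.layer1 = nn.Sequential(nn.Conv2d(n_frames, 8, kernel_size=15), nn.Softplus())
        self.layer2 = nn.Sequential(nn.Conv2d(8, 8, kernel_size=11), nn.Softplus())
        self.readout = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_cells), nn.Softplus())

    def forward(self, stimulus):                 # stimulus: (batch, frames, H, W)
        return self.readout(self.layer2(self.layer1(stimulus)))

model = RetinaCNN()
movie_clip = torch.randn(2, 40, 50, 50)          # 40-frame spatiotemporal stimulus
rates = model(movie_clip)                        # predicted rates, one per ganglion cell
print(rates.shape)                               # torch.Size([2, 8])
```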

www.assemblyai.com

Language models (LMs) are a class of probabilistic models that learn patterns in natural language. LMs can be used generatively to produce, say, the next event in a story by exploiting their knowledge of these patterns. In recent years, significant efforts have been put into scaling LMs into Large Language Models (LLMs). The scaling process - training bigger models on more data with greater compute - leads to steady and predictable improvements in their ability to learn these patterns, which can be observed in improvements to quantitative metrics. In addition to these steady quantitative improvements, the scaling process also leads to interesting qualitative behavior. As LLMs are scaled, they hit a series of critical scales at which new abilities are suddenly “unlocked”. LLMs are not directly trained to have these abilities, and they appear in rapid and unpredictable ways as if emerging out of thin air. These emergent abilities include performing arithmetic, answering questions, summarizing passages, and more, which LLMs learn simply by observing natural language. What is the cause of these emergent abilities, and what do they mean? In this article, we'll explore the concept of emergence as a whole before exploring it with respect to Large Language Models. We'll end with some notes about what this means for AI as a whole. Let's dive in!
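
A toy calculation makes the smooth-versus-sudden distinction concrete. In the sketch below (our own illustrative constants, not measured values), the loss falls smoothly as a power law in model size, while the exact-match accuracy of a multi-step task stays near zero before climbing sharply:

```python
import numpy as np

# Cartoon of emergence: cross-entropy loss improves smoothly with scale, yet a
# task requiring several steps to all be right can look like a sudden unlock.
N = np.logspace(6, 12, 7)                 # model sizes: 1M ... 1T parameters
loss = (8.8e13 / N) ** 0.076              # Kaplan-style power law, illustrative constants
p_step = np.exp(1.0 - loss)               # pretend per-step success probability
exact_match = p_step ** 5                 # 5-step task: every step must succeed

for n, l, acc in zip(N, loss, exact_match):
    print(f"N={n:8.0e}  loss={l:4.2f}  5-step accuracy={acc:.4f}")
```

The loss only drops by about a factor of three across six orders of magnitude of scale, but the task metric moves from effectively zero to clearly non-zero, which is the shape emergent-ability plots tend to have.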

www.technologynetworks.com

Efforts to restore speech to people silenced by brain injuries and diseases have taken a significant step forward with the publication of two new papers in the journal Nature. In the work, two multidisciplinary teams demonstrated new records of speed and accuracy for state-of-the-art, AI-assisted brain-computer interface (BCI) systems. The advances point the way to granting people who can no longer speak the ability to communicate at near conversation-level pace and even show how that text can be retranslated into speech using computer programs that mimic the patient’s voice. One group developed a digital avatar that a paralyzed patient used to communicate with accurate facial gestures. **Paper** [A high-performance speech neuroprosthesis](https://www.nature.com/articles/s41586-023-06377-x) **Abstract** Speech brain–computer interfaces (BCIs) have the potential to restore rapid communication to people with paralysis by decoding neural activity evoked by attempted speech into text [1,2] or sound [3,4]. Early demonstrations, although promising, have not yet achieved accuracies sufficiently high for communication of unconstrained sentences from a large vocabulary [1-7]. Here we demonstrate a speech-to-text BCI that records spiking activity from intracortical microelectrode arrays. Enabled by these high-resolution recordings, our study participant—who can no longer speak intelligibly owing to amyotrophic lateral sclerosis—achieved a 9.1% word error rate on a 50-word vocabulary (2.7 times fewer errors than the previous state-of-the-art speech BCI [2]) and a 23.8% word error rate on a 125,000-word vocabulary (the first successful demonstration, to our knowledge, of large-vocabulary decoding). Our participant’s attempted speech was decoded at 62 words per minute, which is 3.4 times as fast as the previous record [8] and begins to approach the speed of natural conversation (160 words per minute [9]). Finally, we highlight two aspects of the neural code for speech that are encouraging for speech BCIs: spatially intermixed tuning to speech articulators that makes accurate decoding possible from only a small region of cortex, and a detailed articulatory representation of phonemes that persists years after paralysis. These results show a feasible path forward for restoring rapid communication to people with paralysis who can no longer speak.
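
The headline numbers are word error rates (WER): the edit distance between the decoded word sequence and the reference transcript, divided by the reference length. A minimal implementation:

```python
# Word error rate (WER), the metric behind the 9.1% / 23.8% figures:
# Levenshtein distance over words, normalized by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j decoded words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                          # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                          # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("i want to see my family", "i went to see my family"))  # 1 sub / 6 words ~ 0.167
```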

https://ai.googleblog.com/2023/08/language-to-rewards-for-robotic-skill.html

In “Language to Rewards for Robotic Skill Synthesis”, we propose an approach to enable users to teach robots novel actions through natural language input. To do so, we leverage reward functions as an interface that bridges the gap between language and low-level robot actions. We posit that reward functions provide an ideal interface for such tasks given their richness in semantics, modularity, and interpretability. They also provide a direct connection to low-level policies through black-box optimization or reinforcement learning (RL). We developed a language-to-reward system that leverages LLMs to translate natural language user instructions into reward-specifying code and then applies MuJoCo MPC to find optimal low-level robot actions that maximize the generated reward function. We demonstrate our language-to-reward system on a variety of robotic control tasks in simulation using a quadruped robot and a dexterous manipulator robot. We further validate our method on a physical robot manipulator.
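
A stripped-down sketch of the two stages, with stand-ins for both the LLM and MuJoCo MPC (toy 1D dynamics and random-shooting optimization, not the paper's system):

```python
import numpy as np

# Stage 1 ("LLM"): an instruction like "stand up on hind legs" would be
# translated into reward-specifying parameters. Hard-coded here.
reward_spec = {"target_torso_height": 0.9, "effort_weight": 0.05}

def rollout_height(actions, h0=0.3):
    # Toy dynamics: torso height integrates bounded "push" actions.
    return h0 + np.cumsum(np.clip(actions, -0.1, 0.1))[-1]

def reward(actions):
    h = rollout_height(actions)
    return -abs(h - reward_spec["target_torso_height"]) \
           - reward_spec["effort_weight"] * np.sum(actions ** 2)

# Stage 2 ("MPC"): black-box optimization of low-level actions against the
# generated reward; random shooting stands in for MuJoCo MPC.
rng = np.random.default_rng(0)
candidates = rng.normal(0, 0.1, size=(512, 10))   # 512 random 10-step action plans
best = candidates[np.argmax([reward(a) for a in candidates])]
print("achieved height:", rollout_height(best))
```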

https://risk-averse-locomotion.github.io/

**Abstract** The robustness of legged locomotion is crucial for quadrupedal robots in challenging terrains. Recently, Reinforcement Learning (RL) has shown promising results in legged locomotion, and various methods integrate privileged distillation, scene modeling, and external sensors to improve the generalization and robustness of locomotion policies. However, these methods struggle to handle uncertain scenarios such as abrupt terrain changes or unexpected external forces. In this paper, we consider a novel risk-sensitive perspective to enhance the robustness of legged locomotion. Specifically, we employ a distributional value function learned by quantile regression to model the aleatoric uncertainty of environments, and perform risk-averse policy learning by optimizing the worst-case scenarios via a risk distortion measure. Extensive experiments in both simulation environments and on a real Aliengo robot demonstrate that our method efficiently handles various external disturbances, and the resulting policy exhibits improved robustness in harsh and uncertain situations in legged locomotion.
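
The two ingredients named in the abstract are compact enough to sketch. Below, under a toy setup of our own, are the quantile-regression (pinball) loss used to fit a distributional value function, and a CVaR-style risk distortion that scores a state by only its worst return quantiles:

```python
import torch
import torch.nn.functional as F  # noqa: F401 (kept for extension)

# Minimal sketch of quantile-regression value learning plus a CVaR distortion.
n_quantiles = 32
taus = (torch.arange(n_quantiles, dtype=torch.float32) + 0.5) / n_quantiles

def quantile_loss(pred_quantiles, target_returns):
    # Pinball loss: pred_quantiles (batch, n_quantiles); target_returns (batch, 1).
    diff = target_returns - pred_quantiles              # broadcast over quantiles
    return torch.max(taus * diff, (taus - 1) * diff).mean()

def cvar_value(pred_quantiles, alpha=0.25):
    # Risk-averse value: mean of the lowest alpha-fraction of return quantiles.
    k = max(1, int(alpha * n_quantiles))
    worst, _ = torch.sort(pred_quantiles, dim=-1)
    return worst[:, :k].mean(dim=-1)

q = torch.randn(4, n_quantiles).sort(dim=-1).values     # fake value distributions
print("pinball loss:", quantile_loss(q, torch.randn(4, 1)).item())
print("risk-averse values:", cvar_value(q))             # pessimistic value per state
```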

arxiv.org

**Abstract** Reinforcement learning from human feedback (RLHF) can improve the quality of large language model (LLM) outputs by aligning them with human preferences. We propose a simple algorithm for aligning LLMs with human preferences, inspired by growing batch reinforcement learning (RL), which we call Reinforced Self-Training (ReST). Given an initial LLM policy, ReST produces a dataset by generating samples from the policy, which are then used to improve the LLM policy using offline RL algorithms. ReST is more efficient than typical online RLHF methods because the training dataset is produced offline, which allows data reuse. While ReST is a general approach applicable to all generative learning settings, we focus on its application to machine translation. Our results show that ReST can substantially improve translation quality, as measured by automated metrics and human evaluation on machine translation benchmarks, in a compute- and sample-efficient manner.
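
Schematically, ReST alternates a Grow step with several Improve steps. A minimal mock-up with stub functions (the real system uses LLM decoding, a learned reward, and offline RL training, none of which are reproduced here):

```python
import random

# Grow: sample candidate translations from the current policy into an offline
# dataset. Improve: repeatedly fine-tune on the subset scoring above a rising
# reward threshold, reusing the same dataset (the source of ReST's efficiency).
def policy_sample(src):              # stand-in for LLM decoding
    return f"translation of {src} (v{random.randint(0, 9)})"

def reward_model(src, hyp):          # stand-in for a learned reward / metric
    return random.random()

def fine_tune(examples):             # stand-in for offline RL / BC training
    print(f"fine-tuning on {len(examples)} filtered examples")

sources = [f"sentence-{i}" for i in range(1000)]
dataset = [(s, policy_sample(s)) for s in sources]        # Grow step (offline)
for threshold in (0.5, 0.7, 0.9):                         # Improve steps
    kept = [(s, h) for s, h in dataset if reward_model(s, h) >= threshold]
    fine_tune(kept)
```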

https://research.ibm.com/blog/analog-ai-chip-inference

IBM Research has been investigating ways to reinvent the way that AI is computed. Analog in-memory computing, or simply analog AI, is a promising approach to address the challenge by borrowing key features of how neural networks run in biological brains. In our brains, and those of many other animals, the strength of synapses (which are the “weights” in this case) determines communication between neurons. For analog AI systems, we store these synaptic weights locally in the conductance values of nanoscale resistive memory devices such as phase-change memory (PCM), and perform multiply-accumulate (MAC) operations, the dominant compute operation in DNNs, by exploiting circuit laws, mitigating the need to constantly send data between memory and processor. **Paper** [A 64-core mixed-signal in-memory compute chip based on phase-change memory for deep neural network inference](https://www.nature.com/articles/s41928-023-01010-1) **Abstract** Analogue in-memory computing (AIMC) with resistive memory devices could reduce the latency and energy consumption of deep neural network inference tasks by directly performing computations within memory. However, to achieve end-to-end improvements in latency and energy consumption, AIMC must be combined with on-chip digital operations and on-chip communication. Here we report a multicore AIMC chip designed and fabricated in 14 nm complementary metal–oxide–semiconductor technology with backend-integrated phase-change memory. The fully integrated chip features 64 AIMC cores interconnected via an on-chip communication network. It also implements the digital activation functions and additional processing involved in individual convolutional layers and long short-term memory units. With this approach, we demonstrate near-software-equivalent inference accuracy with ResNet and long short-term memory networks, while implementing all the computations associated with the weight layers and the activation functions on the chip. For 8-bit input/output matrix–vector multiplications, in the four-phase (high-precision) or one-phase (low-precision) operational read mode, the chip can achieve a maximum throughput of 16.1 or 63.1 tera-operations per second at an energy efficiency of 2.48 or 9.76 tera-operations per second per watt, respectively.
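
The core trick is that Ohm's and Kirchhoff's laws do the multiply-accumulate for free. A toy numerical model of the idea (ours; real PCM arrays have richer non-idealities than a single Gaussian noise term):

```python
import numpy as np

# Weights live as conductances G in a crossbar, inputs are applied as voltages
# V, and Kirchhoff's current law sums I = G @ V per row: the MAC happens
# inside the memory array itself.
rng = np.random.default_rng(0)

weights = rng.normal(0, 1, size=(64, 256))            # one DNN layer (64 outputs)
scale = 1e-6                                          # weight -> conductance (siemens)
G = weights * scale
V = rng.uniform(0, 0.2, size=256)                     # activations as read voltages

G_programmed = G + rng.normal(0, 0.02 * scale, G.shape)  # imperfect programming
I = G_programmed @ V                                  # row currents (amperes)

exact = weights @ V                                   # what a digital MAC computes
analog = I / scale                                    # currents back in weight units
print("max abs deviation vs digital:", np.abs(analog - exact).max())
```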

https://arxiv.org/pdf/2308.01241.pdf

**Abstract** In this work, we present a computing platform named digital twin brain (DTB) that can simulate spiking neuronal networks at the scale of the whole human brain and, more importantly, with a personalized biological brain structure. In comparison to most brain simulations with a homogeneous global structure, we highlight that the sparseness, coupling, and heterogeneity in the sMRI, DTI, and PET data of the brain have an essential impact on the efficiency of brain simulation; our scaling experiments show that whole-human-brain simulation is a communication-intensive and memory-access-intensive computing workload rather than a computation-intensive one. We utilize a number of optimization techniques to balance and integrate the computation loads and communication traffic arising from the heterogeneous biological structure on general GPU-based HPC systems, and achieve leading simulation performance for whole-human-brain-scale spiking neuronal networks. On the other hand, the biological structure, equipped with mesoscopic data assimilation, enables the DTB to investigate brain cognitive function by a reverse-engineering method, which is demonstrated by a digital experiment of visual evaluation on the DTB. Furthermore, we believe that the developing DTB will be a promising and powerful platform for a wide range of research directions, including brain-inspired intelligence, brain disease medicine, and brain-machine interfaces.
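
A tiny leaky integrate-and-fire sketch (ours, vastly below the paper's scale) shows why the workload skews this way: each timestep routes sparse spike events to their targets rather than running a dense, arithmetic-heavy kernel.

```python
import numpy as np

# Minimal LIF network: the per-step cost is dominated by gathering who spiked
# and delivering those spikes across a sparse, heterogeneous coupling matrix.
rng = np.random.default_rng(0)
n = 2_000
weights = rng.random((n, n)) * (rng.random((n, n)) < 0.01)   # sparse coupling

v = np.zeros(n)                              # membrane potentials
tau, v_thresh = 20.0, 1.0
rate = 0.0
for step in range(200):
    spikes = v >= v_thresh                   # who fired this step
    v[spikes] = 0.0                          # reset after a spike
    # The "communication-intensive" part: routing spikes to their targets.
    drive = weights[:, spikes].sum(axis=1)
    v += -v / tau + 0.01 * drive + rng.normal(0.06, 0.02, n)
    rate += spikes.mean() / 200
print("mean fraction of neurons firing per step:", rate)
```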

arxiv.org

**Abstract** For the first time in the world, we succeeded in synthesizing the room-temperature superconductor (Tc ≥ 400 K, 127 °C) working at ambient pressure with a modified lead-apatite (LK-99) structure. The superconductivity of LK-99 is proved with the critical temperature (Tc), zero-resistivity, critical current (Ic), critical magnetic field (Hc), and the Meissner effect. The superconductivity of LK-99 originates from minute structural distortion by a slight volume shrinkage (0.48%), not by external factors such as temperature and pressure. The shrinkage is caused by Cu2+ substitution of Pb2+(2) ions in the insulating network of Pb(2)-phosphate, and it generates stress. It concurrently transfers to Pb(1) of the cylindrical column, resulting in distortion of the cylindrical column interface, which creates superconducting quantum wells (SQWs) in the interface. The heat capacity results indicated that the new model is suitable for explaining the superconductivity of LK-99. The unique structure of LK-99 that allows the minute distorted structure to be maintained in the interfaces is the most important factor that LK-99 maintains and exhibits superconductivity at room temperature and ambient pressure. https://doi.org/10.48550/arXiv.2307.12008

www.eurekalert.org

In a groundbreaking study, researchers have unlocked a new frontier in the fight against aging and age-related diseases. The study, conducted by a team of scientists at Harvard Medical School, describes the first chemical approach to reprogramming cells to a younger state. Previously, this was achievable only with a powerful gene therapy. **Journal Article** [Chemically induced reprogramming to reverse cellular aging](https://www.aging-us.com/article/204896/text) Abstract: A hallmark of eukaryotic aging is a loss of epigenetic information, a process that can be reversed. We have previously shown that the ectopic induction of the Yamanaka factors OCT4, SOX2, and KLF4 (OSK) in mammals can restore youthful DNA methylation patterns, transcript profiles, and tissue function, without erasing cellular identity, a process that requires active DNA demethylation. To screen for molecules that reverse cellular aging and rejuvenate human cells without altering the genome, we developed high-throughput cell-based assays that distinguish young from old and senescent cells, including transcription-based aging clocks and a real-time nucleocytoplasmic compartmentalization (NCC) assay. We identify six chemical cocktails, which, in less than a week and without compromising cellular identity, restore a youthful genome-wide transcript profile and reverse transcriptomic age. Thus, rejuvenation by age reversal can be achieved, not only by genetic, but also chemical means.
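
As a rough illustration of what a "transcription-based aging clock" means operationally, here is a toy version on synthetic data (not the authors' assay): a penalized regression from expression levels to age, whose prediction on a sample is its "transcriptomic age".

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy aging clock: regress chronological age on gene expression, then read
# out a sample's predicted ("transcriptomic") age. All data are synthetic.
rng = np.random.default_rng(0)
n_samples, n_genes = 300, 500
age = rng.uniform(20, 90, n_samples)
signal = np.outer(age, rng.normal(0, 0.05, n_genes))        # age-correlated genes
expression = signal + rng.normal(0, 1, (n_samples, n_genes))  # plus biological noise

clock = Ridge(alpha=10.0).fit(expression, age)
sample = expression[:1]                                     # one sample's profile
print("predicted transcriptomic age:", clock.predict(sample)[0])
```

A rejuvenation screen of the kind described would then look for treatments that lower this predicted age without disturbing identity markers.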

news.mit.edu

To advance our capabilities in protein engineering, MIT CSAIL researchers came up with “FrameDiff,” a computational tool for creating new protein structures beyond what nature has produced. The machine learning approach generates “frames” that align with the inherent properties of protein structures, enabling it to construct novel proteins independent of preexisting designs. **Journal Link** [SE(3) diffusion model with application to protein backbone generation](https://arxiv.org/abs/2302.02277) **Github Link** [SE(3) diffusion model with application to protein backbone generation](https://github.com/jasonkyuyim/se3_diffusion)
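
To make "frames" concrete: each residue carries a rotation plus a translation, and diffusion perturbs both. A cartoon of the forward-noising step (our simplification; the paper uses a proper IGSO(3) diffusion on rotations, not plain Gaussian axis-angle noise):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Each residue's frame = (rotation, translation). Forward noising perturbs
# translations with Gaussians and rotations with random axis-angle steps.
rng = np.random.default_rng(0)
n_res = 8
rots = Rotation.identity(n_res)
trans = np.cumsum(rng.normal(0, 1.5, (n_res, 3)), axis=0)   # a fake backbone trace

sigma = 0.3
noisy_trans = trans + rng.normal(0, sigma, trans.shape)
noise_rot = Rotation.from_rotvec(rng.normal(0, sigma, (n_res, 3)))
noisy_rots = noise_rot * rots                                # left-perturb each frame
print(noisy_rots.as_matrix().shape, noisy_trans.shape)       # (8, 3, 3) (8, 3)
```

The generative model is trained to reverse this noising, so sampling starts from random frames and denoises them into a plausible backbone.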

innovationorigins.com

Researchers have successfully realized logic gates using DNA crystal engineering, a monumental step forward in DNA computation. Their findings were published in Advanced Materials. Using DNA double crossover-like motifs as building blocks, they constructed complex 3D crystal architectures. The logic gates were implemented in large ensembles of these 3D DNA crystals, and the outputs were visible through the formation of macroscopic crystals. This advancement could pave the way for DNA-based biosensors, offering easy readouts for various applications. The study demonstrates the power of DNA computing, capable of executing massively parallel information processing at a molecular level, while maintaining compatibility with biological systems. **Journal Article** [Implementing Logic Gates by DNA Crystal Engineering](https://onlinelibrary.wiley.com/doi/full/10.1002/adma.202302345) Abstract: DNA self-assembly computation is attractive for its potential to perform massively parallel information processing at the molecular level while at the same time maintaining its natural biocompatibility. It has been extensively studied at the individual-molecule level, but much less in large 3D ensembles. Here, the feasibility of implementing logic gates, the basic computation operations, in large ensembles: macroscopic, engineered 3D DNA crystals is demonstrated. The building blocks are the recently developed DNA double crossover-like (DXL) motifs. They can associate with each other via sticky-end cohesion. Common logic gates are realized by encoding the inputs within the sticky ends of the motifs. The outputs are demonstrated through the formation of macroscopic crystals that can be easily observed. This study points to a new direction of construction of complex 3D crystal architectures and DNA-based biosensors with easy readouts.
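
The gate logic itself can be caricatured in a few lines: inputs select which sticky ends are present in solution, and the macroscopic output is whether every junction can hybridize. A toy abstraction (ours, with invented 4-base sticky ends, not the paper's sequences):

```python
# Inputs are encoded in sticky ends; crystals form (output 1) only when all
# adjacent sticky-end pairs are Watson-Crick complementary.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary(a: str, b: str) -> bool:
    return len(a) == len(b) and all(COMPLEMENT[x] == y for x, y in zip(a, reversed(b)))

def crystal_forms(motifs):
    # Assembly succeeds only if every junction between neighboring motifs pairs up.
    return all(complementary(m1["right"], m2["left"])
               for m1, m2 in zip(motifs, motifs[1:]))

def and_gate(input_a: bool, input_b: bool) -> bool:
    # Each input chooses a motif variant; only the "True" variants carry
    # mutually complementary sticky ends.
    a = {"left": "ACGT", "right": "TTGC" if input_a else "AAAA"}
    b = {"left": "GCAA" if input_b else "CCCC", "right": "ACGT"}
    return crystal_forms([a, b])

for x in (False, True):
    for y in (False, True):
        print(int(x), int(y), "->", "crystal (1)" if and_gate(x, y) else "no crystal (0)")
```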

longevity.technology

Back in 1956, Denham Harman proposed that aging is caused by the build-up of oxidative damage to cells, and that this damage is caused by free radicals produced during aerobic respiration [1]. Free radicals are unstable atoms that have an unpaired electron, meaning a free radical is constantly on the look-out for an atom with an electron it can pinch to fill the space. This makes them highly reactive, and when they steal electrons from your body’s cells, it is very damaging. **Journal Article** [Suppression of superoxide/hydrogen peroxide production at mitochondrial site IQ decreases fat accumulation, improves glucose tolerance and normalizes fasting insulin concentration in mice fed a high-fat diet](https://www.sciencedirect.com/science/article/pii/S0891584923004458?via%3Dihub)

www.nature.com

**Abstract** A spinal cord injury interrupts the communication between the brain and the region of the spinal cord that produces walking, leading to paralysis. Here, we restored this communication with a digital bridge between the brain and spinal cord that enabled an individual with chronic tetraplegia to stand and walk naturally in community settings. This brain–spine interface (BSI) consists of fully implanted recording and stimulation systems that establish a direct link between cortical signals and the analogue modulation of epidural electrical stimulation targeting the spinal cord regions involved in the production of walking. A highly reliable BSI is calibrated within a few minutes. This reliability has remained stable over one year, including during independent use at home. The participant reports that the BSI enables natural control over the movements of his legs to stand, walk, climb stairs and even traverse complex terrains. Moreover, neurorehabilitation supported by the BSI improved neurological recovery. The participant regained the ability to walk with crutches overground even when the BSI was switched off. This digital bridge establishes a framework to restore natural control of movement after paralysis.

https://msutoday.msu.edu/news/2023/msu-develops-brain-imaging-system-to-reveal-how-memories-are-made-recorded

“We want to know how memories are made and how they fail to be made in people with memory disorders like Alzheimer’s disease,” said Mark Reimers, an associate professor in the College of Natural Science and Institute for Quantitative Health Sciences and Engineering. “We’d like to investigate and track the evolution of a memory over time and even observe how things get mixed up in everyday memory.” Currently, high-resolution brain imaging techniques can capture only a few hundred individual neurons — the nerve cells that transmit electrical signals throughout the body — at a time. Starting with some initial seed money from the director of IQHSE, Christopher Contag, and MSU’s neuroscience program, Reimers and his co-investigator Christian Burgess at the University of Michigan were able to develop a prototype of the imaging system that has the potential to image 10,000 to 20,000 neurons, giving researchers an unprecedented view of brain activity in real time while it is making and recalling memories. This research has led to a three-year $750,000 grant from the Air Force Office of Scientific Research.

https://www.science.org/doi/10.1126/sciadv.adg4671

**Abstract** Diffraction-limited optical imaging through scattering media has the potential to transform many applications such as airborne and space-based imaging (through the atmosphere), bioimaging (through skin and human tissue), and fiber-based imaging (through fiber bundles). Existing wavefront shaping methods can image through scattering media and other obscurants by optically correcting wavefront aberrations using high-resolution spatial light modulators—but these methods generally require (i) guidestars, (ii) controlled illumination, (iii) point scanning, and/or (iv) static scenes and aberrations. We propose neural wavefront shaping (NeuWS), a scanning-free wavefront shaping technique that integrates maximum likelihood estimation, measurement modulation, and neural signal representations to reconstruct diffraction-limited images through strong static and dynamic scattering media without guidestars, sparse targets, controlled illumination, or specialized image sensors. We experimentally demonstrate guidestar-free, wide field-of-view, high-resolution, diffraction-limited imaging of extended, nonsparse, and static/dynamic scenes captured through static/dynamic aberrations. [Journal Article](https://www.science.org/doi/10.1126/sciadv.adg4671)
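
The principle being exploited is compact: scattering multiplies the pupil field by an unknown phase exp(iφ), and applying the estimated conjugate phase on a modulator undoes the blur. A bare-bones simulation of that correction step (ours, and deliberately idealized; NeuWS itself estimates φ by maximum likelihood over modulated measurements rather than being handed it):

```python
import numpy as np

# Cartoon of aberration and correction in the pupil plane. We "cheat" by
# reusing the true phase as the estimate to show what a perfect correction does.
rng = np.random.default_rng(0)
n = 64
scene = np.zeros((n, n))
scene[20:30, 40:44] = 1.0                                   # simple extended target

phi = rng.normal(0, 2.0, (n, n))                            # unknown aberration phase
pupil = np.fft.fft2(scene) * np.exp(1j * phi)               # aberrated pupil field
blurred = np.abs(np.fft.ifft2(pupil)) ** 2

phi_hat = phi                                               # pretend estimation succeeded
corrected = np.abs(np.fft.ifft2(pupil * np.exp(-1j * phi_hat))) ** 2
print("blurred peak:", blurred.max(), " corrected peak:", corrected.max())
```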


Announcement via [Twitter](https://twitter.com/sdorkenw/status/1674859033076072448) Papers: [Neuronal diagram of an adult (fruit fly) brain](https://www.biorxiv.org/content/10.1101/2023.06.27.546656v1) [A consensus cell type atlas from multiple connectomes reveals principles of circuit stereotypy and variation](https://www.biorxiv.org/content/10.1101/2023.06.27.546055v1) Explore the connectome: https://codex.flywire.ai

huggingface.co

**Abstract** Genomic (DNA) sequences encode an enormous amount of information for gene regulation and protein synthesis. Similar to natural language models, researchers have proposed foundation models in genomics to learn generalizable features from unlabeled genome data that can then be fine-tuned for downstream tasks such as identifying regulatory elements. Due to the quadratic scaling of attention, previous Transformer-based genomic models have used 512 to 4k tokens as context (<0.001% of the human genome), significantly limiting the modeling of long-range interactions in DNA. In addition, these methods rely on tokenizers to aggregate meaningful DNA units, losing single-nucleotide resolution where subtle genetic variations can completely alter protein function via single nucleotide polymorphisms (SNPs). Recently, Hyena, a large language model based on implicit convolutions, was shown to match attention in quality while allowing longer context lengths and lower time complexity. Leveraging Hyena's new long-range capabilities, we present HyenaDNA, a genomic foundation model pretrained on the human reference genome with context lengths of up to 1 million tokens at the single-nucleotide level, an up to 500x increase over previous dense attention-based models. HyenaDNA scales sub-quadratically in sequence length (training up to 160x faster than a Transformer), uses single-nucleotide tokens, and has full global context at each layer. We explore what longer context enables - including the first use of in-context learning in genomics for simple adaptation to novel tasks without updating pretrained model weights. On fine-tuned benchmarks from the Nucleotide Transformer, HyenaDNA reaches state-of-the-art (SotA) on 12 of 17 datasets using a model with orders of magnitude fewer parameters and less pretraining data. On the GenomicBenchmarks, HyenaDNA surpasses SotA on all 8 datasets, by +9 accuracy points on average. [Huggingface link](https://huggingface.co/papers/2306.15794) [ArXiv Paper](https://arxiv.org/abs//2306.15794)
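
Single-nucleotide tokenization is simple enough to show in full; the point is that no k-mer vocabulary hides SNP-level changes. A minimal sketch (our own toy vocabulary, not HyenaDNA's actual tokenizer code):

```python
# Each base is its own token, so a single-nucleotide change flips exactly one
# token and stays visible to the model.
VOCAB = {"A": 0, "C": 1, "G": 2, "T": 3, "N": 4}           # N = unknown base

def encode(seq: str) -> list[int]:
    return [VOCAB[base] for base in seq.upper()]

def decode(ids: list[int]) -> str:
    inv = {i: b for b, i in VOCAB.items()}
    return "".join(inv[i] for i in ids)

ids = encode("ACGTN" * 3)
print(ids[:8], decode(ids)[:8])
# An SNP flips exactly one token:
print(encode("ACGT"), "vs", encode("ACTT"))
```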

www.deepmind.com

New foundation agent learns to operate different robotic arms, solves tasks from as few as 100 demonstrations, and improves from self-generated data. Robots are quickly becoming part of our everyday lives, but they’re often only programmed to perform specific tasks well. While harnessing recent advances in AI could lead to robots that could help in many more ways, progress in building general-purpose robots is slower in part because of the time needed to collect real-world training data. Our latest paper introduces a self-improving AI agent for robotics, RoboCat, that learns to perform a variety of tasks across different arms, and then self-generates new training data to improve its technique. Previous research has explored how to develop robots that can learn to multi-task at scale and combine the understanding of language models with the real-world capabilities of a helper robot. RoboCat is the first agent to solve and adapt to multiple tasks and do so across different, real robots. RoboCat learns much faster than other state-of-the-art models. It can pick up a new task with as few as 100 demonstrations because it draws from a large and diverse dataset. This capability will help accelerate robotics research, as it reduces the need for human-supervised training, and is an important step towards creating a general-purpose robot.

www.statnews.com

The rat kidney was peculiarly beautiful — an edgeless viscera about the size of a quarter, gemstone-like and gleaming as if encased in pure glass. It owed its veneer to a frosty descent in liquid nitrogen vapor to minus 150 degrees Celsius, a process known as vitrification that shocked the kidney into an icy state of suspended animation. Then researchers at the University of Minnesota restarted the kidney’s biological clock, rewarming it before transplanting it back into a live rat — which survived the ordeal. In all, five rats received a vitrified-then-thawed kidney in a study whose results were published this month in Nature Communications. It’s the first time scientists have shown it’s possible to successfully and repeatedly transplant a life-sustaining mammalian organ after it has been rewarmed from this icy metabolic arrest. Outside experts unequivocally called the results a seminal milestone for the field of organ preservation. **Journal Article:** [Vitrification and nanowarming enable long-term organ cryopreservation and life-sustaining kidney transplantation in a rat model](https://www.nature.com/articles/s41467-023-38824-8)

newscenter.lbl.gov

To accelerate development of useful new materials, researchers are building a new kind of automated lab that uses robots guided by artificial intelligence. “Our vision is using AI to discover the materials of the future,” said Yan Zeng, a staff scientist leading the A-Lab at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab). The “A” in A-Lab is deliberately ambiguous, standing for artificial intelligence (AI), automated, accelerated, and abstracted, among others. Scientists have computationally predicted hundreds of thousands of novel materials that could be promising for new technologies – but testing to see whether any of those materials can be made in reality is a slow process. Enter A-Lab, which can process 50 to 100 times as many samples as a human every day and use AI to quickly pursue promising finds. A-Lab could help identify and fast-track materials for several research areas, such as solar cells, fuel cells, thermoelectrics (materials that generate energy from temperature differences), and other clean energy technologies. To start, researchers will focus on finding new materials for batteries and energy storage, addressing critical needs for an affordable, equitable, and sustainable energy supply.

www.nature.com

For most of the history of life on Earth, genetic information has been carried in a code that specifies just 20 amino acids. Amino acids are the building blocks of proteins, which do most of the heavy lifting in the cell; their side-chains govern protein folding, interactions and chemical activities. By limiting the available side chains, nature effectively restricts the kinds of reaction that proteins can perform. As a doctoral student in the 1980s, Peter Schultz found himself wondering why nature had restricted itself in this way — and set about trying to circumvent this limitation. Several years later, as a professor at the University of California, Berkeley, Schultz and his team managed to do so by tinkering with the machinery of protein synthesis. Although confined to a test tube, the work marked a key early success in efforts to hack the genetic code. Since then, many researchers have followed in Schultz’s footsteps, tweaking the cellular apparatus for building proteins both to alter existing macromolecules and to create polymers from entirely new building blocks. The resulting molecules can be used in research and for the development of therapeutics and materials. But it’s been a hard slog, because protein synthesis is a crucial cellular function that cannot easily be changed.

https://www.pnas.org/doi/10.1073/pnas.2218617120

**Significance** We demonstrate the highest-resolution MR images ever obtained of the mouse brain. The diffusion tensor images (DTI) at 15 μm spatial resolution are 1,000 times the resolution of most preclinical rodent DTI/MRI. Superresolution track density images are 27,000 times that of typical preclinical DTI/MRI. High angular resolution yielded the most detailed MR connectivity maps ever generated. High-performance computing pipelines merged the DTI with light sheet microscopy of the same specimen, providing a comprehensive picture of cells and circuits. The methods have been used to demonstrate how strain differences result in differential changes in connectivity with age. We believe the methods will have broad applicability in the study of neurodegenerative diseases. **Abstract** We have developed workflows to align 3D magnetic resonance histology (MRH) of the mouse brain with light sheet microscopy (LSM) and 3D delineations of the same specimen. We start with MRH of the brain in the skull with gradient echo and diffusion tensor imaging (DTI) at 15 μm isotropic resolution, which is ~1,000 times higher than that of most preclinical MRI. Connectomes are generated with superresolution tract density images of ~5 μm. Brains are cleared, stained for selected proteins, and imaged by LSM at 1.8 μm/pixel. LSM data are registered into the reference MRH space with labels derived from the ABA common coordinate framework. The result is a high-dimensional integrated volume with registration (HiDiver) with alignment precision better than 50 µm. Throughput is sufficiently high that HiDiver is being used in quantitative studies of the impact of gene variants and aging on mouse brain cytoarchitecture and connectomics.

https://www.digitaljournal.com/pr/news/xherald/worldwide-demand-for-autonomous-tractors-is-projected-to-rise-at-a-cagr-of-24-by-2033

In 2023, the market for autonomous tractors is expected to be worth US$1.5 billion. The total market value is predicted to increase at a phenomenal CAGR (Compound Annual Growth Rate) of 24% from 2023 to 2033, reaching US$13 billion.
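
The arithmetic checks out as ordinary compound growth:

```python
# US$1.5B growing at 24% per year over the ten years from 2023 to 2033.
start, cagr, years = 1.5e9, 0.24, 10
final = start * (1 + cagr) ** years
print(f"US${final / 1e9:.1f} billion")   # ~US$12.9 billion, i.e. roughly US$13B
```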

www.tomshardware.com

A team of researchers at New York University (NYU) has done the seemingly impossible: they've successfully designed a semiconductor chip with no hardware description language. Using only plain English - and the definitions and examples within it that can define and describe a semiconductor processor - the team showcased what human ingenuity, curiosity, and baseline knowledge can do when aided by the AI prowess of ChatGPT. While surprising, it goes further: the chip wasn't only designed. It was manufactured; it was benchmarked, and it worked. The two hardware engineers' usage of plain English showcases just how valuable and powerful ChatGPT can be (as if we still had doubts, following the number of awe-inspiring things it's done already). **Journal Link** [Chip-Chat: Challenges and Opportunities in Conversational Hardware Design](https://arxiv.org/abs/2305.13243)

www.newswise.com

Natural DNA is often double-stranded: one strand to encode the genes and one backup strand, intertwined in a double helix. The double helix is stabilized by Watson-Crick interactions, which allow the two strands to recognize and pair with one another. Yet there exists another, lesser-known class of interactions between DNA. These so-called normal or reverse Hoogsteen interactions allow a third strand to join in, forming a beautiful triple helix. In a recent paper, published in Advanced Materials, researchers from the Gothelf lab debut a general method to organize double-stranded DNA, based on Hoogsteen interactions. The study unambiguously demonstrates that triplex-forming strands are capable of sharply bending or “folding” double-stranded DNA to create compacted structures. The appearance of these structures ranges from hollow two-dimensional shapes to dense 3D constructs and everything in between, including a structure resembling a potted flower. Gothelf and co-workers have named their method triplex origami. **Journal Article:** [Folding Double-Stranded DNA into Designed Shapes with Triplex-Forming Oligonucleotides](https://onlinelibrary.wiley.com/doi/10.1002/adma.202302497)

https://www.sciencedirect.com/science/article/abs/pii/S1359835X22002366

**Abstract** Graphene has recently gained significant interest owing to its advantageous physicochemical and biological properties. However, its preparation strategies, main properties, chemical derivatives, and advanced applications in the multidimensional fields of lubrication, electricity, and tissue engineering are rarely reported. Hence, this review presents comprehensive discussions on current states of graphene as effective reinforcements to apply into these fields. First, graphene preparation methods are analyzed, and its main properties and chemical derivatives are discussed. Then, the friction-reduction and antiwear mechanisms of graphene are summarized. Next, the advanced applications of graphene in electricity and tissue engineering are described. Finally, the review is concluded by presenting outlooks on key challenges and future opportunities for extending preparation methods and multidimensional applications of the graphene-based materials.

https://onlinelibrary.wiley.com/doi/abs/10.1002/adma.202106506

**Abstract** Advances in nanoscience have enabled the synthesis of nanomaterials, such as graphene, from low-value or waste materials through flash Joule heating. Though this capability is promising, the complex and entangled variables that govern nanocrystal formation in the Joule heating process remain poorly understood. In this work, machine learning (ML) models are constructed to explore the factors that drive the transformation of amorphous carbon into graphene nanocrystals during flash Joule heating. An XGBoost regression model of crystallinity achieves an r2 score of 0.8051 ± 0.054. Feature importance assays and decision trees extracted from these models reveal key considerations in the selection of starting materials and the role of stochastic current fluctuations in flash Joule heating synthesis. Furthermore, partial dependence analyses demonstrate the importance of charge and current density as predictors of crystallinity, implying a progression from reaction-limited to diffusion-limited kinetics as flash Joule heating parameters change. Finally, a practical application of the ML models is shown by using Bayesian meta-learning algorithms to automatically improve bulk crystallinity over many Joule heating reactions. These results illustrate the power of ML as a tool to analyze complex nanomanufacturing processes and enable the synthesis of 2D crystals with desirable properties by flash Joule heating.
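
A sketch in the paper's spirit, on synthetic data with invented feature names and ranges (the study's actual features, values, and tuning differ):

```python
import numpy as np
from xgboost import XGBRegressor

# Fit a gradient-boosted regressor of crystallinity on flash-Joule-heating
# parameters, then inspect which features drive the prediction.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(50, 400, n),      # pulse charge (C)        -- hypothetical range
    rng.uniform(0.1, 1.0, n),     # current density (A/mm^2) -- hypothetical range
    rng.uniform(10, 500, n),      # pulse duration (ms)      -- hypothetical range
])
# Pretend crystallinity depends mostly on charge and current density:
y = 0.004 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(0, 0.1, n)

model = XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)
print("feature importances:", model.feature_importances_)
```

The paper's feature-importance and partial-dependence analyses are the production versions of this toy inspection step.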

https://voyager.minedojo.org/

**Abstract** We introduce Voyager, the first LLM-powered embodied lifelong learning agent in Minecraft that continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention. Voyager consists of three key components: 1) an automatic curriculum that maximizes exploration, 2) an ever-growing skill library of executable code for storing and retrieving complex behaviors, and 3) a new iterative prompting mechanism that incorporates environment feedback, execution errors, and self-verification for program improvement. Voyager interacts with GPT-4 via blackbox queries, which bypasses the need for model parameter fine-tuning. The skills developed by Voyager are temporally extended, interpretable, and compositional, which compounds the agent's abilities rapidly and alleviates catastrophic forgetting. Empirically, Voyager shows strong in-context lifelong learning capability and exhibits exceptional proficiency in playing Minecraft. It obtains 3.3x more unique items, travels 2.3x longer distances, and unlocks key tech tree milestones up to 15.3x faster than prior SOTA. Voyager is able to utilize the learned skill library in a new Minecraft world to solve novel tasks from scratch, while other techniques struggle to generalize.
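
The skill library is the most transferable idea and is easy to mock up: skills are stored as executable code and retrieved by matching a task description. A toy version (ours; the real system retrieves by embedding similarity and writes the skills with GPT-4):

```python
# Minimal mock of an ever-growing skill library of executable code.
class SkillLibrary:
    def __init__(self):
        self.skills = {}                       # description -> executable source

    def add(self, description: str, source: str):
        self.skills[description] = source     # grows as the agent masters tasks

    def retrieve(self, task: str) -> str | None:
        # Stand-in retrieval: pick the description sharing the most words
        # with the task (embedding similarity in the real system).
        scored = [(len(set(task.split()) & set(d.split())), d) for d in self.skills]
        score, best = max(scored, default=(0, None))
        return self.skills.get(best) if score > 0 else None

lib = SkillLibrary()
lib.add("craft wooden pickaxe", "def craft_wooden_pickaxe(bot): ...")
lib.add("mine iron ore", "def mine_iron_ore(bot): ...")
print(lib.retrieve("go mine some iron ore"))
```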

www.youtube.com

You can take a look at the technology demoed in this video at [Khanmigo](https://www.khanacademy.org/khan-labs)!

https://ieeexplore.ieee.org/document/8342278

**Abstract:** CMOS technology and its continuous scaling have made electronics and computers accessible and affordable for almost everyone on the globe; in addition, they have enabled the solutions of a wide range of societal problems and applications. Today, however, both the technology and the computer architectures are facing severe challenges/walls making them incapable of providing the demanded computing power with tight constraints. This motivates the need for the exploration of novel architectures based on new device technologies; not only to sustain the financial benefit of technology scaling, but also to develop solutions for extremely demanding emerging applications. This paper presents two computation-in-memory based accelerators making use of emerging memristive devices; they are Memristive Vector Processor and RRAM Automata Processor. The preliminary results of these two accelerators show significant improvement in terms of latency, energy and area as compared to today's architectures and design.

www.marktechpost.com

In this paper, authors from UCSB and Microsoft Research propose the LONGMEM framework, which enables language models to cache long-form prior context or knowledge in a non-differentiable memory bank and take advantage of it via a decoupled memory module, addressing the memory staleness problem. They create a residual side network (SideNet) to achieve decoupled memory. A frozen backbone LLM is used to extract the paired attention keys and values from the previous context into the memory bank. The attention queries of the current input are then used in SideNet's memory-augmented layer to access the cached keys and values from earlier contexts. The associated memory augmentations are then fused into the learned hidden states via a joint attention process. **Paper:** [Augmenting Language Models with Long-Term Memory](https://arxiv.org/abs/2306.07174)
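
A schematic of that retrieval step, with simplified shapes and a toy fusion rule standing in for SideNet's joint attention (not the paper's code):

```python
import torch
import torch.nn.functional as F

# A frozen backbone has cached keys/values for prior context in a memory bank;
# the current query attends over the retrieved top-k entries and the readout
# is fused with the local hidden state.
d, mem_size, top_k = 64, 10_000, 16
memory_k = torch.randn(mem_size, d)            # cached keys from the frozen backbone
memory_v = torch.randn(mem_size, d)            # cached values

def memory_augmented(query, hidden):           # query, hidden: (d,)
    sims = memory_k @ query                    # score all cached keys
    idx = sims.topk(top_k).indices             # retrieve top-k past entries
    attn = F.softmax(memory_k[idx] @ query / d ** 0.5, dim=0)
    retrieved = attn @ memory_v[idx]           # (d,) long-term memory readout
    return 0.5 * hidden + 0.5 * retrieved      # toy fusion of local + long-term

out = memory_augmented(torch.randn(d), torch.randn(d))
print(out.shape)                               # torch.Size([64])
```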

wyss.harvard.edu

Progress in drug testing and regenerative medicine could greatly benefit from laboratory-engineered human tissues built of a variety of cell types with precise 3D architecture. But production of human tissues larger than a millimeter in scale has been limited by a lack of methods for building tissues with embedded life-sustaining vascular networks.

https://www.biorxiv.org/content/10.1101/2021.12.15.472839v2.full.pdf

Context is an important part of understanding the meaning of natural language, but most neuroimaging studies of meaning use isolated words and isolated sentences with little context. In this study, we examined whether the results of neuroimaging language studies that use out-of-context stimuli generalize to natural language. We find that increasing context improves the quality of neuroimaging data and changes where and how semantic information is represented in the brain. These results suggest that findings from studies using out-of-context stimuli may not generalize to the natural language used in daily life.
