AI is teaching cars to make better decisions, so could it do the same for surgeons? Addressing that question is the mission of Theator, a startup based in Palo Alto, Calif., with an R&D site in Tel Aviv, that’s striving to fuel the nascent revolution in autonomous surgery. Theator co-founder and Chief Technology Officer Dotan …
If I redid all my Python code in the TensorFlow C++ API, Cython, and, if necessary, plain C++, would it actually be any faster when the most time-consuming part is training the models? Does TF’s Python API already execute the code in much the same way as the C++ API would?
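One way to check where the time actually goes is to time a graph-compiled training step directly: once a step is wrapped in tf.function, the kernels generally run in TensorFlow’s C++ runtime regardless of which frontend built the graph. A minimal sketch, using a throwaway placeholder model and random data rather than any real training code:

```python
# Minimal sketch: the model, data, and step below are placeholders, not your code.
import time
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function  # traces the Python into a graph executed by the C++ runtime
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

x = tf.random.normal((256, 32))
y = tf.random.normal((256, 10))
train_step(x, y)  # first call includes the one-off tracing cost
start = time.perf_counter()
for _ in range(100):
    train_step(x, y)
print("avg step time (s):", (time.perf_counter() - start) / 100)
```

If nearly all of the wall-clock time sits inside steps like this, the choice of Python versus C++ frontend is unlikely to be the bottleneck.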
I’m trying to build an English-to-Assamese transliteration model. I tried character-level NMT with attention, but I’m not satisfied with the results, considering that Assamese makes heavy use of prefixes and suffixes. I’m currently exploring WFSTs. Has anybody worked on something similar?
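Not an answer to the WFST question, but for the character-level route, here is a minimal sketch of character tokenization that keeps Assamese code points intact; the word pair below is only an illustration:

```python
# Minimal sketch: character-level tokenization for a transliteration pair.
import tensorflow as tf

latin = tf.constant(["axom"])      # romanized input (illustrative example)
assamese = tf.constant(["অসম"])    # target in Assamese script (illustrative example)

# unicode_split keeps each Assamese code point as one token,
# which is what a character-level NMT model usually expects.
src_chars = tf.strings.unicode_split(latin, "UTF-8")
tgt_chars = tf.strings.unicode_split(assamese, "UTF-8")
print(src_chars)   # [[b'a', b'x', b'o', b'm']]
print(tgt_chars)   # one token per Assamese character

# A simple character vocabulary can then be built from the ragged tensor:
vocab = sorted(set(tgt_chars.flat_values.numpy().tolist()))
char_to_id = {c: i for i, c in enumerate(vocab)}
```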
I’ve been trying to install TensorFlow on my computer in a venv. When I do pip list, I am met with a list of modules, one of which is tensorflow 2.4.1, meaning it should have installed correctly(?).
However, when I run python3 and import tensorflow, I get an error saying tensorflow.python doesn’t exist. Any ideas?
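A common cause of this symptom is that pip and python3 point at different environments. A quick check from inside the activated venv (nothing here is specific to your setup):

```python
# Run inside the activated venv to confirm which interpreter and which
# TensorFlow installation are actually being used.
import sys
print(sys.executable)        # should point at the venv's bin/python3

import tensorflow as tf
print(tf.__version__)        # expect 2.4.1
print(tf.__file__)           # should live under the venv's site-packages
```

If sys.executable is not inside the venv, the pip that reported 2.4.1 likely belongs to a different environment than the python3 you are running.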
I have a project that I need help with. It involves detecting objects in a video down to an accuracy of a few pixels (stable background). If anyone has any expertise, please message me. I would love to get some help from this community. Thank you all 🙏🏻
I am trying to figure out how to generate images as vector graphics instead of raster images like normal. I cannot find any resources that seem to be tackling a similar goal.
I have built a system that follows the Pix2Pix tutorial, but there is no nice way to create derivatives. I have tried a brute-force method (subtracting the before image from the after image and dividing by the parameter change) and a more clever method using triangle areas, but the images never stop looking like random messes.
I tried using TensorFlow Agents to do RL, but once again I just end up with random messes.
Is there maybe a paper or resource out there that I am missing because I do not know the right search terms?
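Not a full answer, but on the derivative part specifically: if the generator is differentiable end to end, tf.GradientTape gives exact derivatives of the output image with respect to its input parameters, instead of the finite-difference subtraction described above. A conceptual sketch with a placeholder model (not your Pix2Pix network):

```python
# Conceptual sketch only: exact derivatives of a generated image with respect
# to its input parameters via automatic differentiation.
# `generator` is a placeholder differentiable model.
import tensorflow as tf

generator = tf.keras.Sequential([
    tf.keras.layers.Dense(64 * 64, activation="sigmoid"),
    tf.keras.layers.Reshape((64, 64)),
])

params = tf.Variable(tf.random.normal((1, 8)))   # e.g. vector-graphics parameters

with tf.GradientTape() as tape:
    image = generator(params)          # (1, 64, 64) raster output
    # Any scalar function of the image can be differentiated; here, its mean.
    value = tf.reduce_mean(image)

d_value_d_params = tape.gradient(value, params)  # shape (1, 8)
print(d_value_d_params)
```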
To improve brain simulation technology, a team of researchers from the University of Sussex developed a GPU-accelerated approach that can generate brain simulation models of almost-unlimited size.
Using a GPU-accelerated system built around an NVIDIA TITAN RTX GPU, the team created a cutting-edge model of a macaque’s visual cortex with 4.13 × 10⁶ neurons and 24.2 × 10⁹ synaptic weights, a simulation that could previously only be done on a supercomputer.
The neural network-based simulator uses the large amount of computational power of the GPU to procedurally generate connectivity and synaptic weights as spikes are triggered, without having to store connectivity data in memory, the researchers explained.
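As an illustration of the idea only (this is not the researchers’ actual GPU implementation), procedural connectivity can be sketched in a few lines: each presynaptic neuron’s outgoing connections are re-derived from a deterministic seed whenever it spikes, so no synapse table ever has to be stored. All names and sizes below are illustrative.

```python
# Conceptual sketch of procedural connectivity: regenerate connections on
# demand from a deterministic per-neuron seed instead of storing them.
import numpy as np

N_POST = 1_000_000        # postsynaptic population size (illustrative)
FAN_OUT = 1_000           # connections per presynaptic neuron (illustrative)

def outgoing_connections(pre_id, base_seed=1234):
    """Regenerate the same targets and weights for `pre_id` on every call."""
    rng = np.random.default_rng(base_seed + pre_id)
    targets = rng.integers(0, N_POST, size=FAN_OUT)
    weights = rng.normal(0.1, 0.02, size=FAN_OUT)
    return targets, weights

def propagate_spike(pre_id, input_current):
    # Connectivity is recomputed as the spike is processed,
    # so no terabyte-scale synapse table is ever held in memory.
    targets, weights = outgoing_connections(pre_id)
    np.add.at(input_current, targets, weights)

input_current = np.zeros(N_POST)
propagate_spike(pre_id=42, input_current=input_current)
```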
“Large-scale simulations of spiking neural network models are an important tool for improving our understanding of the dynamics and ultimately the function of brains. However, even small mammals such as mice have on the order of 1 × 10¹² synaptic connections meaning that simulations require several terabytes of data – an unrealistic memory requirement for a single desktop machine,” the researchers explained.
Pictured: Dr James Knight and Prof Thomas Nowotny of the University of Sussex School of Engineering and Informatics.
According to the team, initialization of the model took six minutes, and simulating each biological second took 7.7 minutes in the ground state and 8.4 minutes in the resting state – 35% less time than a previous supercomputer simulation.
Figure: Results of the full-scale multi-area model simulation in the ground and resting states.
“This research is a game-changer for computational neuroscience and AI researchers who can now simulate brain circuits on their local workstations, but it also allows people outside academia to turn their gaming PC into a supercomputer and run large neural networks.”