Machine Learning

  • Author: Various
  • Narrator: Various
  • Publisher: Podcast
  • Duration: 281:18:19


Synopsis

Machine learning is the most important technological breakthrough of the 21st century. Listen to my views on the future of machine learning.

Episodes

  • Text character encoder and decoder

    14/09/2021 Duration: 04min

    Thought vector --- Send in a voice message: https://anchor.fm/david-nishimoto/message

  • Recap of thoughts

    11/09/2021 Duration: 42min

    Weekly thoughts --- Send in a voice message: https://anchor.fm/david-nishimoto/message

  • 3D printed homes

    09/09/2021 Duration: 06min

    Thoughts --- Send in a voice message: https://anchor.fm/david-nishimoto/message

  • Thoughts on Codex for Python

    02/09/2021 Duration: 08min

    User experience writing code with the machine --- Send in a voice message: https://anchor.fm/david-nishimoto/message

  • LSTM memory mechanism

    30/08/2021 Duration: 07min

    The forget, memory, hide, or transfer cell gates of the LSTM; a minimal sketch of the standard gate equations follows. --- Send in a voice message: https://anchor.fm/david-nishimoto/message
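
    A minimal sketch of the standard LSTM gate equations in NumPy, for reference; the gate names follow the usual forget/input/output/candidate convention, and the shapes and example sizes are illustrative, not taken from the episode.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h_prev, c_prev, W, U, b):
        """One LSTM step. W, U, b are dicts keyed by gate:
        'f' (forget), 'i' (input), 'o' (output), 'g' (candidate cell)."""
        f = sigmoid(W['f'] @ x + U['f'] @ h_prev + b['f'])  # forget gate: what to erase from the cell
        i = sigmoid(W['i'] @ x + U['i'] @ h_prev + b['i'])  # input gate: what new content to write
        o = sigmoid(W['o'] @ x + U['o'] @ h_prev + b['o'])  # output gate: what to expose as hidden state
        g = np.tanh(W['g'] @ x + U['g'] @ h_prev + b['g'])  # candidate cell content
        c = f * c_prev + i * g    # cell state: keep part of the old memory, add part of the new
        h = o * np.tanh(c)        # hidden state passed to the next time step
        return h, c

    # Illustrative sizes: input dim 8, hidden dim 4.
    rng = np.random.default_rng(0)
    W = {k: 0.1 * rng.normal(size=(4, 8)) for k in 'fiog'}
    U = {k: 0.1 * rng.normal(size=(4, 4)) for k in 'fiog'}
    b = {k: np.zeros(4) for k in 'fiog'}
    h, c = lstm_step(rng.normal(size=8), np.zeros(4), np.zeros(4), W, U, b)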

  • Neuromorphic computers: spiking architecture

    27/08/2021 Duration: 12min

    Spiking networks: the brain has very low power density and very low frequency, while deep learning uses in-memory computing. In spike-timing-dependent plasticity (STDP), the timing between the spikes is how learning takes place in the synapse. The synapse is connected to a pre-synaptic neuron and a post-synaptic neuron, and the input spikes from the pre-synaptic and post-synaptic neurons affect the weights on the synapse. The delay between the spikes determines how the synapse learns: the synapse will depress from negative current, with voltage flowing from the pre-neuron to the post-neuron, and if the pre-to-post voltage is positive you have excitation of the synapse (a minimal STDP weight-update sketch follows). A feedforward network creates a perceptron: multiple synapses connect to a post-neuron, and the perceptron is self-learning. The artificial neurons learn like biological neurons, and an artificial neuron can memorize objects. Recurrent networks represent feedback systems: in a recurrent network all neurons talk with each other, and the recurrent network has an inhibitor
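
    As a reference for the timing rule described above, a minimal pair-based STDP weight update in Python; the constants a_plus, a_minus, and tau and the clipping range are illustrative assumptions, not values from the episode.

    import numpy as np

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Weight change for one pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt > 0:   # pre fired before post: potentiation (excitation)
            dw = a_plus * np.exp(-dt / tau)
        else:        # post fired before pre: depression
            dw = -a_minus * np.exp(dt / tau)
        return float(np.clip(w + dw, 0.0, 1.0))  # keep the weight bounded

    # Example: a pre spike at 10 ms followed by a post spike at 15 ms strengthens the synapse.
    w = stdp_update(0.5, t_pre=10.0, t_post=15.0)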

  • Quantum dot magnetic based computer

    26/08/2021 Duration: 13min

    Quantum scientists have also shown that an array of single-electron transistors (SETs) creates a form of neural network. SETs construct computers that use individual electrons to carry information; the SETs' biggest problem is operating at room temperature. Quantum tunneling means they can interact capacitively rather than by current flow through wires. When their interactions result from the quantum tunneling of electrons, quantum dots can collectively behave as a form of quantum cellular automaton (QCA), and QCA computers may show associative memory. If decoherence can be avoided, a qubit can hold 0, 1, or a superposition of both at the same time. Five qubits could handle 32 states (2^n) simultaneously, whereas a conventional computer would handle 32 sets of 5 bits, or 160 bits in all. A 64-bit encryption could be processed with one 64-qubit operation, whereas a conventional computer would require 2^64, about 1.84 x 10^19, operations, or roughly 292.5 years; about 18 billion billion times more powerful than a 64-bit binary computer (the arithmetic is checked in the sketch below). --- Send in a voice message: https://anchor.fm/david-nishimoto/message
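
    A quick check of the quoted numbers; the 2 GHz classical rate is an assumption chosen only because it reproduces the 292.5-year figure in the summary.

    n = 5
    print(2 ** n)      # 32 superposed states for 5 qubits
    print(32 * 5)      # 160 classical bits for 32 separate 5-bit values

    ops = 2 ** 64                          # about 1.84 x 10^19 operations
    rate = 2e9                             # assumed 2 GHz conventional machine
    years = ops / rate / (365 * 24 * 3600)
    print(f"{ops:.2e} operations ~ {years:.1f} years at 2 GHz")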

  • Adiabatic Quantum Computing

    24/08/2021 Duration: 07min

    In adiabatic QC you evolve the system under a time-dependent Hamiltonian. The two constraints are that you start in the initial ground state and that the Hamiltonian changes very slowly in time, so at all times your quantum state remains close to the instantaneous ground state (the standard interpolation is written out below). --- Send in a voice message: https://anchor.fm/david-nishimoto/message
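
    The episode states the adiabatic condition in words; as a reference, a standard textbook way to write the slowly varying Hamiltonian (not taken from the episode) is the linear interpolation

    \[ H(t) = \left(1 - \frac{t}{T}\right) H_0 + \frac{t}{T}\, H_P , \qquad 0 \le t \le T , \]

    where H_0 has an easily prepared ground state, H_P encodes the problem, and the adiabatic theorem guarantees the state stays close to the instantaneous ground state when the total time T is large compared with the inverse square of the minimum spectral gap of H(t).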

  • AI startup 12

    23/08/2021 Duration: 06min

    Bay Labs focuses on bringing AI to healthcare by studying medical imaging with deep learning. Bay Labs wants to increase the quality of, value of, and access to medical imaging. Medical imaging is used to detect health defects, and Bay Labs asks how we can bring more imaging to more people. The US spends $100 billion annually on medical imaging. Bay Labs focuses on ultrasound, which is very affordable and can image very well; today, ultrasound devices are a handheld probe and a tablet for display. What does it mean to interpret ultrasound images? Experts recognize certain biological structures and certain defect features, then build a story of what is going on in the image. In 2013, machines began recognizing objects at a glance better than humans. If neural networks are so good at recognizing objects outside the body, then apply them to objects within the body. The Bay Labs object recognition can be put on the same device that gathers the ultrasound images. Bay Labs used technology built in

  • Recap thoughts

    21/08/2021 Duration: 40min

    Thoughts --- Send in a voice message: https://anchor.fm/david-nishimoto/message

  • Luis Serrano talks about quantum computing

    20/08/2021 Duration: 34min

    Quantum computing --- Send in a voice message: https://anchor.fm/david-nishimoto/message

  • Codex

    20/08/2021 Duration: 22min

    NLP generative computer code --- Send in a voice message: https://anchor.fm/david-nishimoto/message

  • Recap thoughts

    16/08/2021 Duration: 10min

    Weekly thoughts --- Send in a voice message: https://anchor.fm/david-nishimoto/message

  • Text Transformers will change everything

    14/08/2021 Duration: 42min

    Recap of thoughts

  • AI startup 11

    12/08/2021 Duration: 07min

    Veritone wants to change the world and build something never built before; Veritone wants to make an impact. Veritone has analyzed video to detect criminal behavior or criminals and helped police capture them. Veritone built a platform that processes three to four million hours of content every quarter. Veritone team members' ideas are welcome: may the best idea win. Veritone helps catalog giants' video content, making it searchable and accessible to the fans. Veritone offers text-to-speech and can analyze voice for pattern variance, and it uses AI to analyze energy grid performance. Veritone has built an AI operating system; the path to AI is collaboration, and a human in the loop provides the cognitive processing. Veritone can learn customer voices: the call is transcribed to text and then AI analyzes the text. The AI domains of Veritone's operating-system platform are vision, speech, data, biometrics, audio, and text ingestion and analysis. --- Send in a voice message: https://anchor.fm/david-nishimoto/message

  • AI startup 10

    09/08/2021 Duration: 15min

    AIBrain strives for human-like intelligence. Cognitive reasoning makes humans want to learn about the world, and people remember states, or memories, about the world. The increased computing power of mobile phones has allowed consumers to enjoy AI like Siri. AI has been evolving steadily rather than through revolution, and AI has enormous difficulty understanding human language. AIBrain wants to deliver AI intelligent characters. AI will take care of mundane tasks while human beings work on imaginative work. Cars and airplanes are tools used by humans; likewise, AI will be a tool to augment human intelligence. https://lnkd.in/g2-mZrh --- Send in a voice message: https://anchor.fm/david-nishimoto/message

  • Cerebras CS-1

    05/08/2021 Duration: 04min

    Cerebras creates a chip with over a trillion transistors and can run deep learning algorithms faster than other infrastructure. Cerebras has 400,000 cores in a single chip. The main system is called the CS-1. The CS-1 has redundant cores for failover controlled by software, 18 gigabytes of on-chip memory, and 9 petabytes per second of memory bandwidth. Each processor has its own memory, and the CS-1 has high-speed communication between the cores. --- Send in a voice message: https://anchor.fm/david-nishimoto/message

  • AI startup 9

    05/08/2021 Duration: 08min

    Deepset works to get more meaningful search results. Deepset uses transfer learning, language models, and question answering to drive search results and to make sense of text data. Deepset is an open-source company; it uses natural language processing to answer questions using BERT (MiniLM is a small version of BERT). Deepset uses extractive question answering, where the answer must be part of the text as one span. This is an energy problem: there is a large set of potential predictions. SQuAD has become the default dataset. BERT is a feedforward network predicting the start and end token of the extractive answer, which is then mapped to the word dictionary to produce an answer (a minimal extractive-QA sketch follows). Industry wants question answering for its data: there is a growing number of knowledge workers spending hours and hours reading text, and knowledge workers are used to web search and enterprise search. Scaling involves gathering a large number of documents. The road map includes generative question answering and synthetic reasoning. --- Send in a voice message: https://anchor.fm/david-nishimoto/message
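
    A minimal extractive question-answering sketch using the Hugging Face transformers pipeline with a deepset SQuAD-tuned model; the specific model name, question, and context below are illustrative assumptions, not details from the episode.

    from transformers import pipeline

    # Load an extractive QA model fine-tuned on SQuAD 2.0 (downloads on first use).
    qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

    context = ("Deepset builds open-source NLP tooling. Extractive QA models "
               "predict a start token and an end token, and the span between "
               "them is returned as the answer.")
    result = qa(question="What do extractive QA models predict?", context=context)
    print(result["answer"], result["score"])  # answer span plus a confidence score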

  • AI startup 8

    02/08/2021 Duration: 02min

    Soul Machines is trying to humanize AI with an avatar called Sam. Sam simulates an AI brain reacting to noise, speech, and information. Sam has facial expressions, can interact back with the user, and can remember the conversation. Sam uses the webcam to see the user, and the facial gestures make Sam seem more human. AI will help people interact better with things. https://lnkd.in/gfYQiJf --- Send in a voice message: https://anchor.fm/david-nishimoto/message

  • Blink frequency

    31/07/2021 Duration: 01min

    Emteq: facial emotion recognition (www.emteq.net). Emteq's software and hardware measure muscle tension in the face and send those signals to a deep learning network that takes the inputs and outputs facial expressions on an avatar. Eye tracking and blinking are also tracked. https://lnkd.in/g7X7Kht --- Send in a voice message: https://anchor.fm/david-nishimoto/message
