Much of 2020 may look best in the rearview mirror, but the year also held many moments of outstanding work, gems worth hitting the rewind button to see again. So, here’s a countdown — roughly in order of ascending popularity — of 10 favorite NVIDIA videos that hit YouTube in 2020.
The post Sparkles in the Rough: NVIDIA’s Video Gems from a Hardscrabble 2020 appeared first on The Official NVIDIA Blog.
a practiced eye for react.js and tensorflow
Does anyone have any insight into the contents of this
Stack Overflow post?
https://stackoverflow.com/questions/65402617/tensorflow-automl-model-in-react
Getting a little desperate.
submitted by /u/eagletongue
Object detection is a computer vision task that has benefited
greatly from recent progress in machine learning.
Now, with tools like the TensorFlow Object Detection API, you can
create reliable models quickly and fairly easily.
If you’re unfamiliar, the TensorFlow Object Detection API:
- supports TensorFlow 2,
- lets you employ state-of-the-art model architectures for object detection,
- gives you a simple way to configure models.
The tutorial shows everything from installation and setup all the
way to model training.
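As a taste of what a trained model gives you: a detection function loaded from an exported SavedModel returns arrays of boxes, scores, and class ids, and a common first post-processing step is score thresholding. A minimal sketch of that step (the function name and the 0.5 threshold are my own illustration, not part of the tutorial):

```python
import numpy as np

def filter_detections(boxes, scores, classes, min_score=0.5):
    """Keep only detections whose confidence meets the threshold.

    boxes:   (N, 4) array of [ymin, xmin, ymax, xmax], normalized coords
    scores:  (N,) array of confidences
    classes: (N,) array of integer class ids
    """
    keep = scores >= min_score          # boolean mask over detections
    return boxes[keep], scores[keep], classes[keep]
```

In practice you would feed this the `detection_boxes`, `detection_scores`, and `detection_classes` entries of the dictionary the exported model returns.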
submitted by /u/kk_ai
Transfer learning using a small dataset
I’m building an image classifier. I happen to have a small
dataset of ideal data. Can I train a model using this idealised
data, and somehow use it as a base for further training?
I’ve read through the docs; they all use ImageNet or
tensorflow-hub datasets. I can’t seem to find an example of using
your own data.
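One common pattern that fits this question is Keras transfer learning: freeze a pretrained base, add a small head, and train on your own folder of images. A minimal sketch, assuming a MobileNetV2 base; the class count, image size, and directory path are placeholders, and `weights=None` is used here only to avoid a download (you would normally pass `weights="imagenet"`):

```python
import tensorflow as tf

# Pretrained feature extractor; in real use, weights="imagenet".
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights=None)
base.trainable = False  # freeze the pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 = your class count
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Your own data: one subfolder per class, labels inferred from folder names.
# train_ds = tf.keras.preprocessing.image_dataset_from_directory(
#     "path/to/ideal_data", image_size=(160, 160))
# model.fit(train_ds, epochs=10)
```

After this first round you can unfreeze some top layers of the base and fine-tune with a low learning rate on the rest of your data.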
submitted by /u/BananaCharmer
Say you’re classifying the flowers dataset. Some images aren’t
as good as others. Would duplicating the images that are good
examples of a certain type help propagate the desired features in
the network?
E.g. if I duplicate a close-up of a certain type of flower head
within the dataset (say a rose within /roses), would it make the
network more biased towards the duplicates?
I have a handful of ideal examples and thousands of highly
variable examples. I’m unsure of the best strategy to bias the
model towards the good examples in my data.
submitted by /u/BananaCharmer
I am currently studying Industrial Engineering, but I’ve been
learning DL and ML on my own. My goal is to get a job related
to ML or AI. I don’t come from a traditional Computer Science
background; do you think this certificate will add value to my
CV?
submitted by /u/man_you_trust
Low light image enhancement TFJS
I am delighted to share the TensorFlow.js variants of MIRNet. Project repo: https://github.com/Rishit-dagli/MIRNet-TFJS. Please consider giving it a star if you like it.
submitted by /u/Rishit-dagli
How to use VGG16 in Kaggle inference?
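One relevant detail for Kaggle inference specifically: inference kernels usually run with internet disabled, so `weights="imagenet"` cannot download the weights file. A common workaround, sketched below with a placeholder path, is to attach the weights file as a Kaggle dataset and load it manually:

```python
import tensorflow as tf

# Placeholder path; in a Kaggle kernel this points at an attached dataset.
WEIGHTS = "../input/vgg16-weights/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5"

# Build the architecture without downloading anything.
model = tf.keras.applications.VGG16(
    include_top=False, weights=None, input_shape=(224, 224, 3))
# model.load_weights(WEIGHTS)  # uncomment inside the Kaggle kernel
```

With `include_top=False` the model acts as a feature extractor; you would then add your own classification head or run `model.predict` on preprocessed batches.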
submitted by /u/maifee
How to use Tokenizer with punctuation?
Hey, this is a question I had that I answered myself after some
research. I can’t find a flair more applicable than ‘Question’, so I
will just answer it myself, haha.
I was trying to use tf.keras.preprocessing.text.Tokenizer to
train a model for a language task. I wanted my model to include
certain punctuation in its output, like exclamation points and
commas and whatnot, and I wasn’t sure how to do this.
I figured that since the default value for filters in the
Tokenizer constructor is:
filters='!"#$%&()*+,-./:;<=>?@[\]^_`{|}~\t\n'
then I would just have to remove the punctuation that I want to
be recognized. I then spent a few hours training my model.
DON’T DO THIS. It will not treat the
punctuation as separate tokens; instead, your vocabulary will be
filled with entries such as ‘man’ vs ‘man.’ vs ‘man,’, etc. These
will all be separate tokens.
Instead, you should preprocess all of your sentences to include
spaces between any punctuation that you want. This is how I did
it:
def separate_punctuation(s, filters=',.()?'):
    new_s = ''
    for char in s:
        if char in filters:
            new_s += ' ' + char + ' '
        else:
            new_s += char
    return new_s.strip()
This way ‘Hello neighbor, how are you?’ will become ‘Hello
neighbor , how are you ?’. Thus, all punctuation will only take up
one element of your vocabulary and your model will generalize much,
much better.
Hope this saves someone else’s time.
submitted by /u/LivingPornFree