Categories
Misc

Reading the tutorials — When to use two `GradientTape`s?

I am reading the advanced tutorials of TF 2.4, and I am confused
about the need to use two instances of GradientTape. This is the
case in the Pix2Pix and Deep Convolutional GAN examples, while the
CycleGAN example uses a single, persistent GradientTape.

It seems to me that the first approach makes both GradientTapes
record the operations of both networks, which sounds wasteful.
Intuitively, the second approach makes much more sense to me, and
it should use half as much memory as the first.

When should one use the first approach, and when the second?
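For concreteness, the two patterns can be sketched with a toy example (plain scalar variables standing in for the generator and discriminator losses; this is not the actual GAN tutorial code):

```python
import tensorflow as tf

x = tf.Variable(3.0)
y = tf.Variable(2.0)

# Pattern 1 (Pix2Pix / DCGAN style): one tape per set of variables.
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
    loss_g = x * y   # stand-in for the generator loss
    loss_d = x + y   # stand-in for the discriminator loss
grad_g = gen_tape.gradient(loss_g, [x])   # d(x*y)/dx = y = 2.0
grad_d = disc_tape.gradient(loss_d, [y])  # d(x+y)/dy = 1.0

# Pattern 2 (CycleGAN style): one persistent tape, queried twice.
# A non-persistent tape releases its recordings after the first
# gradient() call, so querying it twice requires persistent=True.
with tf.GradientTape(persistent=True) as tape:
    loss_g = x * y
    loss_d = x + y
grad_g2 = tape.gradient(loss_g, [x])
grad_d2 = tape.gradient(loss_d, [y])
del tape  # a persistent tape holds resources until it is deleted
```

Both patterns compute the same gradients here; the difference is only in how the recorded operations are held and released.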

submitted by /u/rmk236


Installing tensorflow-gpu is making me want to cry

why google

submitted by /u/cereal_final


Difficulty understanding code in the TensorFlow tutorial on neural machine translation

I am reading the TensorFlow tutorial on neural machine translation.

In the loss_function() they mask the loss on padded tokens, but my
question is: won't the cross-entropy function itself cancel out the
padded-token loss terms, so why do the masking?

submitted by /u/AI_Astronaut9852


Can someone explain what the error "AttributeError: 'NoneType' object has no attribute 'endswith'" is trying to say?

My code is:

    def get_checkpoint_every_epoch():
        checkpoint_every_epoch = 'model_checkpoints_every_epoch'
        checkpoints = ModelCheckpoint(filepath=checkpoint_every_epoch,
                                      frequency='epoch',
                                      save_weights_only=True,
                                      verbose=1)
        return checkpoints

    def get_checkpoint_best_only():
        checkpoint_best_path = 'model_checkpoints_best_only/checkpoint'
        checkpoint_best = ModelCheckpoint(filepath=checkpoint_best_path,
                                          save_weights_only=True,
                                          monitor='val_accuracy',
                                          save_best_only=True,
                                          verbose=1)
        return checkpoint_best

    def get_early_stopping():
        early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=3)
        return early_stopping

    checkpoint_every_epoch = get_checkpoint_every_epoch()
    checkpoint_best_only = get_checkpoint_best_only()
    early_stopping = get_early_stopping()

Followed by this,

    def get_model_last_epoch(model):
        model_last_epoch_file = tf.train.latest_checkpoint("checkpoints_every_epoch")
        model.load_weights(model_last_epoch_file)
        return model

    def get_model_best_epoch(model):
        model_best_epoch_file = tf.train.latest_checkpoint("checkpoints_best_only")
        model.load_weights(model_best_epoch_file)
        return model

    model_last_epoch = get_model_last_epoch(get_new_model(x_train[0].shape))
    model_best_epoch = get_model_best_epoch(get_new_model(x_train[0].shape))
    print('Model with last epoch weights:')
    get_test_accuracy(model_last_epoch, x_test, y_test)
    print('')
    print('Model with best epoch weights:')
    get_test_accuracy(model_best_epoch, x_test, y_test)

This is where I get the error:

    AttributeError                            Traceback (most recent call last)
    <ipython-input-18-b6d169507ca4> in <module>
          3 # Verify that the second has a higher validation (testing) accuarcy.
          4
    ----> 5 model_last_epoch = get_model_last_epoch(get_new_model(x_train[0].shape))
          6 model_best_epoch = get_model_best_epoch(get_new_model(x_train[0].shape))
          7 print('Model with last epoch weights:')

    <ipython-input-17-4c8cba016afe> in get_model_last_epoch(model)
         12     model_last_epoch_file = tf.train.latest_checkpoint("checkpoints_every_epoch")
         13
    ---> 14     model.load_weights(model_last_epoch_file)
         15
         16     return model

    /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in load_weights(self, filepath, by_name)
        179       raise ValueError('Load weights is not yet supported with TPUStrategy '
        180                        'with steps_per_run greater than 1.')
    --> 181     return super(Model, self).load_weights(filepath, by_name)
        182
        183   @trackable.no_automatic_dependency_tracking

    /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py in load_weights(self, filepath, by_name)
       1137         format.
       1138     """
    -> 1139     if _is_hdf5_filepath(filepath):
       1140       save_format = 'h5'
       1141     else:

    /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py in _is_hdf5_filepath(filepath)
       1447
       1448 def _is_hdf5_filepath(filepath):
    -> 1449   return (filepath.endswith('.h5') or filepath.endswith('.keras') or
       1450           filepath.endswith('.hdf5'))
       1451

    AttributeError: 'NoneType' object has no attribute 'endswith'

What does it mean? Sorry, I'm just a newbie and need some
enlightenment. I wish you a merry Christmas. Thanks a lot!
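The error can be reproduced in isolation. Note that the saving code above writes to 'model_checkpoints_every_epoch' while the loading code looks under "checkpoints_every_epoch"; when tf.train.latest_checkpoint finds no checkpoint under the given path, it returns None rather than raising. The directory name below is deliberately nonexistent:

```python
import tensorflow as tf

# latest_checkpoint returns None (rather than raising) when no checkpoint
# state exists under the given directory, e.g. because the name does not
# match where the checkpoint files were actually written.
path = tf.train.latest_checkpoint("no_such_checkpoint_dir")
assert path is None

# model.load_weights(path) would then call filepath.endswith(...) on None
# inside _is_hdf5_filepath, which raises:
# AttributeError: 'NoneType' object has no attribute 'endswith'
```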

submitted by /u/edmondoh001


How Come I Get Different Results From a TF Tutorial on my Machine?

So I am following a YouTube series here: https://www.youtube.com/watch?v=CA0PQS1Rj_4

And this person also posted their code on GitHub:
https://github.com/musikalkemist/Deep-Learning-Audio-Application-From-Design-to-Deployment/tree/master/4-%20Making%20Predictions%20with%20the%20Speech%20Recognition%20System

(I removed the model.h5 and data.json since I wanted to use a
model generated on my own PC)

I run the train.py which trains the model and get this as a
result: https://pastebin.com/mZSXK25v

When I test “down.wav”, it predicts “right”: https://pastebin.com/Up8EvNyc

When I test “left.wav”, it predicts “down”: https://pastebin.com/vzWzTV4X

How come I get different, in fact completely wrong, results no
matter what I test, despite getting 0.9358 accuracy?

submitted by /u/TuckleBuck88


How to overcome a wrong-dimension issue, where the shape of the input is different from expected

I was doing the assignment for the “Model validation on the Iris
dataset”.

I get this error: “Error when checking input: expected
dense_input to have shape (135,) but got array with shape (4,)”.
How do I overcome this problem?
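As a hedged sketch of what that message usually points at: the Iris data has 4 features per sample (and, here, apparently 135 training samples), and the first layer's input_shape should be the per-sample feature shape, not the sample count. The layer sizes below are illustrative, not the assignment's:

```python
import tensorflow as tf

# Iris: 4 features per sample, 3 classes. input_shape is the shape of ONE
# sample, so it should be (4,); passing the number of samples (135,) makes
# the model expect 135-dimensional inputs and fail on 4-feature rows.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),  # 3 Iris classes
])
```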

I posted this question at
https://stackoverflow.com/questions/65441320/how-to-overcome-the-wrong-dimension-issue.

I hope someone can highlight my error. Pardon me for asking this
question; I'm still a newbie, making basic errors here and there.
But anyway, I'd like to wish everyone a happy Christmas Day
ahead.

submitted by /u/edmondoh001


How do I rectify the attribute Early Stopping error?


submitted by /u/edmondoh001


Could anyone please help me understand what the ‘protos’ are in TF Object Detection?

Hi there,

I am a beginner and am struggling a bit to understand what the
‘protos’ are in TF Object Detection.

Why do we need them here?

Also, while setting up the TF API we need to download and
compile protocol buffers.

There is also a ‘protos’ folder when one downloads the object
detection module. Could anyone please explain what those are
and what the relationship between them is?

This would be of immense help!

Thanks!

submitted by /u/Mandala16180


Tensorflow 2 not respecting thread settings

I am running a TensorFlow application that sets inter_op,
intra_op and OMP_NUM_THREADS; however, it completely ignores these
settings and seems to run with the defaults. Here's how I'm setting
them:

    import tensorflow as tf

    print('Using Thread Parallelism: {} NUM_INTRA_THREADS, {} NUM_INTER_THREADS, {} OMP_NUM_THREADS'.format(
        os.environ['NUM_INTRA_THREADS'],
        os.environ['NUM_INTER_THREADS'],
        os.environ['OMP_NUM_THREADS']))
    session_conf = tf.compat.v1.ConfigProto(
        inter_op_parallelism_threads=int(os.environ['NUM_INTER_THREADS']),
        intra_op_parallelism_threads=int(os.environ['NUM_INTRA_THREADS']))
    sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
    tf.compat.v1.keras.backend.set_session(sess)

I have validated that it's reading the right values (the print shows the values as expected). I have also tried other TensorFlow 2 versions with no success. I am at a loss as to what I'm doing wrong.

Version info:

    tensorflow            2.2.0   py37_2        intel
    tensorflow-base       2.2.0   0             intel
    tensorflow-estimator  2.2.0   pyh208ff02_0
    keras                 2.4.3   0
    keras-base            2.4.3   py_0
    keras-preprocessing   1.1.0   py_1
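One possible explanation, offered as an assumption rather than a confirmed diagnosis: in TF 2.x the v1 Session config only applies to ops that are actually run through that session, while eager and Keras code use TensorFlow's native threading settings, which must be set before the first op executes:

```python
import tensorflow as tf

# TF 2.x native threading API; must be called before TensorFlow runs any
# op, otherwise it raises a RuntimeError. The thread counts here are
# illustrative placeholders, not recommended values.
tf.config.threading.set_inter_op_parallelism_threads(2)
tf.config.threading.set_intra_op_parallelism_threads(4)
```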

submitted by /u/dunn_ditty


Hey, Mr. DJ: Super Hi-Fi’s AI Applies Smarts to Sound

Brendon Cassidy, CTO and chief scientist at Super Hi-Fi, uses AI to give everyone the experience of a radio station tailored to their unique tastes. Super Hi-Fi, an AI startup and member of the NVIDIA Inception program, develops technology that produces smooth transitions, intersperses content meaningfully and adjusts volume and crossfade. Started three years ago, …

The post Hey, Mr. DJ: Super Hi-Fi’s AI Applies Smarts to Sound appeared first on The Official NVIDIA Blog.