Categories
Offsites

ToTTo: A Controlled Table-to-Text Generation Dataset

In the last few years, research in natural language generation, used for tasks like text summarization, has made tremendous progress. Yet, despite achieving high levels of fluency, neural systems can still be prone to hallucination (i.e., generating text that is understandable, but not faithful to the source), which can prevent these systems from being used in many applications that require high degrees of accuracy. Consider an example from the Wikibio dataset, where the neural baseline model tasked with summarizing a Wikipedia infobox entry for Belgian football player Constant Vanden Stock incorrectly summarizes that he is an American figure skater.

While the process of assessing the faithfulness of generated text to the source content can be challenging, it is often easier when the source content is structured (e.g., in tabular format). Moreover, structured data can also test a model’s ability for reasoning and numerical inference. However, existing large-scale structured datasets are often noisy (i.e., the reference sentence cannot be fully inferred from the tabular data), making them unreliable for the measurement of hallucination in model development.

In “ToTTo: A Controlled Table-To-Text Generation Dataset”, we present an open domain table-to-text generation dataset created using a novel annotation process (via sentence revision) along with a controlled text generation task that can be used to assess model hallucination. ToTTo (shorthand for “Table-To-Text”) consists of 121,000 training examples, along with 7,500 examples each for development and test. Due to the accuracy of its annotations, this dataset is suitable as a challenging benchmark for research in high-precision text generation. The dataset and code are open-sourced on our GitHub repo.

Table-to-Text Generation
ToTTo introduces a controlled generation task in which a given Wikipedia table with a set of selected cells is used as the source material for the task of producing a single sentence description that summarizes the cell contents in the context of the table. The example below demonstrates some of the many challenges posed by the task, such as numerical reasoning, a large open-domain vocabulary, and varied table structure.

Example in the ToTTo dataset, where given the source table and set of highlighted cells (left), the goal is to generate a one sentence description, such as the “target sentence” (right). Note that generating the target sentence would require numerical inference (eleven NFL seasons) and understanding of the NFL domain.
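
To make the data format concrete, the sketch below shows roughly what a single training example looks like. The field names follow the description in the GitHub repo but are reproduced from memory, and the table contents are made up, so treat this as illustrative and consult the repo for the authoritative JSON schema.

```python
# Illustrative sketch of one ToTTo example (field names approximate, contents invented).
example = {
    "table_page_title": "Robert Quinn (American football)",  # hypothetical page
    "table_section_title": "Career statistics",
    "table": [  # list of rows; each cell records its value plus span/header info
        [{"value": "Season", "is_header": True, "column_span": 1, "row_span": 1},
         {"value": "Sacks",  "is_header": True, "column_span": 1, "row_span": 1}],
        [{"value": "2011", "is_header": False, "column_span": 1, "row_span": 1},
         {"value": "5.0",  "is_header": False, "column_span": 1, "row_span": 1}],
    ],
    "highlighted_cells": [[1, 0], [1, 1]],  # (row, column) indices of the selected cells
    "sentence_annotations": [{
        "original_sentence": "He recorded 5 sacks in his rookie season.",
        "final_sentence": "Robert Quinn recorded 5.0 sacks in the 2011 season.",
    }],
}
```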

Annotation Process
Designing an annotation process to obtain natural but also clean target sentences from tabular data is a significant challenge. Many datasets like Wikibio and RotoWire pair naturally occurring text heuristically with tables, a noisy process that makes it difficult to disentangle whether hallucination is caused primarily by data noise or by model shortcomings. On the other hand, one can ask annotators to write sentence targets from scratch that are faithful to the table, but the resulting targets often lack variety in terms of structure and style.

In contrast, ToTTo is constructed using a novel data annotation strategy in which annotators revise existing Wikipedia sentences in stages. This results in target sentences that are clean as well as natural, with interesting and varied linguistic properties. The data collection and annotation process begins by collecting tables from Wikipedia, where a given table is paired with a summary sentence collected from the supporting page context according to heuristics, such as word overlap between the page text and the table and hyperlinks referencing tabular data. This summary sentence may contain information not supported by the table and may contain pronouns whose antecedents are found only in the table, not in the sentence itself.

The annotator then highlights the cells in the table that support the sentence and deletes phrases in the sentence that are not supported by the table. They also decontextualize the sentence so that it is standalone (e.g., with correct pronoun resolution) and correct its grammar where necessary.

We show that annotators obtain high agreement on the above task: 0.856 Fleiss’ kappa for cell highlighting, and 67.0 BLEU for the final target sentence.
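
As a rough illustration of how such agreement numbers can be computed (this is not the actual ToTTo evaluation code, and the toy data below is invented), one could use the statsmodels implementation of Fleiss’ kappa for cell highlighting and sacrebleu for sentence-level agreement:

```python
# Sketch: computing annotator agreement on made-up toy data.
import sacrebleu
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Cell-highlighting agreement: each row is one table cell, each column one annotator,
# and each entry is the label the annotator gave (1 = highlighted, 0 = not highlighted).
cell_labels = [
    [1, 1, 1],
    [0, 0, 1],
    [1, 1, 1],
    [0, 0, 0],
]
counts, _ = aggregate_raters(cell_labels)  # -> (n_cells, n_categories) count table
print("Fleiss' kappa:", fleiss_kappa(counts))

# Target-sentence agreement: BLEU of one annotator's final sentences against another's.
hypotheses = ["robert quinn recorded 5.0 sacks in 2011."]
references = [["robert quinn had 5.0 sacks in the 2011 season."]]
print("BLEU:", sacrebleu.corpus_bleu(hypotheses, references).score)
```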

Dataset Analysis
We conducted a topic analysis on the ToTTo dataset over 44 categories and found that the Sports and Countries topics, each of which consists of a range of fine-grained topics, e.g., football/olympics for sports and population/buildings for countries, together comprise 56.4% of the dataset. The other 44% is composed of a much broader set of topics, including Performing Arts, Transportation, and Entertainment.

Furthermore, we conducted a manual analysis of the different types of linguistic phenomena in the dataset over 100 randomly chosen examples. The table below summarizes the fraction of examples that require reference to the page and section titles, as well as some of the linguistic phenomena in the dataset that potentially pose new challenges to current systems.

Linguistic phenomenon                            Percentage of examples
Require reference to page title                  82%
Require reference to section title               19%
Require reference to table description            3%
Reasoning (logical, numerical, temporal, etc.)   21%
Comparison across rows/columns/cells             13%
Require background information                   12%

Baseline Results
We present some baseline results of three state-of-the-art models from the literature (BERT-to-BERT, Pointer Generator, and the Puduppully 2019 model) on two evaluation metrics, BLEU and PARENT. In addition to reporting the score on the overall test set, we also evaluate each model on a more challenging subset consisting of out-of-domain examples. As the table below shows, the BERT-to-BERT model performs best in terms of both BLEU and PARENT. Moreover, all models achieve considerably lower performance on the challenge set, indicating the difficulty of out-of-domain generalization.

Model                    BLEU (overall)   PARENT (overall)   BLEU (challenge)   PARENT (challenge)
BERT-to-BERT             43.9             52.6               34.8               46.7
Pointer Generator        41.6             51.6               32.2               45.2
Puduppully et al. 2019   19.2             29.2               13.9               25.8
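
For readers who want to score their own models, a minimal sketch of corpus-level BLEU against a multi-reference test set might look like the following (sacrebleu is used here as an assumption, and the example sentences are invented; the official evaluation script in the repo also computes PARENT, which is not reproduced here):

```python
# Sketch: corpus-level BLEU over a multi-reference test set (illustrative data).
import sacrebleu

predictions = [
    "robert quinn played eleven seasons in the nfl.",
    "the team won the title in 1995.",
]
# One list per reference "stream"; ToTTo provides multiple references per test example.
references = [
    ["robert quinn spent eleven seasons in the nfl.", "the team won the 1995 title."],
    ["quinn played eleven nfl seasons.",              "they won the championship in 1995."],
]
print("BLEU:", sacrebleu.corpus_bleu(predictions, references).score)
```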

While automatic metrics can give some indication of performance, they are not currently sufficient for evaluating hallucination in text generation systems. To better understand hallucination, we manually evaluate the top-performing baseline to determine how faithful it is to the content in the source table, under the assumption that discrepancies indicate hallucination. To compute the “Expert” performance, for each example in our multi-reference test set, we held out one reference and asked annotators to compare it with the other references for faithfulness. As the results show, the top-performing baseline appears to hallucinate information ~20% of the time.

Model          Faithfulness (overall)   Faithfulness (challenge)
Expert         93.6                     91.4
BERT-to-BERT   76.2                     74.2

Model Errors and Challenges
In the table below, we present a selection of the observed model errors to highlight some of the more challenging aspects of the ToTTo dataset. We find that state-of-the-art models struggle with hallucination, numerical reasoning, and rare topics, even when using cleaned references. The last example shows that even when the model output is correct, it is sometimes not as informative as the original reference, which contains more reasoning about the table.

Reference: in the 1939 currie cup, western province lost to transvaal by 17–6 in cape town.
Model prediction: the first currie cup was played in 1939 in transvaal at newlands, with western province winning 17–6.

Reference: a second generation of microdrive was announced by ibm in 2000 with increased capacities at 512 mb and 1 gb.
Model prediction: there were 512 microdrive models in 2000: 1 gigabyte.

Reference: the 1956 grand prix motorcycle racing season consisted of six grand prix races in five classes: 500cc, 350cc, 250cc, 125cc and sidecars 500cc.
Model prediction: the 1956 grand prix motorcycle racing season consisted of eight grand prix races in five classes: 500cc, 350cc, 250cc, 125cc and sidecars 500cc.

Reference: in travis kelce’s last collegiate season, he set personal career highs in receptions (45), receiving yards (722), yards per receptions (16.0) and receiving touchdowns (8).
Model prediction: travis kelce finished the 2012 season with 45 receptions for 722 yards (16.0 avg.) and eight touchdowns.

Conclusion
In this work, we presented ToTTo, a large, English table-to-text dataset that provides both a controlled generation task and a data annotation process based on iterative sentence revision. We also provided several state-of-the-art baselines, and demonstrated that ToTTo can be a useful dataset for modeling research as well as for developing evaluation metrics that can better detect model improvements.

In addition to the proposed task, we hope our dataset can also be helpful for other tasks such as table understanding and sentence revision. ToTTo is available at our GitHub repo.

Acknowledgements
The authors wish to thank Ming-Wei Chang, Jonathan H. Clark, Kenton Lee, and Jennimaria Palomaki for their insightful discussions and support. Many thanks also to Ashwin Kakarla and his team for help with the annotations.

Categories
Misc

Electric Avenue: NVIDIA Engineer Revs Up Classic Car to Sport AI

Arman Toorians isn’t your average classic car restoration hobbyist. The NVIDIA engineer recently transformed a 1974 Triumph TR6 roadster at his home workshop into an EV featuring AI. Toorians built the vehicle to show that a classic car can be recycled into an electric ride that taps NVIDIA Jetson AI for safety, security and vehicle management. Read article >

The post Electric Avenue: NVIDIA Engineer Revs Up Classic Car to Sport AI appeared first on The Official NVIDIA Blog.

Categories
Misc

How XSplit Delivers Rich Content for Live Streaming with NVIDIA Broadcast

In this interview, Miguel Molina, Director of Developer Relations at SplitmediaLabs, the makers of XSplit, discussed how they were able to easily integrate NVIDIA Broadcast into their vastly popular streaming service.


For those who may not know you, tell us about yourself.

My name is Miguel Molina, currently the Director of Developer Relations at SplitmediaLabs, the makers of XSplit. I’ve been with the company since before its inception, starting out as a software engineer, moving on to product management, and finally landing in business development where I work with our industry partners to find integrations and opportunities that bring value to our customers.

Tell us about XSplit and the success of the company thus far.

XSplit is the brand that got us to where we are now and XSplit Broadcaster is the hero product behind it all. It’s a simple yet powerful live streaming and recording application for producing and delivering rich video content that powers countless live streams and recordings around the world.

What excited you most about NVIDIA Broadcast Engine?

Being able to add value to our products is a priority for us and the NVIDIA Broadcast Engine gives us just that in a straightforward package. With features that improve video, audio, and augmented reality, the SDK has the potential to massively improve the output of different types of media, vastly improving the user experience for various use cases.

Why were you interested in integrating the Audio Effects SDK?

We were looking for an alternative to CPU-based background noise removal, and NVIDIA’s demo videos showing off the noise removal feature sold us on the idea. After receiving a sample, we decided to commit to integrating it into XSplit Broadcaster.

How was the experience integrating the SDK?

It was as simple as looking at the sample code, putting the relevant code segments in their proper places, and hitting compile. The initial integration took just a few hours, and a working build was available the same day we started on it.

Any surprises or unexpected challenges?

We initially saw massive CUDA utilization in an early alpha build of the SDK, but NVIDIA engineers were very responsive: they quickly isolated the issue on their end and provided an updated build that fixed the problem.

How have your users responded to the improved experience?

Our users love the fact that they are able to utilize NVIDIA’s noise removal natively within XSplit Broadcaster. It’s as simple as turning it on and it just works.

What new features or SDKs from NVIDIA are you looking forward to now?

We are looking to update our NVIDIA Video Codec SDK implementation so we can provide more granular preset control over quality versus performance on NVENC.

Which of the NBX SDKs are you most interested in beyond Audio?

Definitely the Video Effects SDK as their Virtual Background and Super Resolution features would be quite useful with people mostly staying at home these days.

+++

Developers can download XSplit Broadcaster here.

To learn more about NVIDIA Broadcast, or to get started, visit our page here.

Categories
Misc

How do I identify matching objects in a pair of stereo images?


(Image: Left and Right images)

So, for instance, I have a pair of stereo images (as an example, here I have duplicated the photo to represent left and right images) of certain objects (in this case dogs and cats). I want to match the dogs in the two images, i.e., the network should identify that if there’s a ‘Dog 1’ in the left image, which dog in the right image is the corresponding match for ‘Dog 1’. And similarly for other objects as well.

I can perform instance segmentation on the images and get the object boundaries and the masks for both left and right images, but how do I match the objects in the stereo image pair?

I was thinking of using Siamese Networks to get a similarity score, but I’m pretty clueless on how to proceed with that.

Any help would be great! TIA!

submitted by /u/chinmaygrg


Categories
Misc

Amid CES, NVIDIA Packs Flying, Driving, Gaming Tech News into a Single Week

Flying, driving, gaming, racing… amid the first-ever virtual Consumer Electronics Show this week, NVIDIA-powered technologies spilled out in all directions. In automotive, Chinese automakers SAIC and NIO announced they’ll use NVIDIA DRIVE in future vehicles. In gaming, NVIDIA on Tuesday led off a slew of gaming announcements by revealing the affordable new RTX 3060 GPU. Read article >

The post Amid CES, NVIDIA Packs Flying, Driving, Gaming Tech News into a Single Week appeared first on The Official NVIDIA Blog.

Categories
Misc

I published a step-by-step tutorial on how to save autoencoders with Python/Keras

I published a tutorial where I explain how to save an AutoEncoder with Python + Keras. In particular, in this video you’ll learn how to save/load the Autoencoder class parameters with pickle and the model weights with methods native to the Keras API.
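
For anyone who wants a quick idea of the general pattern before watching, here is a minimal sketch under those assumptions (placeholder class, layer and file names, not the tutorial’s exact code): the constructor parameters are pickled, and the weights are saved with Keras’ native methods.

```python
# Sketch of the save/load pattern (placeholder names, not the tutorial's exact code).
import os
import pickle
from tensorflow import keras

class AutoEncoder:
    def __init__(self, input_dim, latent_dim):
        self.input_dim = input_dim
        self.latent_dim = latent_dim
        inputs = keras.Input(shape=(input_dim,))
        encoded = keras.layers.Dense(latent_dim, activation="relu")(inputs)
        decoded = keras.layers.Dense(input_dim, activation="sigmoid")(encoded)
        self.model = keras.Model(inputs, decoded)

    def save(self, folder):
        os.makedirs(folder, exist_ok=True)
        # Constructor parameters go into a pickle file...
        with open(os.path.join(folder, "parameters.pkl"), "wb") as f:
            pickle.dump({"input_dim": self.input_dim, "latent_dim": self.latent_dim}, f)
        # ...and the weights are saved with the Keras-native method.
        self.model.save_weights(os.path.join(folder, "autoencoder.weights.h5"))

    @classmethod
    def load(cls, folder):
        with open(os.path.join(folder, "parameters.pkl"), "rb") as f:
            params = pickle.load(f)
        autoencoder = cls(**params)
        autoencoder.model.load_weights(os.path.join(folder, "autoencoder.weights.h5"))
        return autoencoder
```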

This video is part of a series called “Generating Sound with Neural Networks”. In this series, you’ll learn how to generate sound from audio files and spectrograms 🎧 🎧 using Variational Autoencoders 🤖 🤖

Here’s the video:


https://www.youtube.com/watch?v=UIC0Irq-Eok&list=PL-wATfeyAMNpEyENTc-tVH5tfLGKtSWPp&index=7

submitted by /u/diabulusInMusica


Categories
Misc

How do I visualize data from my Chat Bot?

I made a chatbot using TensorFlow, from Tech With Tim’s tutorial. I changed it into a Discord bot using Flask. But for my project I want to somehow show ANY DATA, but in visual form: graphs, pie charts, bars. I don’t know how to use TensorBoard to visualize my chatbot data.

This is my code: https://github.com/hootloot/Tensorflow-Question/blob/main/main.py

Thank you

submitted by /u/chopchopstiicks


Categories
Offsites

Recognizing Pose Similarity in Images and Videos

Everyday actions, such as jogging, reading a book, pouring water, or playing sports, can be viewed as a sequence of poses, consisting of the position and orientation of a person’s body. An understanding of poses from images and videos is a crucial step for enabling a range of applications, including augmented reality display, full-body gesture control, and physical exercise quantification. However, a 3-dimensional pose captured in two dimensions in images and videos appears different depending on the viewpoint of the camera. The ability to recognize similarity in 3D pose using only 2D information will help vision systems better understand the world.

In “View-Invariant Probabilistic Embedding for Human Pose” (Pr-VIPE), a spotlight paper at ECCV 2020, we present a new algorithm for human pose perception that recognizes similarity in human body poses across different camera views by mapping 2D body pose keypoints to a view-invariant embedding space. This ability enables tasks such as pose retrieval, action recognition, action video synchronization, and more. Compared to existing models that directly map 2D pose keypoints to 3D pose keypoints, the Pr-VIPE embedding space is (1) view-invariant, (2) probabilistic in order to capture 2D input ambiguity, and (3) does not require camera parameters during training or inference. Trained on data from an in-lab setting, the model works on in-the-wild images out of the box, given a reasonably good 2D pose estimator (e.g., PersonLab, BlazePose, among others). The model is simple, results in compact embeddings, and can be trained (in ~1 day) using 15 CPUs. We have released the code on our GitHub repo.

Pr-VIPE can be directly applied to align videos from different views.

Pr-VIPE
The input to Pr-VIPE is a set of 2D keypoints, from any 2D pose estimator that produces a minimum of 13 body keypoints, and the output is the mean and variance of the pose embedding. The distances between embeddings of 2D poses correlate with their similarities in absolute 3D pose space. Our approach is based on two observations:

  • The same 3D pose may appear very different in 2D as the viewpoint changes.
  • The same 2D pose can be projected from different 3D poses.

The first observation motivates the need for view-invariance. To accomplish this, we define the matching probability, i.e., the likelihood that different 2D poses were projected from the same, or similar 3D poses. The matching probability predicted by Pr-VIPE for matching pose pairs should be higher than for non-matching pairs.

To address the second observation, Pr-VIPE utilizes a probabilistic embedding formulation. Because many 3D poses can project to the same or similar 2D poses, the model input exhibits an inherent ambiguity that is difficult to capture through a deterministic point-to-point mapping in embedding space. Therefore, we map a 2D pose through a probabilistic mapping to an embedding distribution, whose variance we use to represent the uncertainty of the input 2D pose. As an example, in the figure below, the third 2D view of the 3D pose on the left is similar to the first 2D view of a different 3D pose on the right, so we map them into a similar location in the embedding space with large variances.
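
As a rough sketch of what such a probabilistic mapping could look like, an embedding head can output both a mean and a per-dimension variance from the flattened 2D keypoints. The layer sizes, activations and embedding dimensionality below are placeholders, not the exact Pr-VIPE architecture.

```python
# Sketch: a probabilistic embedding head (placeholder sizes, not the exact Pr-VIPE model).
import tensorflow as tf

NUM_KEYPOINTS = 13   # minimum number of 2D body keypoints assumed by the model
EMBEDDING_DIM = 16   # placeholder embedding dimensionality

def build_embedder():
    inputs = tf.keras.Input(shape=(NUM_KEYPOINTS * 2,))           # flattened (x, y) keypoints
    hidden = tf.keras.layers.Dense(256, activation="relu")(inputs)
    hidden = tf.keras.layers.Dense(256, activation="relu")(hidden)
    mean = tf.keras.layers.Dense(EMBEDDING_DIM)(hidden)           # embedding mean
    # Softplus keeps the per-dimension variance positive; it encodes input ambiguity.
    variance = tf.keras.layers.Dense(EMBEDDING_DIM, activation="softplus")(hidden)
    return tf.keras.Model(inputs, [mean, variance])
```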

Pr-VIPE enables vision systems to recognize 2D poses across views. We embed 2D poses using Pr-VIPE such that the embeddings are (1) view-invariant (2D projections of similar 3D poses are embedded close together) and (2) probabilistic. By embedding detected 2D poses, Pr-VIPE enables direct retrieval of pose images from different views, and can also be applied to action recognition and video alignment.

View-Invariance
During training, we use 2D poses from two sources: multi-view images and projections of groundtruth 3D poses. Triplets of 2D poses (anchor, positive, and negative) are selected from a batch, where the anchor and positive are two different projections of the same 3D pose, and the negative is a projection of a non-matching 3D pose. Pr-VIPE then estimates the matching probability of 2D pose pairs from their embeddings.
During training, we push the matching probability of positive pairs to be close to 1 with a positive pairwise loss, in which we minimize the embedding distance between positive pairs. We push the matching probability of negative pairs to be small by maximizing the ratio of the matching probabilities between positive and negative pairs with a triplet ratio loss.
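
A simplified sketch of these two objectives is shown below; the exact formulation and the margin value in the paper may differ, and the matching probability itself can be computed with a sampling-based similarity like the one outlined in the Probabilistic Embedding section that follows.

```python
# Sketch of the training objectives (illustrative; see the paper for the exact formulation).
import tensorflow as tf

def positive_pairwise_loss(anchor_mean, positive_mean):
    # Pull the embedding means of matching (anchor, positive) 2D poses together.
    return tf.reduce_mean(tf.reduce_sum(tf.square(anchor_mean - positive_mean), axis=-1))

def triplet_ratio_loss(p_match_positive, p_match_negative, margin=1.0):
    # Encourage the matching probability of positive pairs to dominate that of
    # negative pairs by a margin, expressed as a ratio in log space.
    log_ratio = tf.math.log(p_match_positive + 1e-8) - tf.math.log(p_match_negative + 1e-8)
    return tf.reduce_mean(tf.maximum(0.0, margin - log_ratio))
```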

Overview of the Pr-VIPE model. During training, we apply three losses (triplet ratio loss, positive pairwise loss, and a prior loss that applies a unit Gaussian prior to our embeddings). During inference, the model maps an input 2D pose to a probabilistic, view-invariant embedding.

Probabilistic Embedding
Pr-VIPE maps a 2D pose to a probabilistic embedding as a multivariate Gaussian distribution using a sampling-based approach for similarity score computation between two distributions. During training, we use a Gaussian prior loss to regularize the predicted distribution.
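
One way to sketch such a sampling-based similarity between two diagonal-Gaussian embeddings is shown below. The per-sample scoring function (a sigmoid of a scaled distance) and the KL form of the prior loss are assumptions for illustration, not necessarily the exact formulation used in the paper.

```python
# Sketch: Monte Carlo matching probability between two diagonal-Gaussian embeddings,
# plus a unit-Gaussian prior loss (illustrative forms, not the exact Pr-VIPE formulas).
import tensorflow as tf

def matching_probability(mean_a, var_a, mean_b, var_b, num_samples=20, scale=1.0):
    embedding_dim = mean_a.shape[-1]
    # Draw samples from each embedding distribution (reparameterization).
    samples_a = mean_a + tf.sqrt(var_a) * tf.random.normal([num_samples, embedding_dim])
    samples_b = mean_b + tf.sqrt(var_b) * tf.random.normal([num_samples, embedding_dim])
    # Score each sample pair, then average to get a probability in [0, 1].
    distances = tf.norm(samples_a - samples_b, axis=-1)
    return tf.reduce_mean(tf.sigmoid(-scale * distances))

def gaussian_prior_loss(mean, var):
    # KL divergence of a diagonal Gaussian from the unit Gaussian prior.
    return 0.5 * tf.reduce_sum(var + tf.square(mean) - 1.0 - tf.math.log(var + 1e-8), axis=-1)
```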

Evaluation
We propose a new cross-view pose retrieval benchmark to evaluate the view-invariance property of the embedding. Given a monocular pose image, cross-view retrieval aims to retrieve the same pose from different views without using camera parameters. The results demonstrate that Pr-VIPE retrieves poses more accurately across views compared to baseline methods in both evaluated datasets (Human3.6M, MPI-INF-3DHP).

Pr-VIPE retrieves poses across different views more accurately relative to the baseline method (3D pose estimation).

Common 3D pose estimation methods (such as the simple baseline used for comparison above, SemGCN, and EpipolarPose, amongst many others), predict 3D poses in camera coordinates, which are not directly view-invariant. Thus, rigid alignment between every query-index pair is required for retrieval using estimated 3D poses, which is computationally expensive due to the need for singular value decomposition (SVD). In contrast, Pr-VIPE embeddings can be directly used for distance computation in Euclidean space, without any post-processing.
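
To illustrate the difference, the sketch below shows the generic Kabsch-style rigid alignment a 3D-pose baseline needs for every query-index pair, next to the direct distance computation available with Pr-VIPE embeddings. This is illustrative code, not the evaluation script.

```python
# Sketch: per-pair rigid alignment needed for retrieval with estimated 3D poses
# (generic Kabsch/Procrustes alignment), versus direct embedding distance.
import numpy as np

def aligned_pose_distance(query_pose, index_pose):
    """query_pose, index_pose: (num_joints, 3) arrays of 3D keypoints."""
    q = query_pose - query_pose.mean(axis=0)
    x = index_pose - index_pose.mean(axis=0)
    # Optimal rotation via SVD (Kabsch algorithm) -- this runs for EVERY query-index pair.
    u, _, vt = np.linalg.svd(x.T @ q)
    d = np.sign(np.linalg.det(u @ vt))
    rotation = u @ np.diag([1.0, 1.0, d]) @ vt
    return np.linalg.norm(q - x @ rotation)

def embedding_distance(query_embedding, index_embedding):
    # Pr-VIPE embeddings can be compared directly in Euclidean space, no alignment needed.
    return np.linalg.norm(query_embedding - index_embedding)
```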

Applications
View-invariant pose embedding can be applied to many image and video related tasks. Below, we show Pr-VIPE applied to cross-view retrieval on in-the-wild images without using camera parameters.


We can retrieve in-the-wild images from different views without using camera parameters by embedding the detected 2D pose using Pr-VIPE. Using the query image (top row), we search for a matching pose from a different camera view and we show the nearest neighbor retrieval (bottom row). This enables us to search for matching poses across camera views more easily.

The same Pr-VIPE model can also be used for video alignment. To do so, we stack Pr-VIPE embeddings within a small time window, and use the dynamic time warping (DTW) algorithm to align video pairs.
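
A small, self-contained sketch of this kind of DTW alignment over per-frame embeddings is shown below; the time-window stacking and other details of the actual pipeline are omitted.

```python
# Sketch: dynamic time warping over per-frame pose embeddings (illustrative, self-contained).
import numpy as np

def dtw_distance(embeddings_a, embeddings_b):
    """embeddings_a: (num_frames_a, dim), embeddings_b: (num_frames_b, dim)."""
    n, m = len(embeddings_a), len(embeddings_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            frame_dist = np.linalg.norm(embeddings_a[i - 1] - embeddings_b[j - 1])
            # Each frame can match, or either sequence can "wait" on the previous frame.
            cost[i, j] = frame_dist + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# The resulting distance can be used directly for nearest-neighbor action recognition.
```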

Manual video alignment is difficult and time-consuming. Here, Pr-VIPE is applied to automatically align videos of the same action repeated from different views.

The video alignment distance calculated via DTW can then be used for action recognition by classifying videos using nearest neighbor search. We evaluate the Pr-VIPE embedding on the Penn Action dataset and demonstrate that using the Pr-VIPE embedding, without fine-tuning on the target dataset, yields highly competitive recognition accuracy. In addition, we show that Pr-VIPE achieves relatively accurate results even using only videos from a single view in the index set.

Pr-VIPE recognizes action across views using pose inputs only, and is comparable to or better than methods using pose only or with additional context information (such as Iqbal et al., Liu and Yuan, Luvizon et al., and Du et al.). When action labels are only available for videos from a single view, Pr-VIPE (1-view only) can still achieve relatively accurate results.

Conclusion
We introduce the Pr-VIPE model for mapping 2D human poses to a view-invariant probabilistic embedding space, and show that the learned embeddings can be directly used for pose retrieval, action recognition, and video alignment. Our cross-view retrieval benchmark can be used to test the view-invariant property of other embeddings. We look forward to hearing about what you can do with pose embeddings!

Acknowledgments
Special thanks to Jiaping Zhao, Liang-Chieh Chen, Long Zhao (Rutgers University), Liangzhe Yuan, Yuxiao Wang, Florian Schroff, Hartwig Adam, and the Mobile Vision team for the wonderful collaboration and support.

Categories
Misc

IM AI: China Automaker SAIC Unveils EV Brand Powered by NVIDIA DRIVE Orin

There’s a new brand of automotive intelligence equipped with the brains — and the battery — to go the distance. SAIC, the largest automaker in China, joined forces with etail giant Alibaba to unveil a new premium EV brand, dubbed IM, or “intelligence in motion.” The long-range electric vehicles will feature AI capabilities powered by NVIDIA DRIVE Orin. Read article >

The post IM AI: China Automaker SAIC Unveils EV Brand Powered by NVIDIA DRIVE Orin appeared first on The Official NVIDIA Blog.

Categories
Misc

Glassdoor Ranks NVIDIA No. 2 in Latest Best Places to Work List

NVIDIA is the second-best place to work in the U.S. according to a ranking released today by Glassdoor. The site’s Best Places to Work in 2021 list rates the 100 best U.S. companies with more than 1,000 employees, based on how their own employees rate career opportunities, company culture and senior management. The survey’s top … Read article >

The post Glassdoor Ranks NVIDIA No. 2 in Latest Best Places to Work List appeared first on The Official NVIDIA Blog.