TensorRT Questions

4

Solved

I know that this question has been asked a lot, but none of the suggestions seem to work, probably since my setup is somewhat different: Ubuntu 22.04 python 3.10.8 tensorflow 2.11.0 cudatoolkit 11....
Becker asked 29/12, 2022 at 21:18

3

I have inference code in TensorRT (with Python). I want to run this code in ROS, but I get the below error when trying to allocate buffers: LogicError: explicit_context_dependent failed: invalid d...
Vendor asked 24/2, 2020 at 9:19
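The error in the excerpt above is the classic symptom of calling PyCUDA/TensorRT from a thread with no active CUDA context, which is common in ROS callback threads. A minimal sketch of the usual fix, assuming PyCUDA is installed and device 0 exists; `init_cuda_context` is a hypothetical helper name, not a ROS or TensorRT API:

```python
def init_cuda_context(device_index=0):
    """Create and activate a CUDA context in the *current* thread.

    'explicit_context_dependent failed: invalid device context' typically
    means no CUDA context is active in the thread allocating buffers, so
    create one here (or `import pycuda.autoinit` in single-threaded scripts).
    """
    import pycuda.driver as cuda  # lazy import: needs a CUDA-capable machine
    cuda.init()
    ctx = cuda.Device(device_index).make_context()
    return ctx  # caller should ctx.pop() when the thread is done with it
```

In a ROS node, this would be called at the start of the thread that allocates buffers and runs inference, with `ctx.pop()` on shutdown.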

3

I was trying to install TensorRT 7.0 on Ubuntu 18.04 via the Debian package (nv-tensorrt-repo-ubuntu1804-cuda10.2-trt7.0.0.11-ga-20191216_1-1_amd64.deb). Followed the documentation https://docs.nvidia.com/deeplear...
Conchoidal asked 28/5, 2020 at 8:53

2

I am trying to install TensorRT in a conda env, and I have cudatoolkit and cudnn installed in my env through conda navigator. I have also updated my pip and setuptools but I get the below error w...
Backstay asked 13/9, 2022 at 12:33

2

Solved

What I have: A trained recurrent neural network in Tensorflow. What I want: A mobile application that can run this network as fast as possible (inference mode only, no training). I believe ther...
Filar asked 9/3, 2018 at 12:37

3

Solved

Our current flow: conversion of tf2->onnx->TensorRT (all 16-, 32- and 8-bit options). Is there an existing tool like https://github.com/lutzroeder/netron (or any other way) to see the outpu...
Propagandist asked 13/1, 2021 at 16:13

5

I installed TensorRT on my VM using the Debian Installation. If I run "dpkg -l | grep TensorRT" I get the expected result: ii graphsurgeon-tf 5.0.2-1+cuda10.0 amd64 GraphSurgeon for TensorRT packa...
Bethannbethanne asked 7/4, 2019 at 10:24

2

I used Nvidia's Transfer Learning Toolkit (TLT) to train and then used tlt-converter to convert the .etlt model into an .engine file. I want to use this .engine file for inference in Python. B...
Viglione asked 11/12, 2019 at 7:29
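A minimal sketch of loading a serialized engine for Python inference, assuming the TensorRT Python bindings are installed; `load_engine` is a hypothetical helper name and the file path is whatever tlt-converter produced:

```python
def load_engine(engine_path):
    """Deserialize a TensorRT .engine file (needs tensorrt + a compatible GPU).

    Note: engines built by tlt-converter are tied to the GPU model and
    TensorRT version they were built with; they are not portable artifacts.
    """
    import tensorrt as trt  # lazy import: only available with TensorRT installed
    logger = trt.Logger(trt.Logger.WARNING)
    runtime = trt.Runtime(logger)
    with open(engine_path, "rb") as f:
        return runtime.deserialize_cuda_engine(f.read())
```

After deserialization, inference proceeds by creating an execution context from the engine and binding input/output buffers (e.g. via PyCUDA).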

1

I am trying to feed a very large image into Triton server. I need to divide the input image into patches and feed the patches one by one into a tensorflow model. The image has a variable size, so t...
Loveliesbleeding asked 26/4, 2021 at 11:7
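The patching step described above can be sketched in pure NumPy; the patch size and the zero-padding policy here are assumptions for illustration, not details from the question:

```python
import numpy as np

def split_into_patches(image, patch=256):
    """Split an HxWxC image into patch x patch tiles, zero-padding the
    right/bottom edges so every tile is full-size (handles variable sizes)."""
    h, w = image.shape[:2]
    pad_h = -h % patch  # padding needed to reach the next multiple of `patch`
    pad_w = -w % patch
    padded = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)))
    tiles = []
    for y in range(0, padded.shape[0], patch):
        for x in range(0, padded.shape[1], patch):
            tiles.append(padded[y:y + patch, x:x + patch])
    return tiles

tiles = split_into_patches(np.zeros((300, 500, 3), dtype=np.uint8), patch=256)
# 300x500 pads to 512x512 -> a 2x2 grid of four 256x256x3 tiles
```

Each tile can then be sent to the Triton model one by one (or batched), with the tile coordinates kept so the outputs can be stitched back together.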

2

Solved

I am trying to speed up a segmentation model (unet-mobilenet-512x512). I converted my TensorFlow model to TensorRT with FP16 precision mode, and the speed is lower than I expected. Before the opti...
Silvertongued asked 7/2, 2021 at 12:20

2

I am currently working with Darknet on Yolov4, with 1 class. I need to export those weights to ONNX format for TensorRT inference. I've tried multiple techniques, using ultralytics to convert or goi...
Chewink asked 1/7, 2020 at 8:50

0

I found that we can optimize a Tensorflow model in several ways. If I am mistaken, please tell me. 1- Using TF-TRT: this API, developed by TensorFlow, integrates TensorRT into Tensorflow an...
Kellerman asked 17/1, 2020 at 10:42

3

I am trying to deploy a trained U-Net with TensorRT. The model was trained using Keras (with Tensorflow as backend). The code is very similar to this one: https://github.com/zhixuhao/unet/blob/mast...
Nameplate asked 17/7, 2019 at 22:57

1

Solved

I just wanted to download TensorRT but I saw there are two different versions, GA and RC. What are the differences between them, and which version should I choose as I use a Windows 10 machine with Gef...
Text asked 1/8, 2019 at 15:15

1

I have two GPUs. My program uses TensorRT and Tensorflow. When I run only the TensorRT part, it is fine. When I run it together with the Tensorflow part, I get an error: [TensorRT] ERROR: engine.cpp (370) -...
Boer asked 6/6, 2019 at 6:26

0

We have a Caffe model that contains: layer { name: "foo" type: "PriorBox" prior_box_param { # ERROR HERE # whatever } # etc } Now, following the code in sampleMNIST I try to import my mode...
Tenedos asked 19/9, 2018 at 3:59

1

Solved

I'm working with the new tf.data.Dataset API and I can't seem to figure out how to perform inference. Ultimately, I want to convert my model to a TensorRT graph and run it on the TX2, and all of th...
Saari asked 13/9, 2018 at 18:27

2

I would like to use NVIDIA TensorRT to run my Tensorflow models. Currently, TensorRT supports Caffe prototxt network descriptor files. I was not able to find source code to convert Tensorflow model...
Castorina asked 14/12, 2016 at 12:9
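This question predates today's tooling; a commonly used path now is TF -> ONNX -> TensorRT. A sketch, assuming a SavedModel in ./saved_model and that the tf2onnx package and the trtexec tool (which ships with TensorRT) are installed; file names are placeholders:

```shell
# Export the TensorFlow SavedModel to ONNX (tf2onnx is a separate pip package)
python -m tf2onnx.convert --saved-model ./saved_model --output model.onnx

# Build a TensorRT engine from the ONNX file; --fp16 enables FP16 kernels
trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```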

© 2022 - 2024 — McMap. All rights reserved.