Commit 49f9f49c authored by sikhin.vc

Merge branch 'develop' into 'master'

Develop

See merge request !1
parents 3a653a91 a08d8233
Subproject commit f70dcc966d3a7db5389425d725f056a9a3899b84
[submodule "3rdparty/pybind11"]
path = 3rdparty/pybind11
url = https://github.com/pybind/pybind11.git
[submodule "3rdparty/gst-python"]
path = 3rdparty/gst-python
url = https://github.com/GStreamer/gst-python.git
# Frequently Asked Questions and Troubleshooting Guide
* [Git clone fails due to Gstreamer repo access error](#faq9)
* [Application fails to work with mp4 stream](#faq0)
* [Ctrl-C does not stop the app during engine file generation](#faq1)
* [Application fails to create gst elements](#faq2)
* [GStreamer debugging](#faq3)
* [Application stuck with no playback](#faq4)
* [Error on setting string field](#faq5)
* [Pipeline unable to perform at real time](#faq6)
* [Triton container problems with multi-GPU setup](#faq7)
* [ModuleNotFoundError: No module named 'pyds'](#faq8)
<a name="faq9"></a>
### Git clone fails due to Gstreamer repo access error
If the following error is encountered while cloning:
```
fatal: unable to access 'https://gitlab.freedesktop.org/gstreamer/common.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
fatal: clone of 'https://gitlab.freedesktop.org/gstreamer/common.git' into submodule path '/opt/nvidia/deepstream/deepstream/sources/ds_python/3rdparty/gst-python/common' failed
```
Temporarily disable git SSL verification by:
```
export GIT_SSL_NO_VERIFY=true
```
before cloning the repo. Unset the variable after cloning.
<a name="faq0"></a>
### Application fails to work with mp4 stream
The deepstream-test1, test2, and test4 apps only work with H264 elementary streams, such as the sample_720p.h264 file provided with the DeepStream SDK. Attempting to use other stream types will result in the following error:
```
Error: gst-stream-error-quark: Internal data stream error. (1): gstbaseparse.c(3611): gst_base_parse_loop (): /GstPipeline:pipeline0/GstH264Parse:h264-parser:
streaming stopped, reason not-negotiated (-4)
```
Deepstream-test3 supports various stream types including mp4 and RTSP. Please see the app README for usage.
<a name="faq1"></a>
### Ctrl-C does not stop the app during engine file generation
This is a limitation of Python signal handling:
https://docs.python.org/3/library/signal.html
"A long-running calculation implemented purely in C (such as regular expression matching on a large body of text) may run uninterrupted for an arbitrary amount of time, regardless of any signals received. The Python signal handlers will be called when the calculation finishes."
To work around this:
1. Use ctrl-z to background the process.
2. Optionally run "jobs" if there are potentially multiple background processes:
```
$ jobs
[1]- Stopped python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264 (wd: /opt/nvidia/deepstream/deepstream-4.0/sources/apps/python/deepstream-test1)
[2]+ Stopped python3 deepstream_test_2.py /opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264
```
3. Kill the background job by number, e.g.:
```
$ kill %1
```
<a name="faq2"></a>
### Application fails to create gst elements
As with the DeepStream SDK, if the application runs into errors and cannot create gst elements, try again after removing the GStreamer cache:
```
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
```
<a name="faq3"></a>
### GStreamer debugging
General GStreamer debugging: set debug level on the command line:
```
GST_DEBUG=<level> python3 <app> [options]
```
<a name="faq4"></a>
### Application stuck with no playback
The application appears to be stuck without any playback activity.
One possible cause is incompatible input type. Some of the sample apps only support H264 elementary streams.
<a name="faq5"></a>
### Error on setting string field
```
terminate called after throwing an instance of 'pybind11::error_already_set'
what(): TypeError: (): incompatible function arguments. The following argument types are supported:
1. (arg0: pyds.<struct, e.g. NvDsEventMsgMeta>, arg1: str) -> None
Invoked with: <pyds.<struct, e.g. NvDsEventMsgMeta> object at 0x7ffa93d2fce0>, <int, e.g. 140710457366960>
```
This can happen if the string field is being set with another string field, e.g.:
```
dstmeta.sensorStr = srcmeta.sensorStr
```
Remember from the String Access section above that reading ```srcmeta.sensorStr``` directly returns an int, not a string.
The proper way to copy the string field in this case is:
```
dstmeta.sensorStr = pyds.get_string(srcmeta.sensorStr)
```
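Putting it together, a minimal hedged sketch (assuming `srcmeta` and `dstmeta` are `pyds.NvDsEventMsgMeta` instances already obtained through the metadata API):
```python
import pyds

# Reading the field yields the underlying C pointer as an int, so convert first:
sensor_str = pyds.get_string(srcmeta.sensorStr)
# Assigning a plain Python str is accepted by the bindings:
dstmeta.sensorStr = sensor_str
```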
<a name="faq6"></a>
### Pipeline unable to perform at real time
```
WARNING: A lot of buffers are being dropped. (13): gstbasesink.c(2902):
gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
```
Answer:
This warning can be thrown from any GStreamer app when pipeline throughput is
low, resulting in late buffers at the sink plugin. This could be a hardware
capability limitation or expensive software calls that hinder pipeline buffer
flow. With Python, such a delay can be introduced in software through the
probe() callbacks a user can leverage for metadata extraction and other
use cases, as demonstrated in our test apps.
Please NOTE:
a) probe() callbacks are synchronous and thus hold the buffer
(info.get_buffer()) from traversing the pipeline until the user returns.
b) Loops inside a probe() callback can be costly in Python.
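As a hedged illustration of point (a), a probe callback that returns quickly keeps buffers flowing (the callback name and `u_data` argument follow the sample apps):
```python
from gi.repository import Gst

def osd_sink_pad_buffer_probe(pad, info, u_data):
    # The buffer is held here until we return, so keep this callback fast:
    # avoid per-pixel Python loops and other long computations.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # ... lightweight metadata extraction only ...
    return Gst.PadProbeReturn.OK
```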
<a name="faq7"></a>
### Triton container problems with multi-GPU setup
The Triton Inference Server plugin currently only supports single-GPU usage.
When running the Docker container, please specify
`--gpus device=<GPU ID>`
e.g.: `--gpus device=0`
<a name="faq8"></a>
### ModuleNotFoundError: No module named 'pyds'
The pyds wheel installs the pyds.so library in the directory where pip packages are stored. Make sure the wheel installation succeeds and that pyds.so is present at the expected path (which should be /home/<user>/.local/lib/python<python-version>/site-packages/pyds.so).
Command to install the pyds wheel is:
```bash
$ pip3 install ./pyds-1.1.0-py3-none*.whl
```
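To verify where pip placed the module, a quick hedged check:
```python
import pyds
print(pyds.__file__)  # should point into the site-packages path above
```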
SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# DeepStream Python Apps
This repository contains Python bindings and sample applications for the [DeepStream SDK](https://developer.nvidia.com/deepstream-sdk).
SDK version supported: 6.1.1
<b>The bindings sources along with build instructions are now available under [bindings](bindings)! </b>
<b>This release comes with operating system upgrades (from Ubuntu 18.04 to Ubuntu 20.04) for DeepStreamSDK 6.1.1 support. This translates to an upgrade of the Python version to 3.8; the [gst-python](3rdparty/gst-python/) version has also been upgraded to 1.16.2!</b>
Download the latest release package complete with bindings and sample applications from the [release section](../../releases).
Please report any issues or bugs on the [DeepStream SDK Forums](https://devtalk.nvidia.com/default/board/209). This enables the DeepStream community to find help at a central location.
- [DeepStream Python Apps](#deepstream-python-apps)
- [Python Bindings](#python-bindings)
- [Sample Applications](#sample-applications)
<a name="metadata_bindings"></a>
## Python Bindings
DeepStream pipelines can be constructed using Gst Python, the GStreamer framework's Python bindings. For accessing DeepStream MetaData,
Python [bindings](bindings) are provided as part of this repository. This module is generated using [Pybind11](https://github.com/pybind/pybind11).
<p align="center">
<img src=".python-app-pipeline.png" alt="bindings pipeline" height="600px"/>
</p>
These bindings support a Python interface to the MetaData structures and functions. Usage of this interface is documented in the [HOW-TO Guide](HOWTO.md) and demonstrated in the sample applications.
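For orientation, here is a minimal hedged sketch of driving a GStreamer pipeline from Gst Python. It uses stock GStreamer elements so it runs without DeepStream; DeepStream elements such as `nvstreammux` or `nvinfer` are created the same way via `Gst.ElementFactory.make()`:

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Build a trivial source -> sink pipeline.
pipeline = Gst.Pipeline.new("example-pipeline")
src = Gst.ElementFactory.make("videotestsrc", "src")
sink = Gst.ElementFactory.make("fakesink", "sink")
src.set_property("num-buffers", 60)  # stop after 60 buffers
pipeline.add(src)
pipeline.add(sink)
src.link(sink)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```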
<a name="sample_applications"></a>
## Sample Applications
Sample applications provided here demonstrate how to work with DeepStream pipelines using Python.
The sample applications require [MetaData Bindings](#metadata_bindings) to work.
To run the sample applications or write your own, please consult the [HOW-TO Guide](HOWTO.md)
<p align="center">
<img src=".test3-app.png" alt="deepstream python app screenshot" height="400px"/>
</p>
We currently provide the following sample applications:
* [deepstream-test1](apps/deepstream-test1) -- 4-class object detection pipeline
* [deepstream-test2](apps/deepstream-test2) -- 4-class object detection, tracking and attribute classification pipeline
* [deepstream-test3](apps/deepstream-test3) -- multi-stream pipeline performing 4-class object detection; now also supports the Triton Inference Server, no-display mode, file-loop, and silent mode
* [deepstream-test4](apps/deepstream-test4) -- msgbroker for sending analytics results to the cloud
* [deepstream-imagedata-multistream](apps/deepstream-imagedata-multistream) -- multi-stream pipeline with access to image buffers
* [deepstream-ssd-parser](apps/deepstream-ssd-parser) -- SSD model inference via Triton server with output parsing in Python
* [deepstream-test1-usbcam](apps/deepstream-test1-usbcam) -- deepstream-test1 pipeline with USB camera input
* [deepstream-test1-rtsp-out](apps/deepstream-test1-rtsp-out) -- deepstream-test1 pipeline with RTSP output
* [deepstream-opticalflow](apps/deepstream-opticalflow) -- optical flow and visualization pipeline with flow vectors returned in NumPy array
* [deepstream-segmentation](apps/deepstream-segmentation) -- segmentation and visualization pipeline with segmentation mask returned in NumPy array
* [deepstream-nvdsanalytics](apps/deepstream-nvdsanalytics) -- multistream pipeline with analytics plugin
* [runtime_source_add_delete](apps/runtime_source_add_delete) -- add/delete source streams at runtime
* [deepstream-imagedata-multistream-redaction](apps/deepstream-imagedata-multistream-redaction) -- multi-stream pipeline with face detection and redaction
* [deepstream-rtsp-in-rtsp-out](apps/deepstream-rtsp-in-rtsp-out) -- multi-stream pipeline with RTSP input/output
* [deepstream-preprocess-test](apps/deepstream-preprocess-test) -- multi-stream pipeline using nvdspreprocess plugin with custom ROIs
* <b>NEW</b> [deepstream-demux-multi-in-multi-out](apps/deepstream-demux-multi-in-multi-out) -- multi-stream pipeline using the nvstreamdemux plugin to generate separate buffer outputs
Detailed application information is provided in each application's subdirectory under [apps](apps).
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
================================================================================
DeepStream SDK Python Bindings
================================================================================
Setup pre-requisites:
- Ubuntu 20.04
- NVIDIA DeepStream SDK 6.1.1
- Python 3.8
- Gst-python
--------------------------------------------------------------------------------
Package Contents
--------------------------------------------------------------------------------
1. DeepStream Python bindings located in bindings dir
with installation instructions in bindings/README.md
2. DeepStream test apps in Python
The following test apps are available:
deepstream-test1
deepstream-test2
deepstream-test3
deepstream-test4
deepstream-imagedata-multistream
deepstream-imagedata-multistream-redaction
deepstream-ssd-parser
deepstream-test1-rtsp-out
deepstream-rtsp-in-rtsp-out
deepstream-test1-usbcam
deepstream-opticalflow
deepstream-segmentation
deepstream-nvdsanalytics
deepstream-preprocess-test
runtime_source_add_delete
--------------------------------------------------------------------------------
Installing Pre-requisites:
--------------------------------------------------------------------------------
DeepStream SDK 6.1.1
--------------------
Download and install from https://developer.nvidia.com/deepstream-download
Python 3.8
----------
Should be already installed with Ubuntu 20.04
Gst-python
----------
Should be already installed on Jetson
If missing, install with the following steps:
$ sudo apt update
$ sudo apt install python3-gi python3-dev python3-gst-1.0 -y
--------------------------------------------------------------------------------
Running the samples
--------------------------------------------------------------------------------
The apps are configured to work from inside the DeepStream SDK 6.1.1 installation.
Clone the deepstream_python_apps repo under <DeepStream ROOT>/sources:
$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
This will create the following directory:
<DeepStream ROOT>/sources/deepstream_python_apps
Follow README in each app's directory to run the app.
Example: running test1 app:
$ cd deepstream-test1
$ python3 deepstream_test_1.py <input .h264 file>
--------------------------------------------------------------------------------
Running the samples inside DeepStream SDK docker
--------------------------------------------------------------------------------
The general steps are:
1. Pull the DeepStream SDK docker image of choice; see the latest DeepStream
   Release Notes at https://developer.nvidia.com/deepstream-sdk for more info.
Note that the deepstream-ssd-parser app requires the Triton docker on x86_64.
2. Run the docker with Python Bindings mapped using the following option:
-v <path to this python bindings directory>:/opt/nvidia/deepstream/deepstream/sources/python
3. Inside the container, install packages required by all samples:
$ sudo apt update
$ sudo apt install python3-gi python3-dev python3-gst-1.0 -y
4. Optionally install additional dependencies required by specific samples.
See README in app's directory for such requirements.
$ sudo apt install python3-opencv
$ sudo apt install python3-numpy
$ sudo apt install libgstrtspserver-1.0-0 gstreamer1.0-rtsp
$ sudo apt install libgirepository1.0-dev
$ sudo apt install gobject-introspection gir1.2-gst-rtsp-server-1.0
5. Run sample apps following directions in each app's README.
--------------------------------------------------------------------------------
Notes:
--------------------------------------------------------------------------------
As with the DeepStream SDK, if the application runs into errors and cannot create gst elements,
try again after removing the GStreamer cache:
rm ${HOME}/.cache/gstreamer-1.0/*
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
import time
from threading import Lock

start_time = time.time()
fps_mutex = Lock()


class GETFPS:
    """Tracks frames-per-second for a single stream."""

    def __init__(self, stream_id):
        global start_time
        self.start_time = start_time
        self.is_first = True
        self.frame_count = 0
        self.stream_id = stream_id

    def update_fps(self):
        end_time = time.time()
        if self.is_first:
            # Start timing from the first frame rather than module load.
            self.start_time = end_time
            self.is_first = False
        else:
            global fps_mutex
            with fps_mutex:
                self.frame_count = self.frame_count + 1

    def get_fps(self):
        # Compute FPS over the window since the last call, then reset the count.
        end_time = time.time()
        with fps_mutex:
            stream_fps = float(self.frame_count / (end_time - self.start_time))
            self.frame_count = 0
        self.start_time = end_time
        return round(stream_fps, 2)

    def print_data(self):
        print('frame_count=', self.frame_count)
        print('start_time=', self.start_time)


class PERF_DATA:
    """Aggregates per-stream FPS counters and prints them periodically."""

    def __init__(self, num_streams=1):
        self.perf_dict = {}
        self.all_stream_fps = {}
        for i in range(num_streams):
            self.all_stream_fps["stream{0}".format(i)] = GETFPS(i)

    def perf_print_callback(self):
        self.perf_dict = {stream_index: stream.get_fps()
                          for (stream_index, stream) in self.all_stream_fps.items()}
        print("\n**PERF: ", self.perf_dict, "\n")
        # Returning True keeps a GLib timeout source registered.
        return True

    def update_fps(self, stream_index):
        self.all_stream_fps[stream_index].update_fps()
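# Example usage (hypothetical; mirrors how the sample apps use this module):
#   perf_data = PERF_DATA(num_streams=2)
#   perf_data.update_fps("stream0")   # call once per frame from a pad probe
#   GLib.timeout_add(5000, perf_data.perf_print_callback)  # report every 5 s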
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
import gi
import sys
gi.require_version('Gst', '1.0')
from gi.repository import Gst


def bus_call(bus, message, loop):
    t = message.type
    if t == Gst.MessageType.EOS:
        sys.stdout.write("End-of-stream\n")
        loop.quit()
    elif t == Gst.MessageType.WARNING:
        err, debug = message.parse_warning()
        sys.stderr.write("Warning: %s: %s\n" % (err, debug))
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        sys.stderr.write("Error: %s: %s\n" % (err, debug))
        loop.quit()
    return True
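# Example usage (hypothetical; `pipeline` is an existing Gst.Pipeline):
#   loop = GLib.MainLoop()
#   bus = pipeline.get_bus()
#   bus.add_signal_watch()
#   bus.connect("message", bus_call, loop)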
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
import platform
import sys


def is_aarch64():
    return platform.uname()[4] == 'aarch64'


sys.path.append('/opt/nvidia/deepstream/deepstream/lib')
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
import ctypes
import sys
sys.path.append('/opt/nvidia/deepstream/deepstream/lib')


def long_to_uint64(l):
    # Mask to 64 bits and reinterpret as an unsigned 64-bit integer.
    value = ctypes.c_uint64(l & 0xffffffffffffffff).value
    return value
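# Example (hypothetical value): masking a negative Python int to 64 bits
# yields its unsigned two's-complement representation:
#   >>> long_to_uint64(-1)
#   18446744073709551615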
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
Prerequisites:
- DeepStreamSDK 6.1.1
- Python 3.8
- Gst-python
To run:
$ python3 deepstream_demux_multi_in_multi_out.py -i <uri1> [uri2] ... [uriN]
e.g.
$ python3 deepstream_demux_multi_in_multi_out.py -i file:///home/ubuntu/video1.mp4 file:///home/ubuntu/video2.mp4
$ python3 deepstream_demux_multi_in_multi_out.py -i rtsp://127.0.0.1/video1 rtsp://127.0.0.1/video2
This document describes the sample deepstream_demux_multi_in_multi_out application.
This sample builds on top of the deepstream-test3 sample to demonstrate how to:
* Use multiple sources in the pipeline.
* Use `nvstreamdemux` to split batches and output separate buffers/streams.
* `nvstreamdemux` helps when separate output is required for each input stream.
Refer to the deepstream-test1 sample documentation for an example of simple
single-stream inference, bounding-box overlay, and rendering.
Nvstreamdemux reference - https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvstreamdemux.html
This sample accepts one or more H.264/H.265 video streams as input. It creates
a source bin for each input and connects the bins to an instance of the
"nvstreammux" element, which forms the batch of frames. The batch of
frames is fed to "nvinfer" for batched inferencing. "nvstreamdemux" then demuxes
the batched frames into individual buffers, creating a separate Gst Buffer for
each frame in the batch. For each input, a separate branch is created with the
following elements in series:
`nvstreamdemux -> queue -> nvvidconv -> nvosd -> nveglglessink`
So for two inputs, two separate output windows are created; likewise, N inputs
produce N output windows.
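For reference, a hedged sketch of how one demux branch might be linked in
Python (assuming `nvstreamdemux` and the branch's `queue` element have already
been created and added to the pipeline):

    # Request a per-stream src pad from nvstreamdemux for stream 0 ...
    demuxsrcpad = nvstreamdemux.get_request_pad("src_0")
    # ... and link it to the head of this stream's branch.
    queuesinkpad = queue.get_static_pad("sink")
    demuxsrcpad.link(queuesinkpad)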
The "width" and "height" properties must be set on the stream-muxer to set the
output resolution. If the input frame resolution is different from
stream-muxer's "width" and "height", the input frame will be scaled to muxer's
output resolution.
The stream-muxer waits for a user-defined timeout before forming the batch. The
timeout is set using the "batched-push-timeout" property. If the complete batch
is formed before the timeout is reached, the batch is pushed to the downstream
element. If the timeout is reached before the complete batch can be formed
(which can happen in case of rtsp sources), the batch is formed from the
available input buffers and pushed. Ideally, the timeout of the stream-muxer
should be set based on the framerate of the fastest source. It can also be set
to -1 to make the stream-muxer wait infinitely.
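As a hedged illustration, these muxer properties are typically set from Python
as follows (element name per the sample apps; values are examples):

    streammux.set_property("width", 1920)
    streammux.set_property("height", 1080)
    streammux.set_property("batch-size", number_sources)
    streammux.set_property("batched-push-timeout", 4000000)  # microseconds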
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
# Following properties are mandatory when engine files are not specified:
# int8-calib-file(Only in INT8)
# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
# ONNX: onnx-file
#
# Mandatory properties for detectors:
# num-detected-classes
#
# Optional properties for detectors:
# cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
# custom-lib-path
# parse-bbox-func-name
#
# Mandatory properties for classifiers:
# classifier-threshold, is-classifier
#
# Optional properties for classifiers:
# classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
# input-object-min-width, input-object-min-height, input-object-max-width,
# input-object-max-height
#
# Following properties are always recommended:
# batch-size(Default=1)
#
# Other optional properties:
# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
# mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
# custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
cluster-mode=2
[class-attrs-all]
pre-cluster-threshold=0.2
topk=20
nms-iou-threshold=0.5
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
Prerequisites:
- DeepStreamSDK 6.1.1
- Python 3.8
- Gst-python
- NumPy package
- OpenCV package
- GstRtspServer
To install required packages:
$ sudo apt update
$ sudo apt install python3-numpy python3-opencv -y
Installing GstRtspServer and introspection typelib
===================================================
$ sudo apt update
$ sudo apt-get install libgstrtspserver-1.0-0 gstreamer1.0-rtsp
For gst-rtsp-server (and other GStreamer libraries) to be accessible in
Python through gi.require_version(), it needs to be built with
gobject-introspection enabled (libgstrtspserver-1.0-0 already is).
However, we still need to install the introspection typelib package:
$ sudo apt-get install libgirepository1.0-dev
$ sudo apt-get install gobject-introspection gir1.2-gst-rtsp-server-1.0
Download Peoplenet model:
Please follow the instructions in the README.md located at
/opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_models/README.md
to download the latest supported PeopleNet model (V2.5 for this release).
To run:
$ python3 deepstream_imagedata-multistream_redaction.py -i <uri1> [uri2] ... [uriN] -c {H264,H265} -b BITRATE
For command line argument details:
$ python3 deepstream_imagedata-multistream_redaction.py -h
e.g.
$ python3 deepstream_imagedata-multistream_redaction.py -i file:///home/ubuntu/video1.mp4 file:///home/ubuntu/video2.mp4 -c H264
This document describes the sample deepstream-imagedata-multistream-redaction application.
This sample builds on top of the deepstream-imagedata-multistream sample to demonstrate how to:
* Perform face redaction and store the detected objects (faces) as cropped
  images in an "out_crops" dir in the present working directory
* Access image data in a multistream source
* Modify the images in-place. Changes made to the buffer will be reflected
  downstream, but color format changes, resolution changes, and NumPy
  transpose operations are not permitted.
* Make a copy of the image, modify it, and save it to a file. These changes
  are made on the copy of the image and will not be seen downstream.
* Extract the stream metadata and image data, which contain useful
  information about the frames in the batched buffer.
* Annotate detected objects regardless of confidence level
* Use OpenCV to crop the image around a detected object (face class) and save it to file.
* Use multiple sources in the pipeline.
* Use a uridecodebin so that any type of input (e.g. RTSP/File), any GStreamer
  supported container format, and any codec can be used as input.
* Configure the stream-muxer to generate a batch of frames and infer on the
  batch for better resource utilization.
* Stream output over RTSP
NOTE:
- For x86, only CUDA unified memory is supported.
- Only RGBA color format is supported for access from Python. Color conversion
is added in the pipeline for this reason.
This sample accepts one or more H.264/H.265 video streams as input. It creates
a source bin for each input and connects the bins to an instance of the
"nvstreammux" element, which forms the batch of frames. The batch of
frames is fed to "nvinfer" for batched inferencing. The batched buffer is
composited into a 2D tile array using "nvmultistreamtiler." The rest of the
pipeline is similar to the deepstream-test3 and deepstream-imagedata sample.
The "width" and "height" properties must be set on the stream-muxer to set the
output resolution. If the input frame resolution is different from
stream-muxer's "width" and "height", the input frame will be scaled to muxer's
output resolution.
The stream-muxer waits for a user-defined timeout before forming the batch. The
timeout is set using the "batched-push-timeout" property. If the complete batch
is formed before the timeout is reached, the batch is pushed to the downstream
element. If the timeout is reached before the complete batch can be formed
(which can happen in case of rtsp sources), the batch is formed from the
available input buffers and pushed. Ideally, the timeout of the stream-muxer
should be set based on the framerate of the fastest source. It can also be set
to -1 to make the stream-muxer wait infinitely.
The "nvmultistreamtiler" composite streams based on their stream-ids in
row-major order (starting from stream 0, left to right across the top row, then
across the next row, etc.).
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=tlt_encode
tlt-encoded-model=../../../../samples/models/tao_pretrained_models/peopleNet/V2.5/resnet34_peoplenet_int8.etlt
labelfile-path=../../../../samples/models/tao_pretrained_models/peopleNet/V2.5/labels.txt
model-engine-file=../../../../samples/models/tao_pretrained_models/peopleNet/V2.5/resnet34_peoplenet_int8.etlt_b1_gpu0_int8.engine
int8-calib-file=../../../../samples/models/tao_pretrained_models/peopleNet/V2.5/resnet34_peoplenet_int8.txt
input-dims=3;544;960;0
uff-input-blob-name=input_1
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=3
cluster-mode=1
interval=0
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
[class-attrs-all]
pre-cluster-threshold=0.4
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
eps=0.7
minBoxes=1
[class-attrs-1]
# disable bag detection
pre-cluster-threshold=1.0
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
Prerequisites:
- DeepStreamSDK 6.1.1
- Python 3.8
- Gst-python
- NumPy package
- OpenCV package
To install required packages:
$ sudo apt update
$ sudo apt install python3-numpy python3-opencv -y
To run:
$ python3 deepstream_imagedata-multistream.py <uri1> [uri2] ... [uriN] <FOLDER NAME TO SAVE FRAMES>
e.g.
$ python3 deepstream_imagedata-multistream.py file:///home/ubuntu/video1.mp4 file:///home/ubuntu/video2.mp4 frames
$ python3 deepstream_imagedata-multistream.py rtsp://127.0.0.1/video1 rtsp://127.0.0.1/video2 frames
This document describes the sample deepstream-imagedata-multistream application.
This sample builds on top of the deepstream-test3 sample to demonstrate how to:
* Access image data in a multistream source
* Modify the images in-place. Changes made to the buffer will be reflected
  downstream, but color format changes, resolution changes, and NumPy
  transpose operations are not permitted.
* Make a copy of the image, modify it, and save it to a file. These changes
  are made on the copy of the image and will not be seen downstream.
* Extract the stream metadata and image data, which contain useful
  information about the frames in the batched buffer.
* Annotate detected objects within a certain confidence interval
* Use OpenCV to draw bboxes on the image and save it to file.
* Use multiple sources in the pipeline.
* Use a uridecodebin so that any type of input (e.g. RTSP/File), any GStreamer
supported container format, and any codec can be used as input.
* Configure the stream-muxer to generate a batch of frames and infer on the
batch for better resource utilization.
NOTE:
- For x86, only CUDA unified memory is supported.
- Only RGBA color format is supported for access from Python. Color conversion
is added in the pipeline for this reason.
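A hedged sketch of the image access pattern (inside a pad probe, where
`gst_buffer` and `frame_meta` come from iterating the batch metadata):

    import numpy as np
    import pyds

    # Retrieve the frame as a NumPy array referencing the RGBA buffer.
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    # In-place edits to n_frame are seen downstream; work on a copy
    # when saving to file.
    frame_copy = np.array(n_frame, copy=True, order='C')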
This sample accepts one or more H.264/H.265 video streams as input. It creates
a source bin for each input and connects the bins to an instance of the
"nvstreammux" element, which forms the batch of frames. The batch of
frames is fed to "nvinfer" for batched inferencing. The batched buffer is
composited into a 2D tile array using "nvmultistreamtiler." The rest of the
pipeline is similar to the deepstream-test3 and deepstream-imagedata sample.
The "width" and "height" properties must be set on the stream-muxer to set the
output resolution. If the input frame resolution is different from
stream-muxer's "width" and "height", the input frame will be scaled to muxer's
output resolution.
The stream-muxer waits for a user-defined timeout before forming the batch. The
timeout is set using the "batched-push-timeout" property. If the complete batch
is formed before the timeout is reached, the batch is pushed to the downstream
element. If the timeout is reached before the complete batch can be formed
(which can happen in case of rtsp sources), the batch is formed from the
available input buffers and pushed. Ideally, the timeout of the stream-muxer
should be set based on the framerate of the fastest source. It can also be set
to -1 to make the stream-muxer wait infinitely.
The "nvmultistreamtiler" composite streams based on their stream-ids in
row-major order (starting from stream 0, left to right across the top row, then
across the next row, etc.).
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
# Following properties are mandatory when engine files are not specified:
# int8-calib-file(Only in INT8)
# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
# ONNX: onnx-file
#
# Mandatory properties for detectors:
# num-detected-classes
#
# Optional properties for detectors:
# cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
# custom-lib-path
# parse-bbox-func-name
#
# Mandatory properties for classifiers:
# classifier-threshold, is-classifier
#
# Optional properties for classifiers:
# classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
# input-object-min-width, input-object-min-height, input-object-max-width,
# input-object-max-height
#
# Following properties are always recommended:
# batch-size(Default=1)
#
# Other optional properties:
# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
# mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
# custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=1
[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.7
minBoxes=1
#Use the config params below for dbscan clustering mode
[class-attrs-all]
detected-min-w=4
detected-min-h=4
minBoxes=3
## Per class configurations
[class-attrs-0]
pre-cluster-threshold=0.05
eps=0.7
dbscan-min-score=0.95
[class-attrs-1]
pre-cluster-threshold=0.05
eps=0.7
dbscan-min-score=0.5
[class-attrs-2]
pre-cluster-threshold=0.1
eps=0.6
dbscan-min-score=0.95
[class-attrs-3]
pre-cluster-threshold=0.05
eps=0.7
dbscan-min-score=0.5
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
Prerequisites:
- DeepStreamSDK 6.1.1
- Python 3.8
- Gst-python
To run:
$ python3 deepstream_nvdsanalytics.py <uri1> [uri2] ... [uriN]
e.g.
$ python3 deepstream_nvdsanalytics.py file:///home/ubuntu/video1.mp4 file:///home/ubuntu/video2.mp4
$ python3 deepstream_nvdsanalytics.py rtsp://127.0.0.1/video1 rtsp://127.0.0.1/video2
This document describes the sample deepstream-nvdsanalytics application.
This sample builds on top of the deepstream-test3 sample to demonstrate how to:
* Use multiple sources in the pipeline.
* Use a uridecodebin so that any type of input (e.g. RTSP/File), any GStreamer
supported container format, and any codec can be used as input.
* Configure the stream-muxer to generate a batch of frames and infer on the
batch for better resource utilization.
* Configure the tracker (referred to as nvtracker in this sample), which
  uses the config file dsnvanalytics_tracker_config.txt
* Configure the nvdsanalytics plugin (referred to as nvanalytics in this
  sample), which uses the config file config_nvdsanalytics.txt
* Extract the stream metadata, which contains useful information about the
  objects and frames in the batched buffer (see the sketch below).
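A hedged sketch of that extraction inside a pad probe (`frame_meta` comes
from iterating the batch metadata):

    l_user = frame_meta.frame_user_meta_list
    while l_user:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        if user_meta.base_meta.meta_type == pyds.nvds_get_user_meta_type(
                "NVIDIA.DSANALYTICSFRAME.USER_META"):
            analytics_meta = pyds.NvDsAnalyticsFrameMeta.cast(user_meta.user_meta_data)
            print("Objects in ROIs:", analytics_meta.objInROIcnt)
        l_user = l_user.next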
This sample accepts one or more H.264/H.265 video streams as input. It creates
a source bin for each input and connects the bins to an instance of the
"nvstreammux" element, which forms the batch of frames. The batch of
frames is fed to "nvinfer" for batched inferencing. The batched buffer is
used as input for "nvtracker", which adds object tracking; the output is then
fed into the "nvdsanalytics" element, which runs analytics algorithms on
these objects. This output is then composited into a 2D tile array using
"nvmultistreamtiler." The rest of the pipeline is similar to the
deepstream-test3 sample.
The "width" and "height" properties must be set on the stream-muxer to set the
output resolution. If the input frame resolution is different from
stream-muxer's "width" and "height", the input frame will be scaled to muxer's
output resolution.
The stream-muxer waits for a user-defined timeout before forming the batch. The
timeout is set using the "batched-push-timeout" property. If the complete batch
is formed before the timeout is reached, the batch is pushed to the downstream
element. If the timeout is reached before the complete batch can be formed
(which can happen in case of rtsp sources), the batch is formed from the
available input buffers and pushed. Ideally, the timeout of the stream-muxer
should be set based on the framerate of the fastest source. It can also be set
to -1 to make the stream-muxer wait infinitely.
The "nvmultistreamtiler" composite streams based on their stream-ids in
row-major order (starting from stream 0, left to right across the top row, then
across the next row, etc.).
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
# The values in the config file are overridden by values set through GObject
# properties.
[property]
enable=1
#Width height used for configuration to which below configs are configured
config-width=1920
config-height=1080
#osd-mode 0: Don't display any lines, rois and text
#         1: Display only lines, rois and static text i.e. labels
#         2: Display all info from 1 plus information about counts
osd-mode=2
#Set OSD font size that has to be displayed
display-font-size=12
## Per stream configuration
[roi-filtering-stream-0]
#enable or disable following feature
enable=1
#ROI to filter select objects, and remove from meta data
roi-RF=295;643;579;634;642;913;56;828
#remove objects in the ROI
inverse-roi=0
class-id=-1
## Per stream configuration
[roi-filtering-stream-2]
#enable or disable following feature
enable=1
#ROI to filter select objects, and remove from meta data
roi-RF=295;643;579;634;642;913;56;828
#remove objects in the ROI
inverse-roi=1
class-id=0
[overcrowding-stream-1]
enable=1
roi-OC=295;643;579;634;642;913;56;828
#no of objects that will trigger OC
object-threshold=2
class-id=-1
[line-crossing-stream-0]
enable=1
#Label;direction;lc
#line-crossing-Entry=1072;911;1143;1058;944;1020;1297;1020;
line-crossing-Exit=789;672;1084;900;851;773;1203;732
class-id=0
#extended when 0- only counts crossing on the configured Line
# 1- assumes extended Line crossing counts all the crossing
extended=0
#LC modes supported:
#loose : counts all crossing without strong adherence to direction
#balanced: Strict direction adherence expected compared to mode=loose
#strict : Strict direction adherence expected compared to mode=balanced
mode=loose
[line-crossing-stream-1]
enable=1
#Label;direction;lc
#line-crossing-Entry=1072;911;1143;1058;944;1020;1297;1020;
line-crossing-Exit=789;672;1084;900;851;773;1203;732
class-id=0
#extended when 0- only counts crossing on the configured Line
# 1- assumes extended Line crossing counts all the crossing
extended=0
#LC modes supported:
#loose : counts all crossing without strong adherence to direction
#balanced: Strict direction adherence expected compared to mode=loose
#strict : Strict direction adherence expected compared to mode=balanced
mode=loose
[direction-detection-stream-0]
enable=1
#Label;direction;
direction-South=284;840;360;662;
direction-North=1106;622;1312;701;
class-id=0
[direction-detection-stream-1]
enable=1
#Label;direction;
direction-South=284;840;360;662;
direction-North=1106;622;1312;701;
class-id=0
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
# The values in the config file are overridden by values set through GObject
# properties.
[property]
#Width height used for configuration to which below configs are configured
enable=1
config-width=1920
config-height=1080
## Per stream configuration
[roi-filtering-stream-0]
#enable or disable following feature
enable=1
#ROI to filter select objects, and remove from meta data
#roi-RF=769;798;1046;772;1204;884;800;951
roi-C02=673;355;1198;365;1293;708;498;731;
#roi-RF2=769;798;1046;772;1204;884;800;951
#remove objects in the ROI
inverse-roi=0
class-id=-1
## Per stream configuration
[roi-filtering-stream-2]
#enable or disable following feature
enable=1
#ROI to filter select objects, and remove from meta data
roi-RF4=769;798;1046;772;1204;884;800;951
#remove objects in the ROI
inverse-roi=1
class-id=0
[overcrowding-stream-1]
enable=0
roi-OC=673;355;1198;365;1293;708;498;731;
#time threshold after which OC event will be triggered
time-threshold=2000
#no of objects that will trigger OC
object-threshold=2
class-id=-1
[line-crossing-stream-1]
enable=1
#Label;direction;lc
#line-crossing-Entry=772;799;819;946;623;952;1061;926
line-crossing-Entry=936;464;954;183;646;419;1231;420
#line-crossing-Exit=0;0;0;0;0;0;0;0;0
class-id=0
[direction-detection-stream-0]
enable=1
#Label;direction;
#direction-TowardsExit=200;500;200;1000
#direction-South=400;500;400;1050
direction-South=250;206;118;306;
direction-North=379;313;455;218;
#This will overwrite prev North config.
#direction-East=600;500;600;1050
class-id=0
[direction-detection-stream-1]
enable=1
#Label;direction;
#direction-TowardsExit=200;500;200;1000
#direction-South=400;500;400;1050
direction-South=250;206;118;306;
direction-North=379;313;455;218;
#This will overwrite prev North config.
#direction-East=600;500;600;1050
class-id=0
%YAML:1.0
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
BaseConfig:
  minDetectorConfidence: 0   # If the confidence of a detector bbox is lower than this, then it won't be considered for tracking
TargetManagement:
  enableBboxUnClipping: 1    # In case the bbox is likely to be clipped by image border, unclip bbox
  maxTargetsPerStream: 150   # Max number of targets to track per stream. Recommended to set >10. Note: this value should account for the targets being tracked in shadow mode as well. Max value depends on the GPU memory capacity

  # [Creation & Termination Policy]
  minIouDiff4NewTarget: 0.5  # If the IOU between the newly detected object and any of the existing targets is higher than this threshold, this newly detected object will be discarded.
  minTrackerConfidence: 0.2  # If the confidence of an object tracker is lower than this on the fly, then it will be tracked in shadow mode. Valid Range: [0.0, 1.0]
  probationAge: 3            # If the target's age exceeds this, the target will be considered to be valid.
  maxShadowTrackingAge: 30   # Max length of shadow tracking. If the shadowTrackingAge exceeds this limit, the tracker will be terminated.
  earlyTerminationAge: 1     # If the shadowTrackingAge reaches this threshold while in TENTATIVE period, the target will be terminated prematurely.

TrajectoryManagement:
  useUniqueID: 0   # Use 64-bit long Unique ID when assigning tracker ID. Default is [true]

DataAssociator:
  dataAssociatorType: 0       # the type of data associator among { DEFAULT= 0 }
  associationMatcherType: 0   # the type of matching algorithm among { GREEDY=0, GLOBAL=1 }
  checkClassMatch: 1          # If checked, only the same-class objects are associated with each other. Default: true

  # [Association Metric: Thresholds for valid candidates]
  minMatchingScore4Overall: 0.0           # Min total score
  minMatchingScore4SizeSimilarity: 0.6    # Min bbox size similarity score
  minMatchingScore4Iou: 0.0               # Min IOU score
  minMatchingScore4VisualSimilarity: 0.7  # Min visual similarity score

  # [Association Metric: Weights]
  matchingScoreWeight4VisualSimilarity: 0.6  # Weight for the visual similarity (in terms of correlation response ratio)
  matchingScoreWeight4SizeSimilarity: 0.0    # Weight for the Size-similarity score
  matchingScoreWeight4Iou: 0.4               # Weight for the IOU score

StateEstimator:
  stateEstimatorType: 1  # the type of state estimator among { DUMMY=0, SIMPLE=1, REGULAR=2 }

  # [Dynamics Modeling]
  processNoiseVar4Loc: 2.0           # Process noise variance for bbox center
  processNoiseVar4Size: 1.0          # Process noise variance for bbox size
  processNoiseVar4Vel: 0.1           # Process noise variance for velocity
  measurementNoiseVar4Detector: 4.0  # Measurement noise variance for detector's detection
  measurementNoiseVar4Tracker: 16.0  # Measurement noise variance for tracker's localization

VisualTracker:
  visualTrackerType: 1  # the type of visual tracker among { DUMMY=0, NvDCF=1 }

  # [NvDCF: Feature Extraction]
  useColorNames: 1                  # Use ColorNames feature
  useHog: 0                         # Use Histogram-of-Oriented-Gradient (HOG) feature
  featureImgSizeLevel: 2            # Size of a feature image. Valid range: {1, 2, 3, 4, 5}, from the smallest to the largest
  featureFocusOffsetFactor_y: -0.2  # The offset for the center of hanning window relative to the feature height. The center of hanning window would move by (featureFocusOffsetFactor_y*featureMatSize.height) in vertical direction

  # [NvDCF: Correlation Filter]
  filterLr: 0.075              # learning rate for DCF filter in exponential moving average. Valid Range: [0.0, 1.0]
  filterChannelWeightsLr: 0.1  # learning rate for the channel weights among feature channels. Valid Range: [0.0, 1.0]
  gaussianSigma: 0.75          # Standard deviation for Gaussian for desired response when creating DCF filter [pixels]
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
# Following properties are mandatory when engine files are not specified:
# int8-calib-file(Only in INT8)
# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
# ONNX: onnx-file
#
# Mandatory properties for detectors:
# num-detected-classes
#
# Optional properties for detectors:
# cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
# custom-lib-path
# parse-bbox-func-name
#
# Mandatory properties for classifiers:
# classifier-threshold, is-classifier
#
# Optional properties for classifiers:
# classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
# input-object-min-width, input-object-min-height, input-object-max-width,
# input-object-max-height
#
# Following properties are always recommended:
# batch-size(Default=1)
#
# Other optional properties:
# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
# mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
# custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1
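With net-scale-factor=0.0039215697906911373 (~1/255) and no offsets or mean file set,
nvinfer normalizes each input pixel to [0, 1] before inference, following the documented
formula y = net-scale-factor * (x - mean). A sketch of the equivalent NumPy computation:

    import numpy as np

    NET_SCALE_FACTOR = 0.0039215697906911373  # ~= 1/255
    OFFSETS = np.zeros(3, dtype=np.float32)   # no offsets/mean-file configured above

    def normalize(frame_u8):
        # frame_u8: HxWx3 uint8 frame; result matches nvinfer's per-channel
        # preprocessing y = net-scale-factor * (x - mean)
        return NET_SCALE_FACTOR * (frame_u8.astype(np.float32) - OFFSETS)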
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
# Mandatory properties for the tracker:
# tracker-width
# tracker-height: needs to be multiple of 6 for NvDCF
# gpu-id
# ll-lib-file: path to low-level tracker lib
# ll-config-file: required for NvDCF, optional for KLT and IOU
#
[tracker]
tracker-width=640
tracker-height=384
gpu-id=0
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_perf.yml
#enable-past-frame=1
enable-batch-process=1
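In the Python sample apps, a [tracker] section like this is typically read with
configparser and applied as properties on the "nvtracker" element. A minimal sketch
(the config file name is assumed):

    import configparser
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)
    tracker = Gst.ElementFactory.make("nvtracker", "tracker")

    config = configparser.ConfigParser()
    config.read("dstest_tracker_config.txt")  # hypothetical file holding the [tracker] section above
    for key, value in config["tracker"].items():
        if key in ("tracker-width", "tracker-height", "gpu-id", "enable-batch-process"):
            tracker.set_property(key, int(value))  # integer-valued properties
        else:
            tracker.set_property(key, value)       # ll-lib-file, ll-config-file (strings)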
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
Prerequisites:
- DeepStreamSDK 6.1.1
- Python 3.8
- Gst-python
- NumPy package
- OpenCV package
To install required packages:
$ sudo apt update
$ sudo apt install python3 python3-opencv python3-numpy python3-gst-1.0 -y
To run:
$ python3 deepstream-opticalflow.py <uri1> [uri2] ... [uriN] <output_folder>
e.g.
$ python3 deepstream-opticalflow.py file:///opt/nvidia/deepstream/deepstream-<version>/samples/streams/sample_720p.mp4 output
This document describes the sample deepstream-opticalflow application.
Optical Flow (nvof gstreamer plugin):
NVIDIA GPUs, starting with the Turing generation (and the Jetson Xavier generation),
contain a hardware accelerator for computing optical flow. Optical flow vectors are
useful in various use cases such as object detection and tracking, video frame-rate
up-conversion, depth estimation, stitching, and so on.
The gst-nvof plugin collects a pair of NV12 images and passes them to the low-level
optical flow library. The low-level library returns a map of flow vectors between
the two frames as its output. The map of flow vectors is encapsulated in an
NvOpticalFlowMeta structure and is added as user meta to each frame in the batch
using the nvds_add_user_meta_to_frame() function.
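On the Python side, this per-frame user meta can be read back in a buffer-pad probe
through the pyds bindings. The following is a minimal sketch along the lines of the
sample app (probe registration omitted; API names as used by the sample):

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst
    import pyds

    def of_probe(pad, info, u_data):
        gst_buffer = info.get_buffer()
        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            l_user = frame_meta.frame_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_OPTICAL_FLOW_META:
                    of_meta = pyds.NvDsOpticalFlowMeta.cast(user_meta.user_meta_data)
                    # rows x cols grid of (x, y) flow vectors, one per 4x4 pixel block
                    flow = pyds.get_optical_flow_vectors(of_meta)
                    flow = flow.reshape(of_meta.rows, of_meta.cols, 2)
                l_user = l_user.next
            l_frame = l_frame.next
        return Gst.PadProbeReturn.OK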
Optical Flow Visualization (nvofvisual gstreamer plugin):
The Gst-nvofvisual plugin is useful for visualizing motion vector data.
The visualization method is similar to the OpenCV reference source code in:
https://github.com/opencv/opencv/blob/master/samples/gpu/optical_flow.cpp
The plugin computes the magnitude and direction of optical flow from the
two-channel array of flow vectors.
It then visualizes the angle (direction) of flow by hue and the distance (magnitude)
of flow by value in the Hue-Saturation-Value (HSV) color representation.
The saturation of the HSV representation is always set to the maximum of 255 for optimal visibility.
This sample creates instances of the "nvof" and "nvofvisual" gstreamer elements:
1) The nvof element generates the MV (motion vector) data and attaches it as
user meta data.
2) The nvofvisual element visualizes the MV data using a pre-defined
color-wheel matrix.
3) The app then obtains the flow vectors and demonstrates visualization of these
flow vectors using OpenCV, as sketched below. The obtained image intentionally differs
from the visualization plugin output in color, to demonstrate the difference.
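Step 3 can be reproduced with plain NumPy/OpenCV, mirroring the OpenCV reference
linked above. A minimal sketch (the sample's exact color mapping may differ):

    import cv2
    import numpy as np

    def visualize_flow(flow):
        # flow: (rows, cols, 2) array of per-block (x, y) motion vectors
        fx = flow[..., 0].astype(np.float32)
        fy = flow[..., 1].astype(np.float32)
        mag, ang = cv2.cartToPolar(fx, fy)
        hsv = np.zeros((flow.shape[0], flow.shape[1], 3), dtype=np.uint8)
        hsv[..., 0] = (ang * 180 / np.pi / 2).astype(np.uint8)  # hue <- direction
        hsv[..., 1] = 255                                       # saturation fixed at max
        hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)  # value <- magnitude
        return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)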
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
Prerequisites:
- DeepStreamSDK 6.1.1
- Python 3.8
- Gst-python
- GstRtspServer
Installing GstRtspServer and introspection typelib
===================================================
$ sudo apt update
$ sudo apt install python3-gi python3-dev python3-gst-1.0 -y
$ sudo apt-get install libgstrtspserver-1.0-0 gstreamer1.0-rtsp
For gst-rtsp-server (and other GStreamer libraries) to be accessible in
Python through gi.require_version(), they must be built with
gobject-introspection enabled (libgstrtspserver-1.0-0 already is).
We still need to install the introspection typelib packages:
$ sudo apt-get install libgirepository1.0-dev
$ sudo apt-get install gobject-introspection gir1.2-gst-rtsp-server-1.0
To run:
$ python3 deepstream_preprocess_test.py -i <uri1> [uri2] ... [uriN] [-c {H264,H265}] [-b BITRATE]
e.g.
$ python3 deepstream_preprocess_test.py -i file:///home/ubuntu/video1.mp4 file:///home/ubuntu/video2.mp4
$ python3 deepstream_preprocess_test.py -i rtsp://127.0.0.1/video1 rtsp://127.0.0.1/video2
This document describes the sample deepstream-preprocess-test application.
* Use multiple sources in the pipeline.
* Use a uridecodebin so that any type of input (e.g. RTSP/File), any GStreamer
supported container format, and any codec can be used as input.
* Configure the stream-muxer to generate a batch of frames and infer on the
batch for better resource utilization.
* Extract the stream metadata, which contains useful information about the
frames in the batched buffer.
* Performs per-group custom preprocessing on the provided ROIs.
* Prepares a raw tensor for inferencing.
* nvinfer skips its own preprocessing and infers from the input tensor meta.
Note: The current config file `config-preprocess.txt` supports at most 12 ROIs. To increase the ROI count, raise the first dimension of `network-input-shape` (currently `network-input-shape=12;3;368;640`) to the required number. The current config file defines 3 ROIs per source, hence a total of 12 ROIs across all four sources. The total number of ROIs from all sources must not exceed the first dimension specified in the `network-input-shape` parameter.
Refer to the deepstream-test3 sample documentation for an example of simple
multi-stream inference, bounding-box overlay, and rendering.
This sample accepts one or more H.264/H.265 video streams as input. It creates
a source bin for each input and connects the bins to an instance of the
"nvstreammux" element, which forms the batch of frames.
Then, "nvdspreprocess" plugin preprocessed the batched frames and prepares a raw
tensor for inferencing, which is attached as user meta at batch level. User can
provide custom preprocessing library having custom per group transformation
functions and custom tensor preparation function.
Then, "nvinfer" uses the preprocessed raw tensor from meta data for batched
inferencing. The batched buffer is composited into a 2D tile array using
"nvmultistreamtiler."
The rest of the pipeline is similar to the deepstream-test3 sample.
NOTE: To reuse engine files generated in previous runs, update the
model-engine-file parameter in the nvinfer config file to point to an existing engine file.
NOTE:
1. For optimal performance, set the nvinfer batch-size in the nvinfer config file to the
same value as the preprocess batch size (network-input-shape[0]) in the nvdspreprocess
config file (see the cross-check sketch below).
2. Currently, preprocessing is supported only for the primary GIE.
3. Modify config_preprocess.txt as per the use case.
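As a quick cross-check for note 1 above, both config files can be read with
configparser and compared (the nvinfer config file name here is hypothetical):

    import configparser

    pre = configparser.ConfigParser()
    pre.read("config_preprocess.txt")
    infer = configparser.ConfigParser()
    infer.read("config_infer.txt")  # hypothetical nvinfer config name

    # network-input-shape is "N;C;H;W"; its first field is the preprocess batch size
    preprocess_batch = int(pre["property"]["network-input-shape"].split(";")[0])
    infer_batch = int(infer["property"].get("batch-size", "1"))
    assert infer_batch == preprocess_batch, (
        f"nvinfer batch-size ({infer_batch}) should match "
        f"network-input-shape[0] ({preprocess_batch})")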
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
# The values in the config file are overridden by values set through GObject
# properties.
[property]
enable=1
target-unique-ids=1
# 0=NCHW, 1=NHWC, 2=CUSTOM
network-input-order=0
processing-width=640
processing-height=368
scaling-buf-pool-size=6
tensor-buf-pool-size=6
# tensor shape based on network-input-order
network-input-shape=12;3;368;640
# 0=RGB, 1=BGR, 2=GRAY
network-color-format=0
# 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
tensor-data-type=0
tensor-name=input_1
# 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED
scaling-pool-memory-type=0
# 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC
scaling-pool-compute-hw=0
# Scaling Interpolation method
# 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
# 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
# 6=NvBufSurfTransformInter_Default
scaling-filter=0
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation
[user-configs]
pixel-normalization-factor=0.003921568
#mean-file=
#offsets=
[group-0]
src-ids=0;1;2;3
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=1
roi-params-src-0=0;540;900;500;960;0;900;500;0;0;540;900;
roi-params-src-1=0;540;900;500;960;0;900;500;0;0;540;900;
roi-params-src-2=0;540;900;500;960;0;900;500;0;0;540;900;
roi-params-src-3=0;540;900;500;960;0;900;500;0;0;540;900;
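Each roi-params-src-N entry above is a flat list of left;top;width;height values, one
quadruple per ROI. A small sketch of the counting rule noted in the README (the total
across all sources must not exceed network-input-shape[0]):

    import configparser

    cfg = configparser.ConfigParser()
    cfg.read("config_preprocess.txt")

    max_rois = int(cfg["property"]["network-input-shape"].split(";")[0])  # 12 here
    total = 0
    for key, value in cfg["group-0"].items():
        if key.startswith("roi-params-src-"):
            coords = [c for c in value.split(";") if c]  # a trailing ';' leaves an empty field
            assert len(coords) % 4 == 0, f"{key}: expected left;top;width;height quadruples"
            total += len(coords) // 4
    assert total <= max_rois, f"{total} ROIs exceed network-input-shape[0]={max_rois}"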
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
# Following properties are mandatory when engine files are not specified:
# int8-calib-file(Only in INT8)
# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
# ONNX: onnx-file
#
# Mandatory properties for detectors:
# num-detected-classes
#
# Optional properties for detectors:
# cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
# custom-lib-path,
# parse-bbox-func-name
#
# Mandatory properties for classifiers:
# classifier-threshold, is-classifier
#
# Optional properties for classifiers:
# classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
# input-object-min-width, input-object-min-height, input-object-max-width,
# input-object-max-height
#
# Following properties are always recommended:
# batch-size(Default=1)
#
# Other optional properties:
# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
# mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
# custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#scaling-filter=0
#scaling-compute-hw=0
[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
Prerequisites:
- DeepStreamSDK 6.1.1
- Python 3.8
- Gst-python
- GstRtspServer
Installing GstRtspServer and introspection typelib
===================================================
$ sudo apt update
$ sudo apt install python3-gi python3-dev python3-gst-1.0 -y
$ sudo apt-get install libgstrtspserver-1.0-0 gstreamer1.0-rtsp
For gst-rtsp-server (and other GStreamer libraries) to be accessible in
Python through gi.require_version(), they must be built with
gobject-introspection enabled (libgstrtspserver-1.0-0 already is).
We still need to install the introspection typelib packages:
$ sudo apt-get install libgirepository1.0-dev
$ sudo apt-get install gobject-introspection gir1.2-gst-rtsp-server-1.0
To get test app usage information:
-----------------------------------
$ python3 deepstream_test1_rtsp_in_rtsp_out.py -h
To run the test app with default settings:
------------------------------------------
1) NVInfer
$ python3 deepstream_test1_rtsp_in_rtsp_out.py -i rtsp://sample_1.mp4 rtsp://sample_2.mp4 rtsp://sample_N.mp4 -g nvinfer
2) NVInferserver
$ bash /opt/nvidia/deepstream/deepstream-<Version>/samples/prepare_ds_trtis_model_repo.sh
$ python3 deepstream_test1_rtsp_in_rtsp_out.py -i rtsp://sample_1.mp4 rtsp://sample_2.mp4 rtsp://sample_N.mp4 -g nvinferserver
Default RTSP streaming location:
rtsp://<server IP>:8554/ds-test
This document describes the sample deepstream_test1_rtsp_in_rtsp_out application.
This sample app is derived from deepstream-test3 and deepstream-test1-rtsp-out.
It specifically includes the following:
- Accepts RTSP streams as input and serves the inference results as an RTSP stream
- Lets the user choose NVInfer or NVInferserver as the GPU inference engine
If NVInfer is selected:
For reference, here is the config file used for this sample:
1. The 4-class detector (referred to as the pgie in this sample) uses
dstest1_pgie_config.txt
2. This 4-class detector detects "Vehicle", "RoadSign", "TwoWheeler", and "Person".
In this sample, we first create one instance of "nvinfer", referred to as the pgie;
this is our 4-class detector.
If NVInferserver is selected, the app:
1. Uses an SSD neural network running on the Triton Inference Server
2. Selects custom post-processing in the Triton Inference Server config file
3. Parses the inference output into bounding boxes
4. Performs post-processing on the generated boxes with NMS (Non-Maximum Suppression)
5. Adds the detected objects to the pipeline metadata for downstream processing
6. Encodes the OSD output and serves the visual output over RTSP
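The RTSP output at rtsp://<server IP>:8554/ds-test is typically produced by ending the
DeepStream pipeline in an RTP payloader plus udpsink, and pointing a GstRtspServer media
factory at that UDP port. A minimal sketch of the pattern (port numbers assumed):

    import gi
    gi.require_version("Gst", "1.0")
    gi.require_version("GstRtspServer", "1.0")
    from gi.repository import Gst, GstRtspServer

    Gst.init(None)

    # Pipeline tail (built elsewhere): ... ! rtph264pay ! udpsink port=5400
    server = GstRtspServer.RTSPServer.new()
    server.props.service = "8554"
    server.attach(None)

    factory = GstRtspServer.RTSPMediaFactory.new()
    factory.set_launch(
        '( udpsrc name=pay0 port=5400 buffer-size=524288 '
        'caps="application/x-rtp, media=video, clock-rate=90000, '
        'encoding-name=(string)H264, payload=96" )')
    factory.set_shared(True)
    server.get_mount_points().add_factory("/ds-test", factory)
    # A GLib.MainLoop must be running for clients to connect.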
/*
* SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: Apache-2.0
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// NvDsAnalyticsMeta
#include "bind_string_property_definitions.h"
#include "nvds_analytics_meta.h"
#include "../../docstrings/pydocumentation.h"
#include "pyds.hpp"
namespace py = pybind11;
namespace pydeepstream {
void bindanalyticsmeta(py::module &m);
}
DeepStream-Yolo @ 89089145
pyds_tracker_meta @ 5fe3b33c