Video:

XTension is not a DVR or NVR but can be used to do some of the functions of each. The main recommendation for video in XTension is to have a relatively slow frame rate video available for embedding into views and web interfaces. You can also record this video to local drives on the Mac. There is a limit to how many 30fps streams of 1080p or larger video can be handled without significantly impacting other processes on your machine. In general I recommend that you stream a reduced framerate into XTension for viewing and recording and use the built in recording in the cameras to record the full size and full frame rate video. The smaller and less CPU intensive files will show you enough to know if you need to go to the camera for the full resolution versions.

In the olden days video in XTension was handled by the Video Pitcher app. This provided a hardware accelerated interface for decoding the streams, processing the video, sending it to XTension to be used in interfaces, and also recording it. Unfortunately Apple made many changes to the underlying libraries I used to develop it and it did not make sense to keep developing it. A single application was still only using a single CPU for most of its work, so a helper app was launched in the background to do the processing of the streams, but there too we quickly became limited by bandwidth and CPU/GPU speed between all the parts. The program still works and is still supported by XTension, but it has been deprecated in favor of the newer video APIs available in separate plugins.

The new video plugin system consists of a separate plugin for each stream type, as the interfaces and under the hood handling can be very different. There are no stream plugins for specific camera models, only for specific connection types, so they can be used with anything that supports that connection, including much older legacy systems that you might still be using.

Those plugins can then load camera API plugins to provide an interface to the controls of specific cameras or brands of cameras. This gives you the most flexibility in getting the video into XTension while still having control of the camera features through scripting in XTension. I can now add more stream types and more camera API support without having to change any of the existing ones.

Note: the new video system requires a more recent OS version than the program overall. You must be running at least macOS 10.15 Catalina in order to use these plugins.


As of this writing there are 3 stream plugins and 3 camera API plugins but more of both are coming. They are all included as part of the default XTension install.

JPEG Refresh Plugin:

The JPEG Refresh Plugin is probably the oldest of the ideas for getting video into XTension. Back in the days of proprietary camera systems it was often a lot of work to figure out the protocol they were using to send video, or you simply did not need a high frame rate video feed. The plugin simply requests a fresh still image at the interval you set, so this also works for any source of images that you might want to get into XTension and record.
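For example, if a camera or other device can serve a current still image at a URL such as http://192.168.1.50/snapshot.jpg (a made up address and path, the real one varies by device), you point the plugin at that URL and it will request a fresh copy at the Recording FPS interval described below.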

MJPEG Plugin:

The MJPEG Stream Plugin connects to anything that sends an “mjpeg” or motion JPEG stream. Many older cameras and other systems use this. It is a single HTTP request over which the camera keeps sending JPEG images one after another, each replacing the previous one. All browsers now support this. Most modern HD cameras will no longer send you an MJPEG stream, but it is in heavy use by many of the Raspberry Pi security camera systems and it is what most older SD cameras supported.
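For the curious, an MJPEG response looks roughly like the sketch below. The boundary name and exact headers vary by camera, but the pattern of one long multipart response with a JPEG in each part is the same everywhere:

	HTTP/1.1 200 OK
	Content-Type: multipart/x-mixed-replace; boundary=frame

	--frame
	Content-Type: image/jpeg

	(binary JPEG data for one frame)
	--frame
	Content-Type: image/jpeg

	(binary JPEG data for the next frame)
	...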

RTSP Plugin:

The RTSP Plugin handles a more modern protocol supported by most HD cameras. Usually no other configuration information or URLs need to be known other than the port that the camera is running it on. This kind of stream is more resource intensive on the XTension machine as it requires helper apps to decode and manage the low level protocol, which is usually H264 encoded for better video with lower bandwidth than continually sending every frame as a JPEG like the MJPEG plugin above.
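If you do need to enter a full stream URL, it generally looks something like rtsp://user:password@192.168.1.50:554/stream1, where everything after the port is manufacturer specific (on Amcrest cameras, for example, it is something along the lines of /cam/realmonitor?channel=1&subtype=0). Treat those paths as illustrations only and check the camera's documentation for the real one.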


The Video Encoder Service:

If you are going to do any recording you will also need a single instance of the Video Encoder Service plugin running. Only one instance is supported and creating more will not speed up the encoding. The Video Encoder Service has no options to set at this moment, so setting it up is simple.

In order to not overtax the CPU while recording, the data is saved to a temporary file on disk in whatever the most convenient format is, usually the format the data was sent in, to avoid unnecessary processing. When the recording ends, or when we reach the end of a snippet file length, the file is closed and queued to the Video Encoder Service plugin, which encodes it to H264 and creates the final output. For short files this can happen very quickly; for higher framerate or longer files it can take some time. It is important to make sure your system is not getting behind in encoding or you might never catch up. It is generally not necessary to record constantly from a video source, but only when the system has some other way to sense what is happening, or when the camera itself senses motion, an audio event or some other such thing. But do check that your system can keep up with your normal recording amounts.

The background helper app that does the encoding takes good advantage of the GPU on both Apple Silicon and older Intel chips and so runs as efficiently as possible. As an aside, you will see the app start up in the Dock as if it were an app with an interface and not a true background app. This is temporary, as there were some issues with some of the AVFoundation libraries that kept me from doing what I needed to with a fully background app. Just ignore the movement in the Dock when this is working. Since it was there I did add a progress bar to the Dock icon. The progress and the number of files queued are also displayed in the interface status display in the Interface Status window.

The encoder helper app is CPU friendly and should not cause too much extra load unless the system is already very overloaded. If it starts to be a problem for anyone please let me know, as I can add an option to run it at a lower system priority than it currently does, which would leave more CPU for other things though it would extend the encoding time of any given file.


Options Shared By All Plugins

There are several settings that are the same for all the video plugins. Those will be documented here with everything else being discussed on the specific wiki pages for the plugins.

Stream Name:

In most cases you will want the name of the plugin instance in the Interface List window to be the same as the name of the stream. That name is what appears in all the popups and lists of available video sources. If you wish the video stream to have a different name than the plugin instance, enter it in this field and that name will be used in all the video lists and user interface elements instead. You must still use the regular plugin instance name when sending scripting commands to the plugin.

Recording FPS:

The first recording settings field is for the Recording FPS. In the case of the JPEG Refresh plugin this is how often the plugin will request another image. In the case of the RTSP plugin this asks the helper apps to reduce the framerate that is being received before sending the frames up to XTension. For the MJPEG plugin this field is instead a “skip frames” counter: if you have a 10fps source and tell it to skip 1 frame, that reduces it to 5fps by skipping every other frame. This still sends all the data to XTension though, so if possible you should use settings in the stream link or in the camera itself to set the stream to the framerate you actually want to process in XTension.
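For the skip frames case the arithmetic is simply: effective fps = source fps ÷ (frames skipped + 1). So the same 10fps source with skip set to 4 would come through at 2fps.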

Playback FPS:

Just because you recorded 1 frame a second does not mean you want it to play back at that speed. You might wish to set the playback FPS to something like the default of 8, which will give you a fast motion version of what's going on so you can get through it more quickly. This affects only the setting inside the recorded movie file and not anything about the display of the live stream in XTension.
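For example, recording at 1fps for an hour produces 3600 frames; played back at 8fps that hour becomes a 7.5 minute movie (3600 ÷ 8 = 450 seconds).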

Snippet Length:

If a recording runs longer than the Snippet Length it will be split into a new file and the previous one sent to the Video Encoder Service for encoding. This value is in minutes and defaults to 5 minutes. Longer files result in longer encode times before a video is ready, shorter times result in many more movie files to sort through.
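For example, with the default of 5 minutes, a 12 minute recording will produce two 5 minute files and a final 2 minute file, each one queued for encoding as it is closed.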

Preroll Frames:

Setting some number of preroll frames keeps that many frames in memory, and when a recording is started, either by the Manual Recording unit being turned on in XTension or by use of the record from verb, that stored video gets written to the file first. If it takes a while for the camera to connect, or the motion sensors don't go off until after some event has started, you can keep the last however many frames in memory and actually record from before the event that started the recording.

This has some implications to consider however and you should not use this without an actual need. All the frames that you queue up are kept in rolling memory. That can be a LOT of memory for larger video streams with faster frame rates. There is a definite hard limit to how much video you can store this way on any given machine. The other implication is about CPU usage. For any stream not using the preroll if no interface is displaying it and no recording is being done, the stream is closed. This frees up those resources as the plugin is basically doing nothing until you tell it to start recording or open the web interface to look through it. If you set up a preroll it must keep the stream open and running all the time to keep this buffer of frames up to date.
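As a rough back of the envelope example: if a 1080p JPEG frame from your camera runs around 300 to 500 KB, then a 30 second preroll at 5fps is 150 frames, or somewhere around 45 to 75 MB of memory held for that one stream all the time. Actual frame sizes vary a lot with resolution, compression and scene content, but multiplying by several cameras shows how quickly it adds up.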

One useful side effect of that is that if you have a camera that is slow to respond to a connection or that has a significant latency to it you can force it to keep the stream open by setting this to just 1 frame. That won’t use much memory, but will have the stream already open and running when you go to look at it or record it. Many RTSP streams suffer from long connect times before the video starts to flow and that can be reduced or eliminated by setting this to just 1 frame for them.

Thoughts On Decreasing Video Latency:

The latency when starting or watching an RTSP stream can be significant depending on the camera type and the streaming settings. It is usually much less for a camera using an MJPEG stream as those can just start with a fresh image immediately. For an H264 stream, like the one inside most RTSP connections, the first frame cannot be sent upstream to XTension until the first key frame arrives. H264 and other video compression schemes do not necessarily send you a key frame when you initially connect, and you have to wait until one arrives before the stream can actually start. This is generally settable in the camera, and it is possible to reduce the delay quite a bit by decreasing the interval between keyframes. That increases the video bandwidth somewhat, but usually some tradeoff between the two is possible. I can't say exactly how this setting will be labeled as it differs by camera manufacturer and model. On the Amcrest model that I am testing with it is the “Frame Interval” setting on the Video Settings tab of the setup pages. A larger number decreases bandwidth but increases latency when starting the stream. For example: if I have the FPS set to 5 and the Frame Interval set to 10 then I would expect a keyframe every 2 seconds, so connecting to the camera will add anywhere from 0 to 2 seconds before I see anything. I reduced that from the default, which I believe was 20; 4 seconds was a long time to wait for the stream to start.
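Put another way, the keyframe interval in seconds is the Frame Interval divided by the FPS, and the worst case wait before the stream appears is roughly that whole interval.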

Record In Real Time:

Normally a recorded file is saved in a temporary format and, when complete, passed off to the Video Encoder Service plugin for encoding in the background. If you wish to decrease the time before a snippet of recording becomes available, or if a specific camera has such a large image or fast FPS that it would back up the queued encoding of other videos too much, you can opt to encode it in real time. This requires starting another instance of the helper encoder app in the background, and instead of processing a file it processes the frames in real time as they arrive. Obviously this has impacts on CPU and memory usage, though in my testing it isn't that bad unless you're trying to do it for a dozen streams at once or are running on a very elderly underpowered Mac.

Recording Folder:

Click the “Select” button to choose where you want to store any video files that you might record. Each video stream should be set to record into a separate parent folder so that they don’t overlap each other or confuse things.

Minimum Disk Space:

When disk space on the volume that holds the recording folder drops below this level, the saved video files will begin to be pruned, with the oldest being deleted first until the space is back above the minimum. No files outside of the selected recording folder will be deleted or touched in any way. Use notation like “5G” for keeping 5 gigabytes available or “1T” for 1 terabyte and so forth. Note that the scanning is not done in real time and other streams or programs may be using the same disk, so you'll want to keep enough of a buffer that it doesn't run out while recording between now and the next scan of the disk. Don't set this lower than a few gigabytes at least, and if you're recording to the boot drive make sure the system keeps a lot more available for itself.

Delete If Older Than:

In addition to the minimum disk space scanning you can also prune the files based on how old they are. If keeping a month of video recordings is enough then set this to 30. If you need a year's worth (assuming enough disk space) then set it to 360. Leave it set to 0 if you wish to let the disk fill up until the minimum disk space is reached and have it start deleting then.

Annotation Date Format:

Much of the information associated with the video, including time stamps and information overlays (which are not working yet but are coming in a future version), will use this format for the time and/or date that you wish to display. You can use the standard Python date formatting options described here: Python strftime Cheatsheet
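For example, the format string %Y-%m-%d %H:%M:%S renders as something like 2023-05-12 14:56:07, while %I:%M %p on %b %d gives 02:56 PM on May 12.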


Notes On Disk Usage:

Most computers nowadays use SSD drives, which are wonderfully fast but do not last forever under heavy use. Recording video all the time writes to the same disk sectors over and over, wearing them out faster than normal use would. I recommend that video recording be done to external drives, preferably of the spinning platter type, or at least to SSDs that are not internal to the Mac, since the internal drive of a modern Mac cannot be replaced. Since the plugin instances each have their own recording directory you could easily place each video camera on a separate external drive if you wished. I would not recommend recording video to the internal drive of a modern Mac.


Script Handlers:

Stream Stalled Handler:

All camera types implement a script handler that is called when the stream stalls. It is passed a count of the recovery attempts made since the stream went down. This way you can ignore a single stall that is recovered by closing and reopening the connection, but take other measures if the camera does not recover on the first try.

Use the “Insert…” toolbar item on the Edit Interface Script window to insert the template for the event.

(*		 S T R E A M   S T A L L E D

	This handler is called when a camera stops sending frames. stallCount is passed, which is the count
	of stalls in a row. This way you can tell if a camera is not recovering and take other action like
	rebooting or power cycling it if needed. The count will increase until the interface is restarted or
	the camera starts responding again.

	By checking stallCount mod 15 you will attempt to restart the camera only every 15th recovery attempt
	and so will not just keep rebooting it every 10 seconds and potentially never let it actually come
	back up again.

	The "reboot()" command comes from the Camera API for Amcrest Cameras, but others have similar commands.

*)

on streamStalled(stallCount)
	write log "Attempted to restore camera " & stallCount & " times." color red

	if stallCount ≠ 0 and stallCount mod 15 = 0 then
		write log "Attempting to restart camera due to extended outage" color red
		reboot()
	end if

end streamStalled

Camera API Plugins

The camera API can be selected separately from the connection method you use to get the video. Each video plugin type has a popup to select from the available camera APIs as well as an API Port field. In the case of an RTSP stream the camera API may be running on a different port than the one set in the stream URL. For example, if the camera API and web interface run on port 80 but the RTSP stream connects on port 554, then enter 80 in the API Port field.

For most streams it makes sense to embed the user and password, if any, into the URL itself. The Camera API may not work that way and may require that you also enter the information into the separate fields, check the “Send Authorization” checkbox and select the proper authentication type. Cameras are fickle about what they support this way so rather than try to auto detect you can select either Basic or Digest. Most modern cameras and all Amcrest cameras running any recent firmware version (like less than 10 years old or so) will require a digest authentication so try selecting that type first.

The Camera API plugins do not yet create any regular user interface to the camera settings, but rather support scripting commands to do whatever you wish with the camera. Those commands differ so much between camera manufacturers and models that no standardized interface is really possible or practical, except with ONVIF, which is coming in future versions.

By default, if possible, the plugins all try to connect to the camera's event stream to get at least video motion and sound detection into units in XTension. In some cases many more events can be attached to Units. Modern Amcrest cameras let you set multiple “Regions” in the motion detection, all of which can be set to show up in XTension Units in addition to the overall video motion detection unit. If supported by the camera, PTZ control features are also part of these plugins. This will be getting more detailed interfaces when the new Web Interface dashboard plugins are ready.

Please see the individual wiki pages for the commands and features supported by the different camera types.
