Saturday, April 11, 2020

CUDA Toolkit Interest...

A few hours ago, I installed NVIDIA's CUDA Toolkit 10.2 (SDK, IDE tools/extensions, etc.) so I could start playing around with CUDA-accelerated video processing and encoding. The issue is, according to NVIDIA's documentation, CUDA Video Encoding support was discontinued after CUDA Toolkit version 6.0. But I need it for legacy hardware where NVENC is either so bad that it's not worth using, or non-existent. I want to use a compute card (Tesla K10) as a dedicated video encoding device. The problem is that Kepler's first-generation NVENC encoder is trash compared to everything else on the market, and makes NVCUVENC look amazing by comparison. NVCUVENC is the same tech I used a few years ago to make a GTX 550 Ti capable of accelerated video encoding, and it handled realtime 720p at 60fps just fine while I played modern games. So, imagine what an entire Tesla K10 with NVCUVENC could do for video encoding, since the encoding doesn't have to be done in realtime...
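To put that realtime claim in perspective, here's a quick back-of-the-envelope pixel-rate comparison. This is pure arithmetic, not a benchmark - the 1080p30 target is just an illustrative example of an offline encoding job:

```python
# Back-of-the-envelope pixel-rate arithmetic (illustrative, not a benchmark):
# if NVCUVENC on a single GTX 550 Ti sustained 720p at 60fps in realtime,
# how does an offline 1080p30 job compare in raw pixels per second?

def pixel_rate(width, height, fps):
    """Pixels that must be processed per second of video."""
    return width * height * fps

realtime_720p60 = pixel_rate(1280, 720, 60)   # what the 550 Ti kept up with
target_1080p30  = pixel_rate(1920, 1080, 30)  # a typical offline target

ratio = target_1080p30 / realtime_720p60
print(f"720p60 rate : {realtime_720p60:,} px/s")
print(f"1080p30 rate: {target_1080p30:,} px/s")
print(f"1080p30 is {ratio:.3f}x the pixel throughput of 720p60")
```

In other words, 1080p30 only needs about 1.125x the raw pixel throughput of 720p60 - and an offline job doesn't even need to hit realtime, which is why the K10 idea seems plausible on paper.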


Here are my sources thus far:

  • https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/quadro-product-literature/TB-Quadro_VideoCaptureStreaming_v01.pdf
  • https://docs.nvidia.com/cuda/cuda-samples/index.html#new-features-in-cuda-toolkit-6-5
  • https://developer.nvidia.com/video-processing-software-solutions-nvidia-developers

If anyone can help with this (and not try to advertise NVENC to me instead), it would be greatly appreciated. I'll be primarily working on Windows 10 for now, due to lack of a GPU-accelerated Linux environment. However, I fully intend to bring it over to Linux if anything ever comes from it - assuming the idea doesn't get shot down first...

Tuesday, January 21, 2020

Some Things Die Hard...

https://linustechtips.com/main/profile/511347-tophatproductions115/?status=254848&type=status

Sunday, July 28, 2019

Saturday, May 18, 2019

So, Here Is the Situation...


Hello there! Thank you for coming to read this blog post. I understand that you were interested in seeing what the situation is. But, before you commit to this, I must warn you. What you are about to read is quite lengthy, and will be best taken in when you're seated, with some decent time to spare. Rushing through this one will not do it justice, and you may miss important details. As such, please make sure that you have at least one hour to spare, so that you can get a good grasp of this. With that, I will now proceed.

Firstly, let's discuss the use of Google Drive. I currently have three Google accounts, each on its own 15GB free plan, to manage differing sets of files. The primary one holds files for college, major projects (TXP-VOS, TXP-Cloud, etc.), small media files (YouTube channel assets), and current finances. The other two are responsible for OS installation images, game ROMs, larger media files (though that role is slowly being deprecated in favour of Plex), and other sensitive files too important to discuss publicly (and thus they shall remain a mystery to you, the reader). I also have a Mega.nz account, which is used to complement and reduce the load on my Google Drive accounts (with encryption as a convenient side feature), for when I need another file mirror or have files too big for a 15GB Google Drive. But there is at least one issue with this setup - what happens if the Internet goes down?

Secondly, let's talk about the Internal Networking issue. I currently have two MacBooks (a MacBook4,1 and a MacBookPro4,1), a Windows laptop (HP ProBook 6475b), a Windows workstation (Precision T7500), and a decommissioned 2-in-1 laptop (HP tx1305us). The MacBookPro4,1 has thermal issues, needs a re-paste, and runs lethargically on its Core 2 Duo under High Sierra. It acts as the only local mirror for the online services listed above, and its AirPort/Wi-Fi adapter doesn't even work in High Sierra. It was brought up to date using third-party means, and all the care in the world won't save it if Apple decides to drop support for machines with either Core 2 or DDR2 (which wouldn't surprise me).

It was originally meant to be a more powerful alternative to my smaller MacBook, which runs El Capitan (also brought up to date using third-party means). Anything that couldn't be run (or stored) on the polycarbonate one could instead be loaded onto the more powerful, more spacious MacBook Pro - or so I thought. The better on-paper specs did not translate into noticeable real-world performance gains that I could reasonably take advantage of. And High Sierra is a RAM hog, unlike its predecessor El Capitan (which gets unkindly called Crapitan, although it runs better than High Sierra ever will on lower-end machines). But it's there as a necessary evil. Given the task the MacBook Pro is assigned, it's a liability for it to ever leave the house in most instances - which leaves it acting as a makeshift hot-box of a file server.

Thirdly, let's discuss project management and productivity. The other MacBook is used for programming/code management and video editing. Not because of performance (a Core 2 is not high performance by any means), but because of reliability and monetary concerns. The tools I've used on Windows thus far have been very unreliable with the MP4 video I export and upload - putting audio out of sync with video, lowering the resolution unexpectedly, and more. They also happened to be janky to use, and finding the functions/features I needed was a pain at times - which only decreases my productivity further when I'm busy redoing an export (due to the issues I just mentioned). On top of that, anything that avoids these problems usually costs enough to warrant simply buying better software outright, like Final Cut Pro and the like.

And as for code editing, the one time I did have a horrendous issue (a corrupted project file - forced to rebuild everything from scratch) was on Windows, on what should have been a more-than-capable Core i5. So performance is not my primary concern in this arena; it is consistency, reliability, and cost. As such, the heavy video editing and programming tasks I do tend to be saved for the MacBook. It also has the benefit of being portable, for when I have to go out in the middle of a work session. On a side note, the HP tx1305us was meant to be a media consumption device, and was dual-booted with elementary OS. But its poor thermals made it more of a liability than anything after a few years. It served its purpose well while it lasted. Alas, it was replaced with a Nexus 7 (2013, "flo"), and is set to be parted out and sold off in due time.

The tablet, on the other hand, is a good remote access terminal - with root, since I like doing advanced tasks with it (when I don't have time to pull out a laptop). Along the same line, the HP ProBook is a decent one as well - but with way more compute to spare. It handles general tasks when I'm outside the house that can't be accomplished (or done well) in either macOS or Android. It can also take an eGPU when I want to game away from home (and don't wish to remote into the T7500). It's a versatile remote console for many of my purposes, and the one device to have survived through my high school years. It is still my main laptop to this day, with the added perk of being easily user-serviceable. Strangely enough, I have two machines managing my music library via iTunes and MusicBrainz Picard: the MacBook4,1 and the HP ProBook, with the ProBook running an older version of iTunes (for compatibility reasons). I leave online music purchases to the MacBook4,1.

Fourthly, let us talk about pre-deployment testing. The Xeon rig that used to be my primary workstation, before I upgraded to the pair of X5680s I have now, is used to test software and OSes before I incorporate them into my primary workflow. As of now, it's test-running Manjaro. This is something I do not wish to run on mission-critical hardware unless absolutely necessary, and it would be isolated to a VM (with no way to test graphical acceleration) if I didn't have a spare PC to throw it on. I'd been meaning to start such trials much sooner, but a lack of spare time made it difficult - until I finally got the chance half-way through this past semester (holidays and such). This is a required task, since I'd otherwise be forced to use my mission-critical workstation and laptop(s) for testing - which takes far more time and preparation to do properly. And 2k USD worth of workstation is no laughing matter. Seeing a BSOD on that thing is like seeing Bigfoot...

Next, let's go over Task/Service Balancing across devices. The Precision T7500 is the workstation/makeshift server that handles the heaviest tasks I do. Anything that is simply too much for the laptops gets tossed onto this machine. And it handles a lot: in addition to acting as an optional DNS server, it runs (but is certainly not limited to) Plex Media Server, backups for multiple devices, content syncing from Steam and other game stores, occasional F@H, SVP4 Pro, and HD livestreaming. I was supposed to get into hosting multiplayer game servers and movie nights (via my Plex server) later this year, but I'm delaying that until I have the resources to do it properly. As of now, the T7500 is the most important computer in my fleet, with near-24/7 uptime expected of it (same for the MacBook Pro). A single minute of downtime spells trouble for most of the tasks I do, which is where part of the issue lies - it's a workstation trying to be a server...

In addition to that, only the Windows PCs are actually equipped with 1Gbps Ethernet. The MacBooks are stuck at 100Mbps, which is a pain for downloads and transfers of large files. I would still totally go for a 10Gbps router/AP to replace my current router/extender, because that thing tops out at 100Mbps - which is garbage. In fact, that's the only reason I've settled for a "Fast" Ethernet switch for the moment - the majority of my devices aren't even Gigabit-ready. So, I already have a network limitation on my hands.
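To show what that limitation costs in practice, here's a rough transfer-time sketch. It assumes an ideal line rate with zero protocol overhead (real numbers are worse), and the 10 GB file size is just an example:

```python
# Rough transfer-time arithmetic for the 100Mbps vs 1Gbps vs 10Gbps gap.
# Assumes ideal line rate with no protocol overhead - real numbers are worse.

def transfer_seconds(size_gb, link_mbps):
    """Seconds to move size_gb (decimal GB) over a link_mbps link."""
    bits = size_gb * 8 * 1000**3       # decimal gigabytes to bits
    return bits / (link_mbps * 1000**2)

for link in (100, 1000, 10000):        # Fast Ethernet, Gigabit, 10GbE
    print(f"{link:>5} Mbps: 10 GB in {transfer_seconds(10, link):.0f} s")
```

At 100Mbps that's over 13 minutes for a single 10 GB file; Gigabit cuts it to about 80 seconds. Hence the pain.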

And all of this leads to the discussion of Overall Ease of Management, Maintenance, and Servicing. Anyone who has seen my numerous status updates in the recent past has seen a glimpse of what it's like to use and service (and possibly even set up) these machines while they're at normal operational capacity. The few hiccups and glitches I do reveal are primarily associated with machines other than the workstation (which is a good thing). And while the status updates make it look like it only takes a minute, it's quite the opposite. I manage inventory for spare parts and thermal paste. I watch regular performance counters for signs of dying components. I install updates and patches, along with BIOS updates. I even update individual applications if necessary (and each computer is rockin' more than 40 apps). Then I test for basic functionality right afterward, to make sure nothing broke as a result of recent changes. This kinda leads to some disappearing weekends and holidays, which I grew to accept as time went on. Seeing the issue yet?

My entire fleet is a scattered mess. Heterogeneous is putting it lightly, since the amount of repair and management effort required to keep things going is quite large, and the returns on it are small. At any given moment, I could have up to 6 dissimilar devices that need to be plugged in, fulfilling a number of tasks that probably should be consolidated onto one more powerful, more efficient machine. When I go to take care of one, I get up, walk to wherever it is in the house, plug it in, and get to work. Then, when I reach a good stopping point (not necessarily done), I get up and go to the next machine on the list. While remote desktop access reduces this a little, in some cases it's still easier to work on a machine when it's right in front of you - especially when troubleshooting weirder issues that may involve faulty hardware. The management costs increase as these computers age, and typical software support cycles for Apple laptops mean almost half of my current fleet could be obsolete at the next major version change. Triple-booting the workstation isn't an option, because that causes unnecessary downtime and can spell trouble if something goes wrong during an update for one of the OSes.

The only reason I could even attempt to get away with that is because the workstation has multiple drives, but that introduces the issue of unavailability during power cycles, which run almost 5 minutes and up on this T7500 (yes - it's that slow to boot). Not to mention the finicky networking situation I currently have, since my router likes to randomly disconnect from the main AP, rendering my remote access inoperable whenever it has a fit. And to top it all off, just try calculating the power requirements for all of them running at full power on a bad day...
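For illustration, here's what that worst-case tally might look like. The wattages below are placeholder guesses of mine, not measured figures - plug in real numbers from a kill-a-watt meter to get an honest answer:

```python
# Quick worst-case power tally for an always-on fleet. Every wattage here
# is a placeholder guess for illustration, not a measured figure.

fleet_watts = {
    "Precision T7500 (dual X5680)": 700,
    "MacBookPro4,1 (file mirror)":   65,
    "X5472 test rig":               400,
    "networking gear":               30,
}

total_w = sum(fleet_watts.values())
kwh_per_day = total_w * 24 / 1000   # watts -> kWh over 24 hours

print(f"worst case: {total_w} W, ~{kwh_per_day:.1f} kWh/day")
```

Even with made-up numbers, the shape of the problem is clear: several always-on boxes add up fast, which is part of why consolidation looks attractive.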

Back in 2018, I was experimenting with ESXi 6.5 on the T7500, which gave me a glimpse of a solution to everything. Using a single virtualisation server to consolidate all of these tasks would decrease maintenance time, cost, and effort - to the point where I could actually go back to having completely free weekends and holidays for the better part of the year. A decent Type 1 hypervisor could replace the MacBookPro4,1, the Xeon X5472 (pre-deployment testing) rig, and even my current workstation (T7500) with a group of convenient VMs sitting behind a KVM switch for ease of use - possibly with less power draw than all of my currently-running 24/7 machines combined. With (somewhat) dynamic allocation of resources, I could easily decide on the fly which VM(s) and tasks needed more computational power, while suppressing other, less critical operations until the required tasks were complete. Most modern options that can run ESXi 6.7 also happen to come with 10Gbps NICs built in.

To add to this, a single, more powerful computer would also work better with my current space limitations and power availability, seeing that whatever replaces the T7500 needs to fit in the same corner/desk space it occupies. So, anything that requires a separate unit (i.e., a dedicated NAS that isn't rack-mounted or stackable) would probably be out of the question. And if I did go with a rack-mounted solution, I could just attach the monitor and keyboard to the side of the chassis with the proper equipment! The speakers and mouse could sit on the same platform as the keyboard. Whatever networking equipment (switches and routers) I buy next would also be chosen specifically for that rack-mount environment, for optimisation reasons. In other words, a hypervisor server would have been the perfect solution.

Having realised all of this a few months ago, I decided to start a new (and currently shelved) initiative - Project Personal Datacentre. And the worst part is, the internship that my parents and I were pushing so hard for stuck me on a wait list. So, wasted time - gone. I'm now left holding the grenade, and need to find a job fast. I guess I'm back in the fast lane? Nothing is certain anymore, and I'm just hoping they won't pull anything else that isn't in the cards, because I want some control back this year. I may just go back to Amazon if this doesn't get settled by the end of the month. At least I know I'd be making 15USD/hr. And my next semester is a part-time one, with only one or two classes at most. Gotta make the most of it...

Saturday, December 8, 2018

Rough Semester, and Content Season (as of late)

More delays came up, sadly:

  • https://www.youtube.com/channel/UCYGiDicmL49nVFUojYWKm6g/discussion?lc=Ugwr4QeOK-J4y09NKGx4AaABAg

Hopefully, I will be able to replace the stolen hardware in due time. Until then, new content will be coming in slower...