
Keeping data safe and accessible

During a recent emergency involving an electrical fire, the question of which minimal items I should save raised the topic of data security, protection, and redundancy within my home data-center environment. In the event of a sudden catastrophic hard drive failure, the likelihood of successful at-home data recovery is slim. How can I add redundancy to a home server so it survives a single or multi-drive failure? I’ve created a series of countermeasures by adding redundancy via 1) RAID level, 2) rsync, and 3) periodic manual cloud or local backups.

Each of these solutions can be adjusted to the specific hardware and physical situation where it matters, such as the filesystem and transfer protocol in use. Each also has one or more drawbacks, which may shift as more variables are added.

  1. Correctly selecting a RAID level that achieves the required read/write performance, supports the number of hard drives allocated, and either mirrors, stripes, or combines the two to distribute data across them seamlessly for the use case. As a simple example: RAID 1 is mirroring, so it certainly provides redundancy, and since reads can be served by both drives, it also boosts the array’s read performance. This suits a two-drive setup where the data is a high priority. (A rough capacity comparison appears after this list.)
  2. What good does data redundancy do if all the hard drives fail, or are stolen? The next layer of data protection is rsync. rsync can keep two servers in separate physical locations in sync and can be configured to send a server’s internal data to a location outside the network. The apparent disadvantage is needing two physical server machines on two separate networks. For a home situation, having two identical physical servers is overkill for 99% of users, so with a virtual private server, or a cheap single-board computer attached to a NAS, mirroring two servers can be added as a second layer of protection against catastrophic data failure. (See the rsync sketch after this list.)
  3. Periodic manual backups, while the least elegant solution, have saved me from countless drive failures. A physical external hard drive can be used to quickly and securely back up data on failing storage so that new hardware can replace the faulty or failing drives. However, sometimes external drives are already in use or otherwise absent, in which case free cloud storage can be taken advantage of. If your education provider uses the Google suite, then you have access to effectively unlimited cloud storage for as long as you are enrolled, which buys peace of mind about the integrity of the files you put there. Some “free” cloud providers hold files hostage by not letting you take them off the cloud until you pay a fee or purchase a subscription, so staying away from those providers should be a priority to ensure the files will make their way back to you down the road. Backing up data onto Google Drive or Dropbox is a good non-primary method to personally ensure data can be accessed later; however, the speed and automation of these services are always limited to the discretion of the cloud provider. Linus Tech Tips uploaded an in-depth video about taking advantage of seemingly unlimited cloud storage.
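To make the trade-offs in the first item concrete, here is a minimal sketch, not tied to any particular RAID controller, that estimates usable capacity and fault tolerance for a few common levels, assuming identically sized drives (the drive counts and sizes are just examples):

```python
# Rough usable-capacity and fault-tolerance estimates for common RAID levels.
# Assumes every drive is the same size; a real array is limited by its smallest drive.

def raid_summary(level: int, drives: int, size_tb: float) -> dict:
    if level == 0 and drives >= 2:        # striping: all capacity, no redundancy
        usable, tolerated = drives * size_tb, 0
    elif level == 1 and drives >= 2:      # mirroring: one drive's capacity, survives all but one failing
        usable, tolerated = size_tb, drives - 1
    elif level == 5 and drives >= 3:      # single parity: loses one drive of capacity
        usable, tolerated = (drives - 1) * size_tb, 1
    elif level == 6 and drives >= 4:      # double parity: loses two drives of capacity
        usable, tolerated = (drives - 2) * size_tb, 2
    else:
        raise ValueError(f"RAID {level} with {drives} drives is not modeled here")
    return {"usable_tb": usable, "failures_tolerated": tolerated}

if __name__ == "__main__":
    for level, drives in [(1, 2), (5, 4), (6, 6)]:
        print(f"RAID {level}, {drives} x 4 TB ->", raid_summary(level, drives, 4.0))
```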
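For the second layer, a scheduled rsync push to a machine outside the network can be wrapped in a few lines. This is only a sketch of the idea: the hostname offsite.example.com, the paths, and the assumption that rsync and SSH key authentication are already set up on both ends are placeholders, not details of my actual setup.

```python
# Minimal rsync-over-SSH mirror: pushes /srv/data to an offsite machine.
# Assumes rsync is installed on both ends and SSH keys are already exchanged.
import subprocess

SOURCE = "/srv/data/"   # trailing slash: sync the directory's contents
DESTINATION = "backup@offsite.example.com:/srv/mirror/"   # hypothetical offsite box

def mirror() -> None:
    subprocess.run(
        [
            "rsync",
            "-a",          # archive mode: recurse, keep permissions, times, symlinks
            "-z",          # compress data over the wire
            "--delete",    # make the mirror exact (removes files deleted locally)
            "-e", "ssh",   # tunnel the transfer over SSH
            SOURCE,
            DESTINATION,
        ],
        check=True,        # raise CalledProcessError if the sync fails
    )

if __name__ == "__main__":
    mirror()   # run from cron or a systemd timer for periodic mirroring
```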

How and why you should optimize your Wi-Fi network

If you live in an urban sprawl like I do, then you know the sheer density of Wi-Fi signals bouncing around at any given time. But have you ever considered the negative implications of being in a Wi-Fi-dense area? If you’re running into wireless connectivity problems like I used to, they could very well be caused by wireless overlap and can often be fixed with a few tweaks to your router / wireless access point (WAP) settings. To start troubleshooting this common network hiccup, first evaluate your network environment by asking a few questions:

  • Do I have more than one router in a single area? If so, how many SSIDs are broadcast? Some routers combine 2.4 GHz and 5 GHz into a single SSID, while some older ones broadcast them separately (usually with different passwords).
  • Of the router(s) in my situation, are all of the Wi-Fi networks necessary? If not, they should be disabled.
  • If more than one router is necessary in a small area, could the Wi-Fi channel be changed on each of them individually to reflect not only the needs of the router, but also the network noise and environment in that area?

Many home gateways support changing the Wi-Fi channel or band for noisy network environments. If the demand for bandwidth is unexpectedly exceeding the capabilities of your wireless network, the most probable culprit is crowded channels.

How do you determine which channel to broadcast your wireless network on? Never pick one at random: interference is caused by nearly anything with electronics in it, and after all there are 11 channels for 2.4 GHz and several dozen for 5 GHz, depending on region. In short, you need an RF scanner. A significant portion of modern routers have one built in and can automatically determine the clearest channel; some will do this on first boot or after a factory reset. However, a manual adjustment or an additional RF scan may be needed down the road after adding more and more wireless devices to multiple networks.
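The scan itself is OS- and hardware-specific, but once you have a rough count of neighboring networks per channel (from your router’s built-in scanner or a phone app), picking the least crowded non-overlapping 2.4 GHz channel is simple arithmetic. A small sketch with made-up scan results:

```python
# Pick the least crowded of the three non-overlapping 2.4 GHz channels (1, 6, 11).
# networks_per_channel is hypothetical scan output; substitute your own survey.

networks_per_channel = {1: 7, 2: 1, 3: 0, 4: 2, 5: 1, 6: 9, 7: 3,
                        8: 0, 9: 1, 10: 2, 11: 4}

def crowding(channel: int) -> int:
    """Networks on this channel plus neighbors close enough to overlap it."""
    # 2.4 GHz channels are 5 MHz apart but ~20 MHz wide, so anything within
    # four channel numbers bleeds into the one we're considering.
    return sum(count for ch, count in networks_per_channel.items()
               if abs(ch - channel) <= 4)

best = min((1, 6, 11), key=crowding)
print(f"Least crowded non-overlapping channel: {best} "
      f"({crowding(best)} overlapping networks)")
```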

What about DNS?

If you wind up in a situation where it takes a concerning amount of time to access a website, say google.com, then it is possible your Domain Name System (DNS) resolver is configured poorly. What do I mean by a poorly configured DNS? Simply that the server responsible for resolving the domain names of the websites you visit is taking too long, timing out, or failing to resolve names altogether. Setting trusted DNS servers per device, or configuring trusted upstream DNS servers on the router, such as Cloudflare’s or OpenDNS’s, can fix this. But why does the problem arise in the first place? Internet service providers often run their own DNS servers, which can slow down drastically under excessive load and other common maintenance problems; Comcast “Xfinity” DNS servers are notorious for this.
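To check whether your current resolver really is the bottleneck, you can time the same lookup against a couple of public resolvers. A rough sketch using the third-party dnspython package (pip install dnspython); the resolver IPs are Cloudflare’s and OpenDNS’s public servers, and the domain is just an example:

```python
# Compare DNS lookup latency across public resolvers using dnspython.
import time
import dns.resolver   # third-party package: pip install dnspython

RESOLVERS = {
    "Cloudflare": "1.1.1.1",
    "OpenDNS": "208.67.222.222",
}
DOMAIN = "google.com"

for name, ip in RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)  # ignore the system's resolver settings
    resolver.nameservers = [ip]
    resolver.lifetime = 3.0                            # give up after 3 seconds
    start = time.perf_counter()
    try:
        answer = resolver.resolve(DOMAIN, "A")         # dnspython >= 2.0 API
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{name} ({ip}): {elapsed_ms:.1f} ms -> {answer[0]}")
    except Exception as exc:                           # timeouts, SERVFAIL, etc.
        print(f"{name} ({ip}): lookup failed ({exc})")
```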



Ethics of digital rights management

Do you really own the products you buy? It seems more and more apparent that tech companies discourage tampering and unauthorized repair, and in the case of DVDs, shouldn’t the initial cost of a movie cover unlimited use of it? In reality, due to digital rights management, the licenses distributed to the end user only cover mostly-unlimited use within terms pre-defined by the companies who profit off the end user’s limited legal access to content they rightfully own. Despite the piracy warnings that play at the beginning of movies on digital versatile discs, as a consumer of content and owner of the physical media license, copying and digitizing discs shouldn’t be criminal if the intention isn’t to distribute them. I can support this claim with use cases that justify copying or digitizing content even when the license legally forbids it:

  • Ripping DVDs onto a personal media server so they can be streamed when away from home.
  • Ripping media so it can be upscaled with modern AI upscaling techniques to give the appearance of a higher resolution.
  • Reducing the physical footprint of a DVD library by backing the discs up onto network-attached storage.
  • Converting .mpeg files to a more reliable and futureproof file format (a conversion sketch follows this list).
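For that last use case, the conversion itself is usually a thin wrapper around ffmpeg. A minimal sketch, assuming ffmpeg is installed and that H.264 in an MP4 container counts as “more futureproof” for your purposes; the file names are placeholders:

```python
# Re-encode an MPEG-2 rip to H.264/AAC in an MP4 container with ffmpeg.
# Assumes ffmpeg is on PATH; the file names are placeholders.
import subprocess

def convert(src: str = "movie_rip.mpeg", dst: str = "movie_rip.mp4") -> None:
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,            # input file
            "-c:v", "libx264",    # widely supported modern video codec
            "-crf", "20",         # quality target (lower = better quality, larger file)
            "-preset", "slow",    # spend more CPU time for better compression
            "-c:a", "aac",        # re-encode audio to AAC
            dst,
        ],
        check=True,               # raise if ffmpeg exits with an error
    )

if __name__ == "__main__":
    convert()
```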

Understandably, there’s no reason you should have to repurchase a movie when you inevitably switch platforms and upgrade to a media machine that doesn’t have a disc drive. It’s unfair for an end user to have to put up with content protection when piracy as a whole has become, completely and ironically, victimless. As discs fade away, one has to wonder why Hollywood’s lawyers cling to the notion that discs are not only relevant in this day and age, but also that they must be protected as if they were. Even if the underlying position stands that copying content you don’t hold the rights to should be considered criminal, it’s exceptionally unlikely any one person will be prosecuted for violating the digital licenses enforced by a monopolized cinema hierarchy. As an ending note, the legality of this subject is complex yet slow to change in favor of consumers, so I and others interested in digitizing media for personal use have to remain technically in the wrong. In the meantime, as we wait for the law to catch up with modern media and copy protection, watching the movies I purchased in the modern way that I choose will stay illegal.


A briefing on VPNs through NAT

When I originally created a VPN, I used the rudimentary port-forward method that plenty of online guides hesitantly endorse. I wrote off the underlying security malpractice, and the various compromises this method entails, as an acceptable oversight and continued using my flawed virtual network for encryption on the internet. One of my two use cases is fulfilled with this deployment; the other is gaining a local presence inside my network. If I want to access my media server from outside the network, I need layer 3 capability to roam as a local network user. With my custom build of OpenVPN running on Arch Linux, I ran into constant errors accessing network locations: through some basic diagnostics I could establish a connection, and a secure one at that, again and again, but I could never pull data off a Samba share or an FTP server.

I read a very detailed and elaborate article about VPN passthrough and NAT, and came to some conclusions about what in my network could be the root of my issues. My multi-router setup allots certain services to dedicated subnets, both for security and for bandwidth allocation, dividing the throughput of packets being bounced around my network. I found that the source of my problems stems from the specific series of routers the VPN server physically sits at the far end of. This presented me with options for how to configure the port, or series of ports, that are opened to face the internet. Ideally, no ports should be permanently opened but rather dynamically triggered upon request; however, for simplicity on a multiply-translated network such as my own, a port can be triggered dynamically on the router closest to the server, with an endpoint on the main router via opened ports. With the main downside being predictability in the network’s security, custom ports can be used to make it harder for attackers with scanning software to find a way in. Another downside of this method is the lack of redundancy: if one router is taken offline, the series of opened ports will cease to function and will pose a security risk for the rest of the network. This made me choose the final deployment method with the layer 3 access I need. If only the router closest to the server is triggered to open the port, then, with the upside of security, the VPN will serve that subnet only. This works well enough with OpenVPN, but switching the server to WireGuard is not just a worthwhile upgrade but a necessity for transferring files at the desired speeds. I plan to keep this server updated and maintained until I require an upgrade, and when that time comes I will update this blog entry.
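One quick way to sanity-check that the whole forwarding chain is still alive is a reachability probe run from a machine outside the network (a phone hotspot or a cheap VPS works). This is only a sketch under assumptions that are not from my actual setup: it presumes the VPN endpoint listens on TCP at a custom port, which holds for OpenVPN in TCP mode but not for UDP-only setups such as a default WireGuard install, and the hostname and port are placeholders.

```python
# Probe the VPN endpoint from OUTSIDE the home network to confirm the
# chain of port forwards across both routers is still intact.
# Only meaningful for TCP endpoints (e.g. OpenVPN in TCP mode); UDP
# services like a default WireGuard install won't answer a plain connect().
import socket

ENDPOINT = "vpn.example.org"   # placeholder public hostname / dynamic DNS name
PORT = 4443                    # placeholder custom TCP port

def endpoint_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True        # TCP handshake completed: the forwards are up
    except OSError:
        return False           # refused, timed out, or DNS failure

if __name__ == "__main__":
    status = "reachable" if endpoint_reachable(ENDPOINT, PORT) else "NOT reachable"
    print(f"{ENDPOINT}:{PORT} is {status}")
```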

Minecraft server update 6/19/21

1.17 is just around the corner, and in anticipation of the new world features, the pre-generated world was reset in favor of a non-pre-generated one. In the past I saved RAM by pre-generating the world, making it easier on the server to load chunk data. This has the apparent disadvantage of not carrying over to the 1.17 update, as the new caves are generated in fresh chunks outside of and around spawn. The world has been reset about a week or two earlier than anticipated due to massive chunk errors caused by a mistake in the custom world generation. Because blocks have custom physics, floating islands cause lag and other unwanted behavior, such as a large number of entities gathering in the huge shadows the islands cast. Another problem arose from the new Java requirement, which made many plugins inoperable or caused unexplainable behavior during testing.

Massive floating islands caused by the world generation config being reset from a failed 1.17 update.
High CPU/RAM usage when no players are online has been addressed. It would sometimes cause the server to fail to respond and thus never restart, and the cycle would continue.

The server has been upgraded to 24 GB of RAM, as I expect chunk loading to take a bigger toll on server performance now that the world is no longer pre-generated. I expect CPU usage to rise nearly 50% with this update, so lots of measures need to be put in place to prevent the server from overloading, which lately has been very common with 1.17 and legacy plugins. The update has been postponed until at least PaperMC receives the 1.17 update, and not just the early test build.

It is best to wait so as to be certain the server is stable.

Hopefully Purpur receives an update soon after, so the server can continue using Purpur’s advanced features. Some plugins have been fast to update and others have been slow; I’ve already begun working on updates to the server’s custom plugins and now must wait for the community to update to the new version. The server is still waiting on Essentials, the animated tablist plugin, and the dozens of legacy plugins the server uses. Many optimizations have been made or are in progress, such as dealing with datapacks and their inherent instability across updates, and with the new requirement of Java 16. Lots of testing will be done on a separate server to make the update as smooth as possible. So far, nothing but chaos has come out of 1.17, so my expectations are low that the update can be made in the next week, but I will continue applying stable updates as needed leading up to 1.17 and optimizing the server for use on all platforms. Cracked accounts have never been allowed, as they can bypass the name restrictions put in place by Mojang. I’m fully aware of the bypasses cracked users can use to join premium servers; doing so is extremely discouraged, though it can be done as long as a unique username is used. That being said, Bedrock and Java are both supported, and piracy of the game is not at all encouraged as a way to play on the server. Let me know if there is anything I can assist with.

Minecraft server update 5/17/21

I’ve been made aware that the built-in Minecraft whitelist protocol is failing to add the UUIDs of Bedrock players to my Minecraft Java server. I noticed that the /whitelist command only searches for player UUIDs on Mojang’s account servers, and I have amended the command for use by in-game operators to fix this bug. Minecraft Bedrock players no longer need to join with the whitelist disabled; they can now be added just like Java players. All Bedrock players have a “*” in front of their name, which indicates that some Java-only features will be missing. I’ve modified the server’s anti-cheat and flight values so they don’t flag false positives on Bedrock players when their movement exceeds the tick rate of the server. The goal is a solid 20 ticks per second, but that isn’t always achievable even on my modern hardware, so please allow for some tick-rate fluctuation, as well as for some quality-of-life features to be disabled, as we allow more and more Bedrock players access to the world. One notable feature that has been disabled is the planting of tree saplings, which previously caused unusual tick-rate fluctuation. Please note that Craftory Tech items will make little sense to Bedrock players, as they can’t see invisible armor stands or custom furnaces; however, the GUI on these items will still work outside of the recipe book: “/cr recipebook”. If you have any questions, let me know at jake@serverboi.org, or fill out a whitelist application to join in on the survival server.

Introduction post

This is an introduction post.

After much consideration, I’ve created this WordPress site to commemorate and show off my computer projects and the questions I run into while developing them.

Why do this?

  • I plan to update this site periodically, as a personal archive and portfolio to look back upon in the future.
  • Logging and explaining my ventures allows me to better understand what I’m doing, and if I make mistakes, I can record them.
  • To create a medium to express my thoughts on computers and technology.

Why am I blogging publicly?

  • To share like views with community members while creating connections and sharing professional opinions in a more interactive way.
  • To create an archive of topics and subjects that I either am working on, or plan to work on in some form.