I have been running fanatically for the last few years, and my running shoes are one of the most important parts of that. Over the years I have worn out quite a few brands, but the shoes from ON are the ones I am most happy with. Not without some bumps along the way, though, so here is an extensive review.
The first model from the ON collection for me was the Cloudace:
This shoe is perfect for long distances such as 10 km, half marathons and ultra marathons, for heavier runners, and for recovery runs, all mainly on the road. The shoe weighs 335 grams.
This was a very comfortable shoe for me! I am a runner with a weak knee, and this was the first shoe that gave me no knee problems.
I was so happy with these shoes that I ran more than 1200 km in them before they broke. The side of the shoes tore, see the photo.
I have to say these shoes are not available everywhere; they were recommended to me at Runnersworld in Lelystad. Thanks to them I started running on these shoes, and to this day I am very happy with that. Nowadays the shop is called Budget Running Lelystad. Great people who give honest advice and good service. Definitely worth a visit if you need new shoes or other running gear.
But back to my review of the shoes. The next shoe they recommended to me was the ON Cloudflyer.
My first ON shoes (the Cloudace) were for long distances, and because I started focusing on 5 to 8 km training runs and races, the Cloudflyer was recommended. It is a much lighter shoe and also runs great. Still injury-free, but a problem appeared after 3 months... After a training run I noticed that the sides had torn open (just like the Cloudace). I could not find a photo of it anymore, but Runnersworld gave me a new pair. Unfortunately those also tore after 3 months. I have slightly wide feet, which could be the cause. ON reported no similar complaints from other users... so it must be me. Runnersworld then recommended another shoe, the ON Cloudstratus.
You can still see the Christmas tree in the background, and that is how it felt to me: a real Christmas present! This shoe is one of the few wide-fit models ON offers. I am so happy with these shoes that I am already on my third pair. I have never had any problems with them, as long as I replace them neatly after about 500 km. No injuries, no material defects.
Because the Cloudstratus is quite a heavy shoe (305 grams), I also tried the ON Cloudswift once. It is almost the same shoe as the Cloudstratus, but it is not available in a wide fit. After a few training runs my knee problems returned, so I immediately stopped running in them. I now wear them in my free time.
ON also makes racing shoes: shoes that are as light as possible and offer (almost) no cushioning. Not really a shoe for daily training, but definitely great shoes for races. I chose the ON Cloudflash: 210 grams and perfect for 5 km to 10 km road races.
Hopefully this review has given you some more information about ON running shoes. Want to try ON yourself? Go to the website and use the 'shoe finder'; here is the direct link: https://www.on-running.com/en-nl/shoe-finder There you can indicate whether you use the shoes for everyday wear, running or walking. After each choice you get a new one. For example, if you choose running, you are then asked whether you want shoes for the road, the track or the mountains. A few more questions follow, and your overview changes each time. See the example below.
Nowadays I know my size and which shoe I want, so I order directly from ON. I usually receive new shoes within about 2 days, really fast for shoes from abroad. Delivery is free and is handled by UPS. A return label is included with the shoes. Really well organised. So if you also want ON shoes, go to https://www.on-running.com/
If you want to surf the internet comfortably, you have surely installed an AdBlocker! If not, here is a free tip 🙂 You can install an AdBlocker as an add-on in your browser. But unfortunately, advertisements on webpages are not the only advertisements we see. No, we see them on YouTube, Google, Facebook, in the apps on our phones and sometimes even on our TVs. The crazy thing is that my kids install a lot of free games on their phones and get a lot of advertisements, sometimes not even suitable for kids... Next to that, I want to block some websites, in case they accidentally end up in the wrong places. Luckily there is a solution for that, and it is FREE!! It is called Pi-Hole, a DNS advertisement blocker, and in this blog I am going to explain how I installed it on my network.
In my Proxmox environment I created a new container with Ubuntu 16.04. I configured 4GB of storage, 1 CPU, 1GB of memory and a static IP address. It must be a static IP address because it is going to be the future DNS server for all my devices.
After creating the Ubuntu container, log in via SSH # ssh root@ipaddress
NOTE: if you cannot log in via SSH because of the error "Permission denied, please try again.", go to the CLI on the Proxmox server and edit the file /etc/ssh/sshd_config # nano /etc/ssh/sshd_config Change the option "PermitRootLogin" to "yes" and restart the SSH service # systemctl restart ssh
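The sshd_config edit above can also be done non-interactively from the container's console; a minimal sketch, assuming the stock Ubuntu config where the PermitRootLogin line is present (possibly commented out):

```shell
# Enable root login over SSH (matches both "PermitRootLogin ..." and "#PermitRootLogin ...")
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config

# Restart the SSH service so the change takes effect
systemctl restart ssh
```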
Set up your timezone # sudo dpkg-reconfigure tzdata In my case I chose Europe – Amsterdam # sudo apt install ntp -y # service ntp status # date Check if the date/time is correct
Install Pi-Hole # apt install curl # curl -sSL https://install.pi-hole.net | bash Follow the installation wizard; I only mention the options I changed: Select Upstream DNS Provider – Google
After the installation is completed, you receive an overview of the settings and the password. You can now go to the Pi-Hole website # https://ipaddress/admin
Change the password to something you can remember 🙂 # sudo pihole -a -p
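Once your devices use the Pi-Hole as their DNS server, you can verify from any machine that blocking actually works. A quick sketch, assuming 192.168.1.10 is the Pi-Hole's static IP; doubleclick.net is just an example of a domain that typically appears on the default blocklists:

```shell
# A normal domain should resolve to its real address
dig +short example.com @192.168.1.10

# A blocked ad domain should come back as 0.0.0.0
# (or as the Pi-Hole's own IP, depending on the configured blocking mode)
dig +short doubleclick.net @192.168.1.10
```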
The last step is to add this IP address as the primary DNS server in your router.
Have fun on the internet without crazy advertisements 🙂
Update: If you don't see any activity after some time, update your gravity. Go to your Dashboard, Login, Tools, Update Gravity, Update.
For the people who know me, it is nothing new that I love listening to podcasts. I do this mainly during my running sessions, which I do alone during Corona. I listen to podcasts via Spotify, and normally mostly technical ones, until I got a tip to listen to 'De brand in het landhuis'. Below is a personal review without giving too much of the podcasts away.
De brand in het landhuis by NPO Radio 1 / NTR I saw this tip pass by on Instagram in one of my friends' stories. Without any research I put the podcast on, and during my running session I could immediately listen to 2 episodes. The calm voice and friendliness of narrator Simon Heijmans pull me into the story. When I get home, I immediately tell my wife how well it is all put together. The right background music and background sounds make it feel like you are there. "It seems like it really happened," I said to my wife. But the twists I had already heard gave me the feeling it was a good book. After listening to another episode, I went online to check whether it really happened. And yes, it is not fiction, which makes the podcast even more exciting. I have to admit I ran a few more kilometres than usual that week just to finish the podcast faster. This podcast is, so far (2021-03), my clear number 1! See this website for more information and to listen to the podcast for free: https://www.nporadio1.nl/podcasts/de-brand-in-het-landhuis
De Kasteelmoord by NPO Radio 1 / VPRO After De brand in het landhuis I wanted to hear more true crime podcasts like it, and another friend tipped me about De Kasteelmoord and El Tarangu. So: running shoes back on, earbuds in, and off I went. If you like true crime this is definitely also recommended, but I have to say it does not match De brand in het landhuis. You are again drawn into an investigation into the castle murder, and your feelings and emotions are tested again. I found it a good podcast, but not a top one. See this website for more information and to listen to the podcast for free: https://www.vpro.nl/programmas/de-kasteelmoord.html
El Tarangu by NPO Radio 1 / VPRO I love sports (well, I am a runner...), and this one is partly about sports, but there is a big question to be resolved. Three cheerful ladies investigate and come to surprising conclusions. Definitely a fun podcast for athletes and for people who like solving mysteries. I was certainly not bored during this podcast. See this website for more information and to listen to the podcast for free: https://www.vpro.nl/programmas/El-Tarangu.html
Again busy with a great project and... the recorded sound is mono and totally out of sync. At this point I only hear sound on the left side (I always work with a headset on) and the sound is about 0.8 sec behind. I found an easy way to solve this.
The first thing you need to do is to detach the audio from the video.
Now you have 2 clips, 1 video and 1 audio, which you can edit separately.
You can easily move your audio to the desired place to get it in sync again.
To change mono to stereo, click on the audio track. Go to the Audio Configuration (top right of your screen)
Change Stereo to Dual Mono (see above). Now your mono stream also plays on the right side. Problem solved 🙂
I was busy with a very long video in Final Cut Pro, a combination of many short movies. But... as usual, the audio was not great everywhere... Luckily Final Cut Pro has some great features available. One of those is 'Auto Enhance Audio', which can be found in Menu > Modify > Auto Enhance Audio
The first thing you need to do is to select the clip you want to enhance. Then you click on Auto Enhance Audio (or press Option-Command-A). The audio will be analysed and enhanced. When finished you see a green check mark at the audio stream.
I bought an iMac with a Magic Mouse. Unfortunately I cannot work with that mouse... so I connected a 'normal' mouse via USB. But strangely, the mouse tracking was very, very slow. So I went to Apple menu > System Preferences, clicked on Mouse and changed the speed to the maximum. But still... I needed to move my mouse twice to get from the left to the right side of the screen. Very frustrating, I need more speed!!! So, what do we techies do? Yes, I Googled my solution 🙂 I found a bunch of apps that could help me solve the problem, but those apps also install a lot of extra features which I do not need. So that is a no-go for me... also, most of the apps are not free, and if I just want to change a basic function on a Mac, I want to do it for free. Luckily I found a solution that can be done via the Terminal. Let me explain.
Step 1 Open the basic terminal of your Mac
Step 2 Check your current speed with the command # defaults read -g com.apple.mouse.scaling In my case it came back with 3.0
Step 3 Change your current speed with the command # defaults write -g com.apple.mouse.scaling the_speed_you_want You can go to a max of 7.0. After some testing, 6.0 was my sweet spot, so I used this # defaults write -g com.apple.mouse.scaling 6.0
Step 4 Reboot Unfortunately you have to reboot to activate the new setting, so while testing you will probably have to reboot a couple of times...
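If you end up tuning this value a few times, a tiny helper script saves retyping; a minimal sketch around the same defaults key (the script name is my own):

```shell
#!/bin/sh
# mouse-speed.sh - read or set the macOS mouse tracking speed
# Usage: ./mouse-speed.sh            -> print the current value
#        ./mouse-speed.sh 6.0        -> set a new value (max 7.0)

if [ -z "$1" ]; then
  defaults read -g com.apple.mouse.scaling
else
  defaults write -g com.apple.mouse.scaling "$1"
  echo "Mouse scaling set to $1 - reboot to apply"
fi
```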
Okay, there are many ways to make your home smart. But what is the best way? What is the most secure way? Great questions. For now I do not have the answer to that. I already had a LED dimmer from EcoDim (https://www.ecodim.nl/eco-dim07-zigbee-pro.html). It supports the Zigbee protocol, so that is what I am going to use. Zigbee is also used by the Philips Hue and IKEA hubs. Of course I am an IT guy, so I am not going to buy a gateway/hub; I am going to build one 🙂
For this build, I am using a Raspberry Pi 3 with a ConBee II (Zigbee USB-Gateway) and I am going to install Domoticz (https://www.domoticz.com/) on it.
The first step is to go to the ConBee website and download the installation image. They have ready-made images for you. I chose the Raspbian Buster Headless (beta image without desktop, based on Debian Buster). To write this image to the SD card, you need the Etcher program. Download, install and start it.
1. Insert the SD card into an SD card reader
2. Press Select image and select the downloaded SD card image (Phoscon_Gateway_Headless_2020-10-16.xz)
3. Press Select drive and select the SD card drive
4. Press Flash! to start the copy process
Place the SD card into the Raspberry Pi, connect the network and plug in the ConBee II USB stick. Add power to boot. SSH is enabled by default. Find the IP of the Raspberry Pi by checking the DHCP list on your router, or by connecting a monitor and running # ip a
Login credentials are: Username: pi Password: raspbeegw
Change basic configurations by running command with root permissions: # sudo raspi-config
Change the following settings (feel free to change more):
1. Change User Password
2. Network Options – Hostname
4. Localisation Options – Time Zone
7. Advanced Options – Expand Filesystem
8. Update
Some of the settings need a reboot to get activated; run the command # sudo reboot
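If you prefer scripting the setup instead of walking through the menus, raspi-config also has a non-interactive mode. A sketch under the assumption that the `nonint` subcommands below exist on your image (they do on recent Raspberry Pi OS releases, but verify first; the hostname is an example):

```shell
# Non-interactive equivalents of the menu options above
sudo raspi-config nonint do_hostname zigbee-gw             # hostname (example name)
sudo raspi-config nonint do_change_timezone Europe/Amsterdam
sudo raspi-config nonint do_expand_rootfs                  # expand filesystem

# Reboot to activate the settings
sudo reboot
```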
On the Pi or on a different computer in the same network, go to http://phoscon.de/app Click on the icon at the top (some kind of "P") to start the search for your Phoscon gateway. When it is found, it will be displayed below the search bar. See the screenshot below.
When the gateway is found (in my case the Phoscon-GW), click on it. Set a Gateway name and a Login password and click on Next.
In the next step you can connect lights, but I haven't installed any yet, so I continue with the button in the top right: Proceed without lights ->
In the next step you can create your first group by clicking on Create first group, give it a name (in my case Keuken) and click on Create
Go back to the Raspberry Pi (via SSH) to download and install Domoticz. Start the installation by running the command # curl -L https://install.domoticz.com | bash The installation will start a wizard.
Select Services: Both HTTP and HTTPS
HTTP Port number: 8080
HTTPS Port number: 443
Installation folder: /home/pi/domoticz
After the installation is complete, you can find the installation log in /etc/domoticz
During my plugin installation (explained later), I had some issues getting the plugin found. It seems we also need to install some Python libraries. Run the command # apt-get install python3 libpython3-dev libpython3.4-dev After the install, restart Domoticz # sudo service domoticz restart
Now we are installing the deCONZ plugin to Domoticz. Go to the plugin folder # cd /home/pi/domoticz/plugins Download the plugin # git clone https://github.com/Smanar/Domoticz-deCONZ.git Be sure the plugin has the right permissions # chmod +x Domoticz-deCONZ/plugin.py
The connection between Domoticz and deCONZ goes via the API, so first we need to generate a key. Go into the plugin folder # cd Domoticz-deCONZ Generate the key # python3 API_KEY.py 127.0.0.1 create The response will be something like this: Your new API key is : 688C0296EC
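The API_KEY.py helper wraps the standard deCONZ REST API call, so you can also request a key by hand with curl. A sketch, assuming deCONZ listens on its default port 80 on the Pi and that you have just unlocked the gateway in Phoscon (Gateway settings, "Authenticate app"); the devicetype string is an arbitrary label:

```shell
# Ask the deCONZ REST API for a new API key
# (the gateway must be unlocked first via the Phoscon app)
curl -s -X POST http://127.0.0.1:80/api \
  -d '{"devicetype": "domoticz"}'

# Expected shape of the response:
# [{"success":{"username":"<your new API key>"}}]
```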
Now we are going to add this key to Domoticz, but first we need to restart Domoticz # sudo service domoticz restart
After the restart (10 seconds), visit your Domoticz website. http://ip_of_pi:8080 or https://ip_of_pi:443
In Domoticz, go to Setup – Settings
Go to Hardware/Devices (at the bottom of the page) and be sure Accept new Hardware Devices is ON (green)
Go to Setup – Hardware Add the DeConz plugin. Add a Name and change the API KEY to the key you just generated.
After clicking on Add, you can see the newly created device at the top.
All devices which are already created are added to Domoticz!
You are now ready to add more devices to Domoticz and personalise it for yourself 🙂
Red Hat OpenShift is the platform to be on! If you want to be ready for the future, you have to be on the Red Hat OpenShift Container Platform. But how can you play with OpenShift? The first things I found on the web were Minishift and OKD. But both are only updated to (OpenShift) 3.11, and at the time of writing OpenShift 4.6 has been released. And of course we want to play with the latest versions 🙂 Luckily Red Hat has given us more options.
To try Red Hat OpenShift, you can go to openshift.com/try. Here you find 4 options:
1. On your computer
2. In your datacenter
3. In your public cloud
4. As-a-Service
The best choice for me is option 1, because a full installation of OpenShift requires a lot of resources which I do not have. I just want a minimal installation to test and play with.
When you choose On your computer, you have to log in with your Red Hat account. If you don't have one, you can create one directly. On the Red Hat website you can download CodeReady Containers (CRC), which brings a minimal OpenShift cluster up and running on your local laptop or PC. The CRC OpenShift cluster is a single node which behaves as both master and worker node. All machine-config and monitoring Operators are disabled. That is why it is for playing and testing only.
How to install? The hardware requirements for CRC are: 4 vCPUs, 9GB of memory and 35GB of hard disk. As OS you can choose at minimum Windows 10, macOS Sierra, or Linux (RHEL/CentOS 7.5 or one of the last 2 Fedora versions). In my case I installed it on my virtual server (running on Proxmox), where I chose CentOS 8.2. I installed CentOS with 4 vCPUs, 10GB of memory and 50GB of hard disk (bye bye resources).
After CentOS is installed, you also need to install the Network Manager: # su -c 'yum install NetworkManager' In my case, this was already installed during the basic setup.
Next is to download and install CRC. The link can be found on the Red Hat website: choose Linux and copy the link. Download the file to the Linux server. Please note that the file is about 2.5GB; depending on your internet speed, this can take a while. # wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz
Unpack the downloaded file: # tar -xf crc-linux-amd64.tar.xz
Move the crc binary to a directory in your PATH, or add its directory to the PATH. I chose the move. Check your PATH directories by running the command # echo $PATH
I copied the file to /usr/local/bin # sudo cp crc /usr/local/bin/
Check if the installation was successful by checking the installed version # crc version CodeReady Containers version: 1.17.0+99f5c87 OpenShift version: 4.5.14 (embedded in binary)
Start the setup of CRC. This procedure will create the ~/.crc directory if it does not already exist.
# crc setup
INFO Checking if oc binary is cached
INFO Caching oc binary
INFO Checking if podman remote binary is cached
INFO Checking if goodhosts binary is cached
INFO Caching goodhosts binary
INFO Will use root access: change ownership of /home/bjbaarssen/.crc/bin/goodhosts
INFO Will use root access: set suid for /home/bjbaarssen/.crc/bin/goodhosts
INFO Checking if CRC bundle is cached in '$HOME/.crc'
INFO Unpacking bundle from the CRC binary
INFO Checking minimum RAM requirements
INFO Checking if running as non-root
INFO Checking if Virtualization is enabled
INFO Setting up virtualization
You need to enable virtualization in BIOS
By default, nested virtualization is disabled in Proxmox. Normally you do not virtualize within your virtualization layer, because it is much slower. At this point I have no other solution, so I have to change my Proxmox settings.
Change Proxmox settings (source: https://pve.proxmox.com/wiki/Nested_Virtualization). Log in to your Proxmox shell and check your virtualization settings: # cat /sys/module/kvm_intel/parameters/nested The output is N
Change the virtualization setting to Y # echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
This setting is activated after a reload of the kvm_intel module. Because lots of other virtual servers are running on Proxmox, I have to shut down all my servers first; the module cannot be reloaded while it is in use.
After you have shut down all the servers, reload the module: # modprobe -r kvm_intel # modprobe kvm_intel
Check if the new setting is active: # cat /sys/module/kvm_intel/parameters/nested output is Y
The host settings for the CentOS VM in Proxmox also need to be changed. Go to the CentOS settings, Hardware – Processors (or CPU), and change the TYPE to host. See screenshot:
Start VM(s)
Start the setup of CRC (second try)
# crc setup
INFO Checking if oc binary is cached
INFO Checking if podman remote binary is cached
INFO Checking if goodhosts binary is cached
INFO Checking if CRC bundle is cached in '$HOME/.crc'
INFO Checking minimum RAM requirements
INFO Checking if running as non-root
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Installing libvirt service and dependencies
INFO Will use root access: install virtualization related packages
[sudo] wachtwoord voor bjbaarssen:
INFO Checking if user is part of libvirt group
INFO Adding user to libvirt group
INFO Will use root access: add user to libvirt group
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Installing crc-driver-libvirt
INFO Checking for obsolete crc-driver-libvirt
INFO Checking if libvirt 'crc' network is available
INFO Setting up libvirt 'crc' network
INFO Checking if libvirt 'crc' network is active
INFO Starting libvirt 'crc' network
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Writing Network Manager config for crc
INFO Will use root access: write NetworkManager config in /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf
INFO Will use root access: Changing permission for /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf to 420
INFO Will use root access: executing systemctl daemon-reload command
INFO Will use root access: executing systemctl reload NetworkManager
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
INFO Writing dnsmasq config for crc
INFO Will use root access: write dnsmasq configuration in /etc/NetworkManager/dnsmasq.d/crc.conf
INFO Will use root access: Changing permission for /etc/NetworkManager/dnsmasq.d/crc.conf to 420
INFO Will use root access: executing systemctl daemon-reload command
INFO Will use root access: executing systemctl reload NetworkManager
Setup is complete, you can now run 'crc start' to start the OpenShift cluster
Start the OpenShift cluster
# crc start
INFO Checking if oc binary is cached
INFO Checking if podman remote binary is cached
INFO Checking if goodhosts binary is cached
INFO Checking minimum RAM requirements
INFO Checking if running as non-root
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
? Image pull secret [? for help]
Copy the pull secret from your Red Hat account (see screenshot), paste it on the CLI and press Enter
INFO Extracting bundle: crc_libvirt_4.5.14.crcbundle ...
crc.qcow2: 9.97 GiB [—] 100.00%
INFO Checking size of the disk image /home/bjbaarssen/.crc/cache/crc_libvirt_4.5.14/...
INFO Creating CodeReady Containers VM for OpenShift 4.5.14...
INFO CodeReady Containers VM is running
INFO Generating new SSH Key pair ...
INFO Copying kubeconfig file to instance dir ...
INFO Starting network time synchronization in CodeReady Containers VM
INFO Verifying validity of the cluster certificates ...
INFO Check internal and public DNS query ...
INFO Check DNS query from host ...
INFO Starting OpenShift kubelet service
INFO Configuring cluster for first start
INFO Adding user's pull secret ...
INFO Updating cluster ID ...
INFO Starting OpenShift cluster
INFO Updating kubeconfig
WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation
Started the OpenShift cluster.
To access the cluster, first set up your environment by following 'crc oc-env' instructions. Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p dpDFV-xamBW-kKAk3-Fi6Lg https://api.crc.testing:6443'.
Check the status of your CRC cluster
# crc status
CRC VM: Running
OpenShift: Running (v4.5.14)
Disk Usage: 13.8GB of 32.72GB (Inside the CRC VM)
Cache Usage: 13.04GB
Cache Directory: /home/bjbaarssen/.crc/cache
To open the OpenShift web console, run the command # crc console This will open your default web browser and start the OpenShift Web Console. If something goes wrong, you get this error: Opening the OpenShift Web Console in the default browser… [bjbaarssen@192 ~]$ Error: no DISPLAY environment variable specified
You can also request just the URL with # crc console --url https://console-openshift-console.apps-crc.testing
If you forgot the login credentials, you can run # crc console --credentials To login as a regular user, run 'oc login -u developer -p developer https://api.crc.testing:6443'. To login as an admin, run 'oc login -u kubeadmin -p dpDFV-xamBW-kKAk3-Fi6Lg https://api.crc.testing:6443'
When you have successfully run the command and logged into the Web Console, you can start with your first project. See screenshot.
From this point we can open the OpenShift Web Console only on the Virtual Linux machine, but I want to open it on all my machines in my network. Let’s see how we can do that.
Setting up CodeReady Containers on a remote server
Be sure the cluster is running; check with # crc status If it is down/stopped, run # crc start
Install the haproxy package and other utilities # sudo dnf install haproxy policycoreutils-python-utils jq
Modify the firewall to allow communication with the cluster:
# sudo systemctl start firewalld
# sudo firewall-cmd --add-port=80/tcp --permanent
# sudo firewall-cmd --add-port=6443/tcp --permanent
# sudo firewall-cmd --add-port=443/tcp --permanent
# sudo systemctl restart firewalld
For SELinux, allow listening to TCP port 6443 # sudo semanage port -a -t http_port_t -p tcp 6443
Create a backup of the default haproxy configuration in case you mess it up: # sudo cp /etc/haproxy/haproxy.cfg{,.bak}
Configure haproxy for use with the cluster:
# export CRC_IP=$(crc ip)
# sudo nano /etc/haproxy/haproxy.cfg
Add the following to the cfg file:

global
    debug

defaults
    log global
    mode http
    timeout connect 5000
    timeout client 5000
    timeout server 5000

frontend api
    bind 0.0.0.0:6443
    option tcplog
    mode tcp
    default_backend api

backend api
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 $CRC_IP:6443 check
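One caveat: haproxy started via systemd will not see the CRC_IP variable you exported in your shell, so the address has to end up literally in the file. A sketch that writes the whole config with the IP already substituted (the heredoc expands ${CRC_IP} before the file is written):

```shell
# Write /etc/haproxy/haproxy.cfg with the actual CRC VM IP filled in
CRC_IP=$(crc ip)
sudo tee /etc/haproxy/haproxy.cfg >/dev/null <<EOF
global
    debug

defaults
    log global
    mode http
    timeout connect 5000
    timeout client 5000
    timeout server 5000

frontend api
    bind 0.0.0.0:6443
    option tcplog
    mode tcp
    default_backend api

backend api
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 ${CRC_IP}:6443 check
EOF
```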
Start the haproxy service: # sudo systemctl start haproxy
To open the OpenShift Web Console on the other clients in your network, add the following line to their local hosts file. In my case, 192.168.1.64 is the IP address of the server where CRC is running. # sudo nano /etc/hosts
192.168.1.64 api.crc.testing oauth-openshift.apps-crc.testing console-openshift-console.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing
You probably won't see any new blogs online soon, because I am playing in the Red Hat OpenShift Container Platform 🙂
Because this will eat a lot of your resources, you can easily shut down your CRC cluster temporarily and start it again when you need it. See below the most common commands. # crc stop # crc status # crc start # crc console
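The stop/start cycle above lends itself to a tiny wrapper function in your shell profile; a sketch (the `ocp` name is my own invention):

```shell
# Convenience wrapper around the most common crc commands:
#   ocp up / ocp down / ocp ui / ocp   (anything else prints the status)
ocp() {
  case "$1" in
    up)   crc start ;;
    down) crc stop ;;
    ui)   crc console ;;
    *)    crc status ;;
  esac
}
```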
In my previous blog you can read about the server I built. This blog describes the OS layer I will install on that build. I chose Proxmox Virtual Environment, or in short Proxmox VE. Proxmox VE is a complete open-source platform for enterprise virtualization. With the built-in web interface you can easily manage VMs and containers, software-defined storage and networking, high-availability clustering, and multiple out-of-the-box tools in a single solution.
I have to say that Proxmox has one of the better websites describing how to install and use their products. When you enter the get-started section of their website, there are 3 steps defined. See the full installation guide for more information.
Step 1: Download ISO image An ISO image file is an image of a disk. Download the Proxmox VE ISO, then copy it to a USB flash drive or CD/DVD in order to use it. Copying an ISO to a USB stick is not an everyday task for me, so I used the steps from the full installation guide, which is very detailed. Below I describe how to do it on macOS, but Linux and Windows instructions are also available. More info here. I downloaded version 6.2-1 at the time of writing this blog.
First we need to convert the .iso file to .img using the convert option of hdiutil. Go to the directory where you downloaded the ISO and run this command: # hdiutil convert -format UDRW -o proxmox-ve_6.2-1.dmg proxmox-ve_6.2-1.iso
The new dmg file is about 900MB, but it is probably compressed and will be uncompressed when we write it to the USB drive. My advice is to use a USB drive bigger than 1GB.
To get the current list of devices, run the command (see the screenshot below) # diskutil list
Write down the USB drive; in my case it is /dev/disk3. The USB drive needs to be unmounted before you can write an image to it. Unmount the disk with the command # diskutil unmountDisk /dev/disk3
Write the image to the USB drive with the command # sudo dd if=proxmox-ve_6.2-1.dmg of=/dev/rdisk3 bs=1m * We use rdiskX instead of diskX because it increases the write speed. When this is finished, the USB drive is ready to use. See below the output of the 3 commands.
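Put together, the three macOS steps look like this; a sketch assuming the same file names and that your USB stick really is disk3. Double-check with diskutil list before running dd, because a wrong disk number will overwrite that disk:

```shell
# 1. Convert the ISO to a raw-writable image
hdiutil convert -format UDRW -o proxmox-ve_6.2-1.dmg proxmox-ve_6.2-1.iso

# 2. Unmount the USB stick (verify the disk number first with: diskutil list)
diskutil unmountDisk /dev/disk3

# 3. Write the image; rdisk3 is the raw device node, which writes faster than disk3
sudo dd if=proxmox-ve_6.2-1.dmg of=/dev/rdisk3 bs=1m
```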
Step 2: Boot from USB Connect the USB drive to the server and make sure that booting from USB is enabled (check your server's BIOS settings). In my case I have to press F11 after powering on my server. Then follow the steps in the installation wizard. The full description can be found here.
Proxmox VE is based on Debian. This is why the install disk image includes a complete Debian system (Debian 10 Buster for Proxmox VE version 6.x) as well as all necessary Proxmox VE packages.
When you have booted successfully from the USB drive, you get a welcome screen. Choose Install Proxmox VE
The boot image starts and the End User License Agreement (EULA) is displayed. Click on I agree
Choose your target hard disk; I chose my SSD /dev/sda (465GB). There is also an Options button where you can change your filesystem and HDD size. For now I leave everything at the defaults. Click on Next
At Location and Time Zone selection, I chose Country: Netherlands. The time zone Europe/Amsterdam is adjusted automatically. Choose your Keyboard Layout: U.S. English. Click on Next
Create your administration password and e-mail address and click on Next. This password belongs to the root user.
Choose your management interface. I have only 1 network port, so I have only 1 option. Choose your hostname, IP address, netmask, gateway and DNS server. There is no automatic (DHCP) option available, but if you are already connected to a network with DHCP configured, everything is pre-filled for you with a DHCP address. Adjust what you want and click on Next
The next page shows a summary of the settings you just filled in. Click on Previous if you want to change anything, or click on Install to start the installation process.
When the installation process is finished, you receive the message Installation Successful! Click on Reboot and remove the USB stick you used for the installation.
On another computer or laptop, go to https://Proxmox_IP_address:8006. If you leave out https://, you may receive an error.
Step 3: Configure via the GUI You can do everything from the web interface. Just browse to it and log in with root and the password you created during the installation process.
Please note that you receive a No valid subscription message because you are using the free version without support. If you want support for your server, please go to this address.
After you click on OK, you can access your Proxmox Virtual Environment.
It is very easy to create a VM or CT (container) from this point. I will write a blog post about that later.
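Besides the GUI, VMs and containers can also be created from a shell on the Proxmox host. This is only a hedged sketch of the two CLI tools (qm for VMs, pct for containers); the VM IDs, the local-lvm storage name and the container template are example values, not taken from this install. It is written as a dry run that prints the commands, since qm and pct only exist on a Proxmox host.

```shell
#!/bin/sh
# Dry-run sketch: print the Proxmox commands instead of executing them.
# All IDs, storage names and template names below are examples.
run() { echo "$@"; }   # replace the body with "$@" to really execute

# A small VM: 2 GB RAM, a 32 GB disk on local-lvm, bridged to vmbr0
run qm create 100 --name test-vm --memory 2048 \
    --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32

# A container from a downloaded template, with DHCP networking
run pct create 101 local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz \
    --hostname test-ct --memory 1024 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
```

The dry-run wrapper is a cheap way to sanity-check commands like these before letting them touch a real host.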
You can buy a server, or you can build one yourself. I chose the latter and will write up my findings here. Before you just start picking parts, think about what you actually want. I do not need a big server, just some virtual machines for NAS, testing, AI and other projects. Therefore I do not need the latest, best and fastest technology. I would like to… but it would cost me more. Because it only needs to be a small server, I will also try to build a physically small one.
On Tweakers.net (a great Dutch technology website) I found a CPU, the i5-6600K: 4 cores, 3.50 GHz, 6 MB cache (more info). A great one for my server. A good friend of mine still had an old motherboard, the ASRock H270M-ITX/ac. The most important features of this board are its two network ports. And what I really like is its small form factor: it is a mini-ITX board, which means all the basic parts (sound, graphics) are on the board and it uses less energy.
The maximum memory for this board is 32 GB, so I ordered 2 × 16 GB Team Group Elite TED416G2400C1601 modules, as advised by ASRock. You can find this kind of information on their website.
Storage is also very important. You can build a fast server, but if your storage is slow, everything will be slow; it becomes the bottleneck of your server. Because this board has an M.2 slot, I was able to get the best storage available for this build: an NVMe SSD, the WD Black SN750 500 GB (with heatsink). Read 3,470 MB/s, write 2,600 MB/s. Whoop whoop!
Because I want to keep this server small, I tried to find a solid but small case. Keep in mind that a server gets warm, so cooling is very important. Therefore I did some research, because I do not build small servers very often. On YouTube I found some really nice builds, and some of them used the Inter-Tech ITX A60 ITX tower (black). This is a nice small ITX case which can also fit two extra hard drives. Because it is a small case, power is supplied by a picoPSU: in short, a tiny 60 W power supply for compact PC builds, fed by a single external 12 V source.
As mentioned before, you need cooling! Good cooling! But you don't want to hear it (if possible), and don't forget that it has to fit in a mini-ITX case. So: small and quiet. I found the Noctua NH-L9i CPU cooler.
Fun fun fun 🙂 it all fits
After some time I finished the build. But I have learned from the past, so before putting everything in the case, I test first. I plug in the power and everything starts running and spinning. I am happy so far, and continue building the parts into the case.
It fits 🙂
Let's test again before I tidy up all the wires and close it. When I power it on, it doesn't react any more… The fan just moves a little bit, but that's it…
I take it out of the case and start testing: – remove 1 DIMM / switch slots – remove 2 DIMMs – re-attach CPU/cooler – remove SSD – attach another picoPSU – attach another power supply – attach a GPU card. Nothing works; the motherboard does not boot any more. After a lot of testing, googling and asking friends, we know for sure that this board is history… 🙁
But we don't give up, so I went back to Tweakers.net and asked if anybody had the same motherboard for me. Within a few hours I received a message from someone who didn't have the same board, but a similar one: the ASRock H110M-ITX/ac. It was unused, so effectively a new board. Everything fits on it except the NVMe SSD, so I swapped the NVMe SSD for a SATA SSD. It is not as fast, but it is the fastest I could get in this build. I chose the Crucial MX500, a 500 GB SATA SSD.
I put everything on the board and turned on the power. The fan is spinning, nothing crazy is happening, so I am happy. I connect a monitor to it and… nothing, no video… Luckily I had a PCI video card available and added it to the build. Yes, I have video and everything works. BUT the video card does not fit in this case, so I want this build working without it. I checked the ASRock site and found that this CPU should work without a video card. To be sure it was not the motherboard that had an issue, I tried a different CPU, and the video worked… Hmm, strange… Is my CPU broken? Then I tried flashing different firmware versions onto the motherboard. The crazy thing was that one of the oldest firmware versions worked without the video card. So my CPU was not broken; it looks like something in the firmware. I started a discussion with ASRock, but in their lab everything was working. In the end my build works on the older firmware, so I keep it that way.
In the end we have a great server build. What did I learn? If you are building a PC or server from older parts, it is a lot harder to get everything working. It is probably cheaper to buy everything new.