Fedora Hub

July 24, 2017

Fedora Magazine

Easy backups with Déjà Dup

Welcome to part 3 in the series on taking smart backups with duplicity. This article shows how to use Déjà Dup, a GTK+ program, to quickly back up your personal files.

Déjà Dup is a graphical frontend to Duplicity. Déjà Dup handles the GPG encryption, scheduling and file inclusion for you, presenting a clean and simple backup tool. From the project’s mission:

Déjà Dup aims squarely at the casual user. It is not designed for system administrators, but rather the less technically savvy.

It is also not a goal to support every desktop environment under the sun. A few popular environments should be supported, subject to sufficient developer-power to offer tight integration for each.

Déjà Dup integrates very well with GNOME, making it an excellent choice for a quick backup solution for Fedora Workstation.

Installing Déjà Dup

Déjà Dup can be found in GNOME Software’s Utilities category.


Alternatively, Déjà Dup can be installed with dnf:

dnf install deja-dup

Once installed, launch Déjà Dup from the Overview.

Déjà Dup presents 5 sections to configure your backup.

  • Overview
  • Folders to save
  • Folders to ignore
  • Storage location
  • Scheduling

Folders to save

Similar to selecting directories for inclusion using duplicity's --include option, Déjà Dup stores directories to include in a list. The default includes your home directory. Add any additional folders you wish to back up to this list.

Perhaps your entire home directory is too much to back up. Or parts of your home directory are backed up using version control. In that case, remove “Home” from the list and add just the folders you want to back up. For example, ~/Documents and ~/Projects.

Folders to ignore

These folders are going to be excluded, similar to the --exclude option. Starting with the defaults, add any other folders you wish to exclude. One such directory might be ~/.cache. Consider whether to exclude this carefully. GNOME Boxes stores VM disks inside ~/.cache. If those virtual disks contain data that needs to be backed up, you might want to include ~/.cache after all.

Launch Files, and from the Action menu turn on the Show Hidden Files option. Now in Déjà Dup, click the “plus” button, browse into your home directory, and select the .cache folder. The list should then include the defaults plus .cache.

Storage location

The default storage location is a local folder. This doesn’t meet the “Remote” criterion, so be sure to select an external disk or Internet service such as Amazon S3 or Rackspace Cloud Files. Select Amazon S3 and paste in the access key ID you saved in part 1. The Folder field allows you to customize the bucket name (it defaults to $HOSTNAME).

Scheduling

This section gives you options around frequency and retention. Switching Automatic backup on will immediately start the deja-dup-monitor program. This program runs inside a desktop session and determines when to run backups. It doesn’t use cron, systemd or any other system scheduler.
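If you want to confirm the monitor is running in your desktop session, a quick check from a terminal (using the process name mentioned above) looks like this:

$ ps -C deja-dup-monitor -o pid,cmd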

Back up

The first time the Back Up utility runs, it prompts you to configure the backup location. For Amazon S3, provide the secret access key you saved in part 1. If you check Remember secret access key, Déjà Dup saves the access key into the GNOME keyring.

Next, you must create a password to encrypt the backup. Unlike the specified GPG keys used by duply, Déjà Dup uses a symmetric cipher to encrypt / decrypt the backup volumes. Be sure to follow good password practices when picking the encryption password. If you check Remember password, your password is saved into the GNOME keyring.

Click Continue and Déjà Dup does the rest. Depending on the frequency you selected, this “Backing up…” window will appear whenever a backup is taking place.

Conclusion

Déjà Dup deviates from the backup profiles created in part 1 and part 2 in a couple of specific areas. If you need to encrypt backups using a common GPG key, or need to create multiple backup profiles that run on different schedules, duplicity or duply might be a better choice. Nevertheless, Déjà Dup does an excellent job of making data backup easy and hassle-free.

by Link Dupont at July 24, 2017 08:00 AM


July 21, 2017

Fedora Magazine

Changing Fedora kernel configuration options

Fedora aims to provide a kernel with as many configuration options enabled as possible. Sometimes users may want to change those options for testing or for a feature Fedora doesn’t support. This is a brief guide to how kernel configurations are generated and how to best make changes for a custom kernel.

Finding the configuration files

Fedora generates kernel configurations using a hierarchy of files. Kernel options common to all architectures and configurations are listed in individual files under baseconfig. Subdirectories under baseconfig can override the settings as needed for architectures. As an example:

$ find baseconfig -name CONFIG_SPI
baseconfig/x86/CONFIG_SPI
baseconfig/CONFIG_SPI
baseconfig/arm/CONFIG_SPI
$ cat baseconfig/CONFIG_SPI
# CONFIG_SPI is not set
$ cat baseconfig/x86/CONFIG_SPI
CONFIG_SPI=y
$ cat baseconfig/arm/CONFIG_SPI
CONFIG_SPI=y

As shown above, CONFIG_SPI is initially turned off for all architectures but x86 and arm enable it.

The directory debugconfig contains options that get enabled in kernel debug builds. The file config_generation lists the order in which directories are combined and overridden to make configs. After you change a setting in one of the individual files, you must run the script build_configs.sh to combine the individual files into configuration files. These exist in kernel-$flavor.config.
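As a rough sketch, assuming you are working inside the configs directory of the Fedora kernel package sources, flipping one option and regenerating the combined configuration files might look like this:

$ echo "CONFIG_SPI=y" > baseconfig/arm/CONFIG_SPI
$ ./build_configs.sh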

When rebuilding a custom kernel, the easiest way to change kernel configuration options is to put them in kernel-local. This file is merged automatically into the full configuration when the kernel is built. You can set options to be disabled (# CONFIG_FOO is not set), enabled (CONFIG_FOO=y), or modular (CONFIG_FOO=m) in kernel-local.
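For example, a small kernel-local might look like the following (CONFIG_FOO, CONFIG_BAR and CONFIG_BAZ are placeholders, not real kernel options):

CONFIG_FOO=y
CONFIG_BAR=m
# CONFIG_BAZ is not set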

Catching and fixing errors in your configuration files

The Fedora kernel build process does some basic checks on configuration files to help catch errors. By default, the Fedora kernel requires that all kernel options are explicitly set. One common error happens when enabling one kernel option exposes another option that needs to be set. This produces errors related to .newoptions, for example:

+ Arch=x86_64
+ grep -E '^CONFIG_'
+ make ARCH=x86_64 listnewconfig
+ '[' -s .newoptions ']'
+ cat .newoptions
CONFIG_R8188EU
+ exit 1
error: Bad exit status from /var/tmp/rpm-tmp.6BXufs (%prep)

RPM build errors:
 Bad exit status from /var/tmp/rpm-tmp.6BXufs (%prep)

To fix this error, explicitly set the options (CONFIG_R8188EU in this case) in kernel-local as well.
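For instance, appending the newly exposed option to kernel-local (built as a module here, though =y or “is not set” also work) resolves the error:

$ echo "CONFIG_R8188EU=m" >> kernel-local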

Another common mistake is setting an option incorrectly. The kernel Kconfig dependency checker silently changes configuration options that are not what it expects. This commonly happens when one option selects another option, or has a dependency that isn’t satisfied. Fedora attempts a basic sanity check that the options specified in tree match what the kernel configuration engine expects. This may produce errors related to mismatches:

+ ./check_configs.awk configs/kernel-4.13.0-i686-PAE.config temp-kernel-4.13.0-i686-PAE.config
+ '[' -s .mismatches ']'
+ echo 'Error: Mismatches found in configuration files'
Error: Mismatches found in configuration files
+ cat .mismatches
Found CONFIG_I2C_DESIGNWARE_CORE=y  after generation, had CONFIG_I2C_DESIGNWARE_CORE=m in Fedora tree
+ exit 1

In this example, the Fedora configuration specified CONFIG_I2C_DESIGNWARE_CORE=m, but the kernel configuration engine set it to CONFIG_I2C_DESIGNWARE_CORE=y. The kernel configuration engine is ultimately what gets used, so the solution is either to change the option to what the kernel expects (CONFIG_I2C_DESIGNWARE_CORE=y in this case) or to further investigate what is causing the unexpected configuration setting.

Once the kernel configuration options are set to your liking, you can follow standard kernel build procedures to build your custom kernel.

by Laura Abbott at July 21, 2017 08:00 AM


July 19, 2017

Fedora Magazine

Use a DoD smartcard to access CAC enabled websites

By now you’ve likely heard the benefits of two factor authentication. Enabling multi-factor authentication can increase the security of accounts you use to access various social media websites like Twitter, Facebook, or even your Google Account. This post goes a step further: authenticating with a hardware smartcard.

The U.S. Armed Services span millions of military and civilian employees. If you’re a member of these services, you’ve probably been issued a DoD CAC smartcard to access various websites. With the smartcard come compatibility issues, specific instructions tailored to each operating system, and a host of headaches. It’s difficult to find reliable instructions for accessing military websites from Linux operating systems. This article shows you how to set up your Fedora system to log in to DoD CAC enabled websites.

Installing and configuring OpenSC

First, install the opensc package:

sudo dnf install -y opensc

This package provides the necessary middleware to interface with the DoD Smartcard. It also includes tools to test and debug the functionality of your smartcard.
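For example, with your CAC inserted, you can list detected readers and the certificates stored on the card using the bundled OpenSC utilities:

$ opensc-tool --list-readers
$ pkcs15-tool --list-certificates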

With that installed, next set it up under the Security Devices section of Firefox. Open the menu in Firefox, and navigate to Preferences -> Advanced.

In the Certificates tab, select Security Devices. From this page select the Load button on the right side of the page. Now set a module name (“OpenSC” will work fine) and use this screen to browse to the location of the shared library you need to use.

Browse to the /lib64/pkcs11/ directory, select opensc-pkcs11.so, and click Open. If you’re currently a “dual status” employee, you may wish to select the onepin-opensc-pkcs11.so shared library. If you have no idea what “dual status” means, carry on and simply select the former library.

Click OK to finish the process.

Now you can navigate to your chosen DoD CAC enabled site and login. You’ll be prompted to enter the PIN for your CAC, then select a certificate to use. If you’re logging into a normal DoD website, select the Authentication certificate. If you’re logging into a webmail service such as https://web.mail.mil, select the Digital Signing certificate. NOTE: “Dual status” personnel should use the Authentication certificate.

by Khris Byrd at July 19, 2017 04:41 PM


July 17, 2017

Fedora Magazine

Enhancing smart backups with Duply

Welcome to Part 2 in a series on taking smart backups with duplicity. This article builds on the basics of duplicity with a tool called duply.

Duply is a frontend for duplicity that integrates smoothly with scheduling tools like cron or systemd. Its headline features are:

  • keeps recurring settings in profiles per backup job
  • automates import/export of keys between profile and keyring
  • enables batch operations, e.g. backup_verify_purge
  • runs pre/post scripts
  • precondition checking for flawless duplicity operation

The general form for running duply is:

duply PROFILE COMMAND [OPTIONS]

Installation

duply is available in the Fedora repositories. To install it, use the sudo command with dnf:

sudo dnf install duply

Create a profile

duply stores configuration settings for a backup job in a profile. To create a profile, use the create command.

$ duply documents create

Congratulations. You just created the profile 'documents'.
The initial config file has been created as 
'/home/link/.duply/documents/conf'.
You should now adjust this config file to your needs.

IMPORTANT:
  Copy the _whole_ profile folder after the first backup to a safe place.
  It contains everything needed to restore your backups. You will need 
  it if you have to restore the backup from another system (e.g. after a 
  system crash). Keep access to these files restricted as they contain 
  _all_ informations (gpg data, ftp data) to access and modify your backups.

  Repeat this step after _all_ configuration changes. Some configuration 
  options are crucial for restoration.

The newly created profile includes two files: conf and exclude. The main file, conf, contains comments for variables necessary to run duply. Read over the comments for any settings unique to your backup environment. The important ones are SOURCE, TARGET, GPG_KEY and GPG_PW.

To convert the single invocation of duplicity from the first article, split it into 4 sections:

duplicity --name duply_documents --encrypt-sign-key **************** --include $HOME/Documents --exclude '**'  $HOME   s3+http://**********-backup-docs
          [                         OPTIONS                        ] [                 EXCLUDES             ] [SOURCE] [             TARGET           ]

Comment out the lines starting with TARGET, SOURCE, GPG_KEY and GPG_PW by adding # in front of each line. Add the following lines to conf:

SOURCE=/home/link
TARGET=s3+http://**********-backup-docs
GPG_KEY=****************
GPG_PW=************
AWS_ACCESS_KEY_ID=********************
AWS_SECRET_ACCESS_KEY=****************************************

The second file, exclude, stores file paths to include/exclude from the backup. In this case, add the following to $HOME/.duply/documents/exclude.

+ /home/link/Documents
- **

Running duply

Run a backup with the backup command. An example run appears below.

$ duply documents backup
Start duply v2.0.2, time is 2017-07-04 17:14:03.
Using profile '/home/link/.duply/documents'.
Using installed duplicity version 0.7.13.1, python 2.7.13, gpg 1.4.21 (Home: ~/.gnupg), awk 'GNU Awk 4.1.4, API: 1.1 (GNU MPFR 3.1.5, GNU MP 6.1.2)', grep 'grep (GNU grep) 3.0', bash '4.4.12(1)-release (x86_64-redhat-linux-gnu)'.
Autoset found secret key of first GPG_KEY entry 'XXXXXXXXXXXXXXXX' for signing.
Checking TEMP_DIR '/tmp' is a folder and writable (OK)
Test - Encrypt to 'XXXXXXXXXXXXXXXX' & Sign with 'XXXXXXXXXXXXXXXX' (OK)
Test - Decrypt (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.15349.1499213643_*'(OK)
Backup PUB key 'XXXXXXXXXXXXXXXX' to profile. (OK)
Write file 'gpgkey.XXXXXXXXXXXXXXXX.pub.asc' (OK)
Backup SEC key 'XXXXXXXXXXXXXXXX' to profile. (OK)
Write file 'gpgkey.XXXXXXXXXXXXXXXX.sec.asc' (OK)

INFO:

duply exported new keys to your profile.
You should backup your changed profile folder now and store it in a safe place.


--- Start running command PRE at 17:14:04.115 ---
Skipping n/a script '/home/link/.duply/documents/pre'.
--- Finished state OK at 17:14:04.129 - Runtime 00:00:00.014 ---

--- Start running command BKP at 17:14:04.146 ---
Reading globbing filelist /home/link/.duply/documents/exclude
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Tue Jul  4 14:16:00 2017
Reuse configured PASSPHRASE as SIGN_PASSPHRASE
--------------[ Backup Statistics ]--------------
StartTime 1499213646.13 (Tue Jul  4 17:14:06 2017)
EndTime 1499213646.40 (Tue Jul  4 17:14:06 2017)
ElapsedTime 0.27 (0.27 seconds)
SourceFiles 1205
SourceFileSize 817997271 (780 MB)
NewFiles 1
NewFileSize 4096 (4.00 KB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 1
RawDeltaSize 0 (0 bytes)
TotalDestinationSizeChange 787 (787 bytes)
Errors 0
-------------------------------------------------

--- Finished state OK at 17:14:07.789 - Runtime 00:00:03.643 ---

--- Start running command POST at 17:14:07.806 ---
Skipping n/a script '/home/link/.duply/documents/post'.
--- Finished state OK at 17:14:07.823 - Runtime 00:00:00.016 ---

Remember duply is a wrapper around duplicity. Because you specified --name during the backup creation in part 1, duply picked up the local cache for the documents profile. Now duply runs an incremental backup on top of the full one created last week.
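To see the resulting chain of full and incremental backups, use the status command:

$ duply documents status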

Restoring a file

duply offers two commands for restoration. Restore the entire backup with the restore command.

$ duply documents restore ~/Restore
Start duply v2.0.2, time is 2017-07-06 22:06:23.
Using profile '/home/link/.duply/documents'.
Using installed duplicity version 0.7.13.1, python 2.7.13, gpg 1.4.21 (Home: ~/.gnupg), awk 'GNU Awk 4.1.4, API: 1.1 (GNU MPFR 3.1.5, GNU MP 6.1.2)', grep 'grep (GNU grep) 3.0', bash '4.4.12(1)-release (x86_64-redhat-linux-gnu)'.
Autoset found secret key of first GPG_KEY entry 'XXXXXXXXXXXXXXXX' for signing.
Checking TEMP_DIR '/tmp' is a folder and writable (OK)
Test - Encrypt to 'XXXXXXXXXXXXXXXX' & Sign with 'XXXXXXXXXXXXXXXX' (OK)
Test - Decrypt (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.12704.1499403983_*'(OK)

--- Start running command RESTORE at 22:06:24.368 ---
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 6 21:46:01 2017
--- Finished state OK at 22:06:44.216 - Runtime 00:00:19.848 ---

Restore a single file or directory with the fetch command.

$ duply documents fetch Documents/post_install ~/Restore
Start duply v2.0.2, time is 2017-07-06 22:11:11.
Using profile '/home/link/.duply/documents'.
Using installed duplicity version 0.7.13.1, python 2.7.13, gpg 1.4.21 (Home: ~/.gnupg), awk 'GNU Awk 4.1.4, API: 1.1 (GNU MPFR 3.1.5, GNU MP 6.1.2)', grep 'grep (GNU grep) 3.0', bash '4.4.12(1)-release (x86_64-redhat-linux-gnu)'.
Autoset found secret key of first GPG_KEY entry 'XXXXXXXXXXXXXXXX' for signing.
Checking TEMP_DIR '/tmp' is a folder and writable (OK)
Test - Encrypt to 'XXXXXXXXXXXXXXXX' & Sign with 'XXXXXXXXXXXXXXXX' (OK)
Test - Decrypt (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.14438.1499404312_*'(OK

--- Start running command FETCH at 22:11:52.517 ---
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 6 21:46:01 2017
--- Finished state OK at 22:12:44.447 - Runtime 00:00:51.929 ---

duply includes quite a few commands. Read the documentation for a full list of commands.
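For example, verify compares the remote backup against your local files, and purge (with --force) deletes backup sets older than the MAX_AGE setting in your profile:

$ duply documents verify
$ duply documents purge --force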

Other features

Timer runs become easier with duply than with duplicity. The systemd user session lets you create automated backups for your data. To do this, modify ~/.config/systemd/user/backup.service, replacing ExecStart=/path/to/backup.sh with ExecStart=duply documents backup. The wrapper script backup.sh is no longer required.
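A minimal sketch of the resulting service unit might look like this (an absolute path is used for ExecStart since older systemd versions require one; adjust if duply is installed elsewhere):

[Service]
Type=oneshot
ExecStart=/usr/bin/duply documents backup

[Unit]
Description=Run duply backup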

duply also makes a great tool for backing up a server. You can create system-wide profiles inside /etc/duply to back up any part of a server. With a combination of system and user profiles, you’ll spend less time worrying about your data being backed up.

by Link Dupont at July 17, 2017 08:00 AM


July 14, 2017

Fedora Magazine

What’s new in the Anaconda Installer for Fedora 26?

Fedora 26 is available now, providing a wide range of improvements across the entire operating system. Anaconda — the Fedora installer — has many new features and improvements implemented for Fedora 26. The most visible addition is the introduction of Blivet GUI, providing power users an alternate way to configure partitioning. Additionally, there are improvements to automated installation with kickstart, a range of networking improvements, better status reporting when your install is under way, and much more.

Enhanced storage configuration with Blivet GUI

The main highlight of Anaconda in Fedora 26 is the integration of the Blivet GUI storage configuration tool into the installation environment. Previously, there were two options for storage configuration — automatic partitioning and manual partitioning. Automatic partitioning is mostly useful for really simple configurations, like installing Fedora on an empty hard drive or alongside another existing operating system. The existing manual partitioning tool provides more control over partition layouts and size, enabling more complicated setups.

The previously-available manual partitioning tool is quite unique. Instead of creating all the storage components manually, the user just specifies future mountpoints and their properties. For example, you simply create two “mountpoints” for /home and / (root), specify properties like encryption or RAID and Anaconda properly configures all the necessary components below. This top-down model is really powerful and easy to use but might be too simple for some complicated storage setups.

This is where Blivet GUI can help — it is a storage configuration tool that works in the standard way. If you want LVM storage on top of RAID, you need to create it manually from the building blocks, from the bottom up. With a good knowledge of custom partitioning layouts, complicated custom storage setups are easily created with Blivet GUI.

Using Blivet GUI in Anaconda

Blivet GUI has been available from Fedora repositories as a standalone desktop application since Fedora 21 and now comes also to Anaconda as a third option for storage configuration. Simply choose Advanced Custom (Blivet-GUI) from the Installation Destination window in Anaconda.

Installation Destination window in Anaconda

Blivet GUI is fully integrated into the Anaconda installation workflow. Only the disks selected in the Installation Destination window appear in Blivet GUI. Changes remain unwritten to the disks until you leave the window and choose Begin Installation. Additionally, you can always go back and use one of the other partitioning methods. However, Blivet GUI discards changes if you switch to a different partitioning method.

Storage configuration using Blivet GUI in Anaconda

Adding a new device using Blivet GUI in Anaconda

Automated install (kickstart) improvements

Kickstart is the configuration file format for automation of the installation process. A Kickstart file can configure all options available in the graphical and text interfaces and much more. View the Kickstart documentation for more information about Kickstart and how to use it.

Support for --nohome, --noswap and --noboot options in auto partitioning

When you don’t want to specify your partitions, you can let anaconda do that for you with the autopart command. It automatically creates a root partition, a swap partition and a boot partition. With Fedora Workstation, the installer creates a /home partition on large enough drives. To make the auto partitioning more flexible, anaconda now supports the --nohome, --noswap and --noboot options that disable creation of the given partition.
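For example, a kickstart that lets Anaconda partition automatically but skips the separate /home and swap partitions could contain a line like this (a sketch; --type=lvm is just one possible scheme):

autopart --type=lvm --nohome --noswap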

Strict validation of the kickstart file with inst.ksstrict

It is not uncommon for sysadmins to have complicated kickstart files, and sometimes kickstart files have errors. At the beginning of the installation, anaconda checks the kickstart file and produces errors and warnings. Errors result in termination of the installation. The log records warnings, and the installation continues.

To ensure a kickstart file doesn’t produce any warnings, enable the new strict validation with the boot option inst.ksstrict. This treats warnings in kickstart in the same way as errors.
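For example, a boot command line that fetches a kickstart file and enables strict validation might look like this (the URL is a placeholder):

inst.ks=http://example.com/fedora.ks inst.ksstrict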

Snapshot support

Sometimes it is helpful to save an old installation or have a backup of a freshly installed system for recovery. For these situations the snapshot kickstart command is now available. View the pykickstart documentation for full usage instructions for this new command. This feature is currently supported on LVM thin pools only. To request support for other partition types, please file an RFE bug on Bugzilla.
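A sketch of what such a command might look like follows (the volume group and logical volume names are placeholders; check the pykickstart documentation for the exact syntax):

snapshot fedora/root --name=clean-install --when=post-install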

Networking improvements

Networking is a critical part of Anaconda, as many installations are partially or fully network based. Anaconda also supports installation to network-attached storage devices such as iSCSI, FCoE and multipath. For this reason, Anaconda needs to support complex networking setups, not only to start the installation but also to correctly set up networking for the installed system.

For this Fedora cycle we have mostly bug fixes, adaptations to the NetworkManager rebase, enhancements to the network kickstart test suite to discover issues caused by NetworkManager changes, and some general changes in the components we use. We also added support for several, mostly enterprise-driven, features:

  • Support for IPoIB (IP over InfiniBand) devices in the TUI.
  • Support for setting up a bridge device at an early stage of installation (e.g. to fetch a kickstart file).
  • New inst.waitfornet boot option for waiting for connectivity at a later stage of installation, for cases where the default wait for DHCP configuration is not sufficient due to a special network environment (DHCP server) setup (see the sketch after this list).
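For the last of these, a sketch of the boot option (the timeout value, in seconds, is just an example) might be:

inst.waitfornet=60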

Other improvements

Anaconda and Pykickstart documentation on Read the Docs

The Anaconda and Pykickstart documentation have a new home on Read the Docs.

The Pykickstart documentation now also contains a full, detailed kickstart command reference, both for Fedora and RHEL.

Progress reporting for all installation phases

Do you also hate it when Anaconda says “processing post installation setup tasks” for many minutes (or even tens of minutes!) without any indication what’s actually going on and how much longer it might take?

The cause of the previous lack of status reporting was simple: during the final parts of the RPM installation transaction, RPM post and posttrans scriptlets run, and that can take a significant amount of time. Until recently there was no support from RPM and DNF for progress reporting from this installation phase.

This has now been rectified: RPM and DNF provide the necessary progress reporting, so Anaconda can finally report what’s actually happening during the full installation run. 🙂

Run Initial Setup TUI on all usable consoles

Initial Setup is a utility to configure a freshly installed system on the first start. Initial Setup provides both graphical and text-mode interfaces and is basically just a launcher for the configuration screens normally provided by Anaconda.

During a “normal” installation everything is configured in Anaconda and Initial Setup does not run. However, the situation is different for the various ARM boards supported by Fedora. Here the installation step is generally skipped and users boot from a Fedora image on an SD card. In this scenario Initial Setup is a critical component, enabling users to customize the pre-made system image as needed.

The Initial Setup text interface (TUI) is generally used on ARM systems. During the Fedora 25 time frame two nasty issues showed up:

  • some ARM boards have both serial and graphical consoles, with no easy way to detect which one the user is using
  • some ARM board consoles appear functional, but throw errors when Initial Setup tries to run the TUI on them

To solve these issues, the Initial Setup TUI is run on all consoles that appear to be usable. This solves the first issue – the TUI will run on both the serial and graphical consoles. It also solves the second issue, as consoles that fail to work as expected are simply skipped.

Built-in help is now also available in the TUI

Previously, only the graphical installation mode featured help. Now help is also accessible in the TUI, from every screen that offers the ‘h to help’ option.

Help displayed in the TUI

New log-capture script

The new log-capture script is an addition from community contributor Pat Riehecky. This new script makes it easy to gather many installation relevant log files into a tarball, which is easily transferred outside of the installation environment for detailed analysis.

The envisioned use case is running the log-capture script in kickstart %onerror scriptlets.

 

Structured installation tasks

Anaconda does a lot of things during the installation phase (configures storage, installs packages, creates users and groups, etc.). To make the installation phase easier to monitor and debug, the individual installation tasks are now distinct units (e.g. user creation, user group creation, root user configuration) that can be part of task groups (e.g. user and group configuration).

The end result: it is now easy to see in the logs how long each task took to execute, which task is currently running, and how many tasks still need to run before the installation is done.

User interaction config file

Anaconda supports a new user interaction config file: a special configuration file that records the screens and (optionally) the settings manipulated by the user.

The main idea behind the user interaction config file is that a user generally comes into contact with multiple separate applications (Anaconda, GNOME Initial Setup, Initial Setup, a hypothetical language selector on a live CD, etc.) during an installation run, so it makes sense to present each configuration option (say, language or timezone selection) only once, not multiple times. This should reduce the number of screens a user needs to click through, making the installation faster.

Anaconda records visited screens and hides screens marked as visited in an existing user interaction config file. Once other pre- and post-installation tools (such as GNOME Initial Setup) start picking up support, the effect should be easy to spot: users will no longer be asked to configure the same setting twice. We might not have to wait long, as a Fedora 27 change proposal for adding GNOME Initial Setup support already exists.

by Martin Kolman at July 14, 2017 08:00 AM


July 12, 2017

Fedora Magazine

Introducing the Python Classroom Lab

The Fedora Labs are a selection of curated bundles of purpose-driven software and content, maintained by members of the Fedora community. Some of the current Labs are the Design Suite, the Security Lab, and the Robotics Suite. The recent release of Fedora 26 includes the brand new Python Classroom Lab.

The new Python Classroom Lab is a ready-to-use operating system for teachers who want to use Fedora in their classrooms. The Lab comes pre-installed with a bunch of useful tools for teaching Python. The Python Classroom has three variants: a live image based on the GNOME desktop, a Vagrant image, and a Docker container.

Multiple Python interpreters

The Lab includes several Python interpreters by default, including CPython 3.6, CPython 2.7 and PyPy 3.3. Additionally, tox is available by default to help run Python code on the different Python implementations.

Tools, Libraries, and Applications

The Python Classroom Lab also includes a range of tools, libraries, and applications that are useful when learning Python. The Scientific Python stack provides Python libraries for scientific computation and visualization. IPython — an enhanced Python shell — is also installed by default. Additionally, Jupyter Notebook is included, providing a web-based environment for interactive computing and visualizations. Other tools and applications include virtualenv, the Ninja IDE, and the Python Integrated Development and Learning Environment (IDLE).

by Ryan Lerch at July 12, 2017 09:12 AM


July 11, 2017

Fedora Magazine

Fedora 26 is here!

[This message comes from the desk of the Fedora Project Leader directly. Happy release day! — Ed.] 

Hi everyone! I’m incredibly proud to announce the immediate availability of Fedora 26. Read more below, or jump straight to the downloads.

If you’re already using Fedora, you can upgrade from the command line or using GNOME Software — upgrade instructions here. We’ve put a lot of work into making upgrades easy and fast. In most cases, this will take half an hour or so, bringing you right back to a working system with no hassle.

What’s new in Fedora 26?

First, of course, we have thousands of improvements from the various upstream software we integrate, including new development tools like GCC 7, Golang 1.8, and Python 3.6. We’ve added a new partitioning tool to Anaconda (the Fedora installer) — the existing workflow is great for non-experts, but this option will be appreciated by enthusiasts and sysadmins who like to build up their storage scheme from basic building blocks. F26 also has many under-the-hood improvements, like better caching of user and group info and better handling of debug information. And the DNF package manager is at a new major version (2.5), bringing many new features. Really, there’s new stuff everywhere — read more in the release notes.

So many Fedora options…

Fedora Workstation is built on GNOME (now version 3.24). If you’re interested in other popular desktop environments like KDE, Xfce, Cinnamon, and more, check out Fedora Spins. Or, for versions of Fedora tailored to special use cases like Astronomy, Design, Security, or Robotics, see Fedora Labs. STEM teachers, take advantage of the new Python Classroom, which makes it a breeze to set up an instructional environment with Vagrant, Docker containers, a Live USB image, or traditional installation.

If you want a Fedora environment to build on in EC2, OpenStack, and other cloud environments, there’s the Fedora Cloud Base. Plus, we’ve got network installers, other architectures (like Power and aarch64), BitTorrent links, and more at Fedora Alternative Downloads. And, not to be forgotten: if you’re looking to put Fedora on a Raspberry Pi or other ARM device, get images from the Fedora ARM page.

Whew! Fedora makes a lot of stuff! I hope there’s something for everyone in all of that, but if you don’t find what you want, you can Join the Fedora Project and work with us to create it. Our mission is to build a platform which enables contributors and other developers to solve all kinds of user problems, on our foundations of Freedom, Friendship, Features, and First. If the problem you want to solve isn’t addressed, Fedora can help you fix that.

Coming soon

Meanwhile, we have many interesting things going on in Fedora behind the scenes. Stay tuned later this week for Fedora Boltron, a preview of a new way to put together Fedora Server from building blocks which move at different speeds. (What if my dev stack was a rolling release on a stable base? Or, could I get the benefits from base platform updates while keeping my web server and database at known versions?) We’re also working on a big continuous integration project focused on Fedora Atomic, automating testing so developers can work rapidly without breaking things for others.

Thanks to the whole Fedora community!

Altogether, I’m confident that this is the best Fedora release ever — yet again. That’s because of the dedication, hard work, and love from thousands of Fedora contributors every year. This is truly an amazing community project from an amazing group of people. This time around, thanks are particularly due to everyone from quality assurance and release engineering who worked over the weekend and holidays to get Fedora 26 to you today.

Oh, and one more thing… in the human world, even the best release ever can’t be perfect. There are always corner cases and late-breaking issues. Check out Common F26 Bugs if you run into something strange. If you find a problem, help us make things better. But mostly, enjoy this awesome new release.

 

— Matthew Miller, Fedora Project Leader

by Matthew Miller at July 11, 2017 02:00 PM

Upgrading Fedora 25 to Fedora 26

Fedora 26 was just officially released. You’ll likely want to upgrade your system to the latest version of Fedora. Fedora offers a command-line method for upgrading Fedora 25 to Fedora 26. The Fedora 25 Workstation also has a graphical upgrade method.

Upgrading Fedora 25 Workstation to Fedora 26

Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the GNOME Software app. Or you can choose Software from GNOME Shell.

Choose the Updates tab in GNOME Software and you should see a window like this:

If you don’t see anything on this screen, try using the reload tool at the top left. It may take some time after release for all systems to be able to see an upgrade available.

Choose Download to fetch the upgrade packages. You can continue working until you reach a stopping point, and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later.

Using the command line

If you’ve upgraded from past Fedora releases, you are likely familiar with the dnf upgrade plugin. This method is the recommended and supported way to upgrade from Fedora 25 to Fedora 26. Using this plugin will make your upgrade to Fedora 26 simple and easy.

1. Update software and back up your system

Before you do anything, you will want to make sure you have the latest software for Fedora 25 before beginning the upgrade process. To update your software, use GNOME Software or enter the following command in a terminal.

sudo dnf upgrade --refresh

Additionally, make sure you back up your system before proceeding. For help with taking a backup, see the backup series on the Fedora Magazine.

2. Install the DNF plugin

Next, open a terminal and type the following command to install the plugin:

sudo dnf install dnf-plugin-system-upgrade

3. Start the update with DNF

Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal:

sudo dnf system-upgrade download --releasever=26

This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the --allowerasing flag when typing the above command. This will allow DNF to remove packages that may be blocking your system upgrade.

Upgrading to Fedora 26: Starting upgrade

 

4. Reboot and upgrade

Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal:

sudo dnf system-upgrade reboot

Your system will restart after this. Many releases ago, the fedup tool would create a new option on the kernel selection / boot screen. With the dnf-plugin-system-upgrade package, your system reboots into the current kernel installed for Fedora 25; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process.

Now might be a good time for a coffee break! Once it finishes, your system will restart and you’ll be able to log in to your newly upgraded Fedora 26 system.

Upgrading Fedora: Upgrade in progress

Upgrading Fedora: Upgrade complete!

Resolving upgrade problems

On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the DNF system upgrade wiki page for more information on troubleshooting in the event of a problem.

If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories.

Using Fedora Atomic Host?

Please see the Atomic host upgrade instructions.

Further information

For more detailed instructions on using dnf for upgrading, including a breakdown of other flags, check out the DNF system upgrade wiki article. This page also has frequently asked questions you may have during an upgrade.

Happy upgrades!

by Justin W. Flory at July 11, 2017 01:59 PM

What’s New in Fedora 26 Workstation

Fedora 26 Workstation is the latest release of our free, leading-edge operating system. You can download it from the official website right now. There are several new and noteworthy changes in Fedora Workstation.

GNOME 3.24

Fedora Workstation features the newest version of the GNOME desktop. GNOME now has a Night Light feature that changes your display’s color temperature. It works based on the time of day and helps prevent sleeplessness and eye strain.

There are updates to the Settings panel for online accounts, printers, and users. The notifications area sports a cleaner, simpler layout, with integrated weather information.

For developers, Builder now features improved support for systems like Flatpak, CMake, Meson, and Rust. It also integrates Valgrind to help profile your project. There are numerous other improvements, which you can find in the GNOME 3.24 release notes.

Improved Qt app compatibility

The Adwaita theme contains many improvements and looks closer to its GTK counterpart than ever. There are also two variants ported to Qt, dark and high contrast. If you switch to dark or high contrast Adwaita, your Qt apps will switch as well.

LibreOffice 5.3

The latest version of the popular office suite features many changes. It includes a preview of the experimental new NotebookBar UI. There’s also a new internal text layout engine to ensure consistent text layout on all platforms.

Fedora Media Writer

The new version of the Fedora Media Writer can create bootable SD cards with Fedora for ARM devices such as Raspberry Pi. It also features better support for Windows 7 and screenshot handling. The utility also notifies you when a new release of Fedora is available.

Other notes

These are only some of the improvements in Fedora 26. Fedora also gives you access to thousands of software apps our community provides. Many have been updated since the previous release as well.

Fedora 26 is available now for download.

by Paul W. Frields at July 11, 2017 01:50 PM

July 10, 2017

Fedora Magazine

Taking smart backups with Duplicity

Backing up data is one of the most important tasks everyone should be doing regularly. This series will demonstrate using three software tools to back up your important data.

When planning a backup strategy, consider the “Three Rs of backup”:

  • Redundant: Backups must be redundant. Backup media can fail. The backup storage site can be compromised (fire, theft, flood, etc.). It’s always a good idea to have more than one destination for backup data.
  • Regular: Backups only help if you run them often. Schedule and run them regularly to keep adding new data and prune off old data.
  • Remote: At least one backup should be kept off-site. In the case of one site being physically compromised (fire, theft, flood, etc.), the remote backup becomes a fail-safe.

duplicity is an advanced command-line backup utility built on top of librsync and GnuPG. By producing GPG-encrypted backup volumes in tar format, it offers secure incremental archives (a huge space saver, especially when backing up to remote services like S3 or an FTP server).

To get started, install duplicity:

dnf install duplicity

Choose a backend

duplicity supports a lot of backend services categorized into two groups: hosted storage providers and local media. Selecting a backend is mostly a personal preference, but select at least two (Redundant). This article uses an Amazon S3 bucket as an example backend service.

Set up GnuPG

duplicity encrypts volumes before uploading them to the specified backend using a GnuPG key. If you haven’t already created a GPG key, follow GPG key management, part 1 to create one. Look up the long key ID and keep it nearby:

gpg2 --list-keys --keyid-format long your_email@example.com

Set up Amazon AWS

AWS recommends using individual accounts to isolate programmatic access to your account. Log into the AWS IAM Console. If you don’t have an AWS account, you’ll be prompted to create one.

Click on Users in the list of sections on the left. Click the blue Add user button. Choose a descriptive user name, and set the Access type to Programmatic access only. There is no need for a backup account to have console access.

Next, attach the AmazonS3FullAccess policy directly to the account. duplicity needs this policy to create the bucket automatically the first time it runs.


After the user is created, save the access key ID and secret access key. They are required by duplicity when connecting to S3.

Choose backup data

When choosing data to back up, a good rule of thumb is to back up data you’ve created that can’t be re-downloaded from the Internet. Good candidates that meet this criterion are ~/Documents and ~/Pictures. Source code and “dot files” are also excellent candidates if they aren’t under version control.

Create a full backup

The general form for running duplicity is:

duplicity [OPTIONS] SRC DEST

In order to back up ~/Documents, but preserve the Documents folder within the backup volume, run duplicity with $HOME as the source, specify the --include option to include only ~/Documents, and exclude everything else with --exclude '**'. The --include and --exclude options can be combined in various ways to create specific file matching patterns. Experiment with these options before creating the initial backup. The --dry-run option simulates running duplicity. This is a great way to preview what a particular duplicity invocation will do.

duplicity will automatically determine whether a full or incremental backup is needed. The first time you run a source/destination, duplicity creates a full backup. Be sure to first export the access key ID and secret access key as environment variables. The --name option enables forward compatibility with duply (coming in part 2). Specify the long form GPG key ID that should be used to sign and encrypt the backup volumes with --encrypt-sign-key.

$ export AWS_ACCESS_KEY_ID=********************
$ export AWS_SECRET_ACCESS_KEY=****************************************
$ duplicity --dry-run --name duply_documents --encrypt-sign-key **************** --include $HOME/Documents --exclude '**' $HOME s3+http://**********-backup-docs
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
GnuPG passphrase: 
GnuPG passphrase for signing key: 
No signatures found, switching to full backup.
--------------[ Backup Statistics ]--------------
StartTime 1499399355.05 (Thu Jul 6 20:49:15 2017)
EndTime 1499399355.09 (Thu Jul 6 20:49:15 2017)
ElapsedTime 0.05 (0.05 seconds)
SourceFiles 102
SourceFileSize 40845801 (39.0 MB)
NewFiles 59
NewFileSize 40845801 (39.0 MB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 59
RawDeltaSize 0 (0 bytes)
TotalDestinationSizeChange 0 (0 bytes)
Errors 0
-------------------------------------------------

When you’re ready, remove the --dry-run option and start the backup. Plan ahead for the initial backup. It can often be a large amount of data and can take hours to upload, depending on your Internet connection.

After the backup is complete, the AWS S3 Console lists the new full backup volume.

 

Create an incremental backup

Run the same command again to create an incremental backup.

$ export AWS_ACCESS_KEY_ID=********************
$ export AWS_SECRET_ACCESS_KEY=****************************************
$ duplicity --dry-run --name duply_documents --encrypt-sign-key **************** --include $HOME/Documents --exclude '**' $HOME s3+http://**********-backup-docs
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 6 20:50:20 2017
GnuPG passphrase: 
GnuPG passphrase for signing key: 
--------------[ Backup Statistics ]--------------
StartTime 1499399964.77 (Thu Jul 6 20:59:24 2017)
EndTime 1499399964.79 (Thu Jul 6 20:59:24 2017)
ElapsedTime 0.02 (0.02 seconds)
SourceFiles 60
SourceFileSize 40845801 (39.0 MB)
NewFiles 3
NewFileSize 8192 (8.00 KB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 3
RawDeltaSize 0 (0 bytes)
TotalDestinationSizeChange 845 (845 bytes)
Errors 0
-------------------------------------------------

Again, the AWS S3 Console lists the new incremental backup volumes.

Restore a file

Backups aren’t useful without the ability to restore from them. duplicity makes restoration straightforward by simply reversing the SRC and DEST in the general form: duplicity [OPTIONS] DEST SRC.

$ export AWS_ACCESS_KEY_ID=********************
$ export AWS_SECRET_ACCESS_KEY=****************************************
$ duplicity --name duply_documents s3+http://**********-backup-docs $HOME/Restore
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 6 21:46:01 2017
GnuPG passphrase:
$ du -sh Restore/
783M Restore/

This restores the entire backup volume. Specific files or directories are restored using the --file-to-restore option, specifying a path relative to the backup root. For example:

$ export AWS_ACCESS_KEY_ID=********************
$ export AWS_SECRET_ACCESS_KEY=****************************************
$ duplicity --name duply_documents --file-to-restore Documents/post_install s3+http://**********-backup-docs $HOME/Restore
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Tue Jul 4 14:16:00 2017
GnuPG passphrase: 
$ tree Restore/
Restore/
├── files
│ ├── 10-doxie-scanner.rules
│ ├── 99-superdrive.rules
│ └── simple-scan.dconf
└── post_install.sh

1 directory, 4 files

Automate with a timer

The example above is clearly a manual process. “Regular” from the Three Rs philosophy requires that this duplicity command run repeatedly. Create a simple shell script that wraps these environment variables and the command invocation.

#!/bin/bash

export AWS_ACCESS_KEY_ID=********************
export AWS_SECRET_ACCESS_KEY=****************************************
export PASSPHRASE=************

duplicity --name duply_documents --encrypt-sign-key **************** --include $HOME/Documents --exclude '**' $HOME s3+http://**********-backup-docs

Notice the addition of the PASSPHRASE variable. This allows duplicity to run without prompting for your GPG passphrase. Save this file somewhere in your home directory. It doesn’t have to be in your $PATH. Make sure the permissions are set to user read/write/execute only to protect the plain text GPG passphrase.
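Assuming you saved the script as ~/backup.sh, that means:

$ chmod 700 $HOME/backup.sh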

Now create a timer and service unit to run it daily.

$ cat $HOME/.config/systemd/user/backup.timer
[Unit]
Description=Run duplicity backup timer

[Timer]
OnCalendar=daily
Unit=backup.service

[Install]
WantedBy=default.target
$ cat $HOME/.config/systemd/user/backup.service
[Service]
Type=oneshot
ExecStart=/home/link/backup.sh

[Unit]
Description=Run duplicity backup
$ systemctl --user enable --now backup.timer
Created symlink /home/link/.config/systemd/user/default.target.wants/backup.timer → /home/link/.config/systemd/user/backup.timer.

Conclusion

This article has described a manual process. But the flexibility in creating specific, customized backup targets is one of duplicity’s most powerful features. The duplicity man page has a lot more detail about various options. The next article will build on this one by creating backup profiles with duply, a wrapper program that makes the raw duplicity invocations easier.

by Link Dupont at July 10, 2017 02:36 PM


July 07, 2017

Fedora Magazine

Clustered computing on Fedora with Minikube

This is a short, beginner-oriented series that introduces Kubernetes, what it does, and how to experiment with it on Fedora, covering some higher-level concepts along with practical examples. In the first post, we covered key concepts in Kubernetes. This second post shows you how to build a single-node Kubernetes deployment on your own computer.


Once you have a better understanding of what the key concepts and terminology in Kubernetes are, getting started is easier. Like many programming tutorials, this tutorial shows you how to build a “Hello World” application and deploy it locally on your computer using Kubernetes. This is a simple tutorial because there aren’t multiple nodes to work with. Instead, the only device we’re using is a single node (a.k.a. your computer). By the end, you’ll see how to deploy a Node.js application into a Kubernetes pod and manage it with a deployment on Fedora.

This tutorial isn’t made from scratch. You can find the original tutorial in the official Kubernetes documentation. This article adds some changes that will let you do the same thing on your own Fedora computer.

Introducing Minikube

Minikube is an official tool developed by the Kubernetes team to help make testing it out easier. It lets you run a single-node Kubernetes cluster through a virtual machine on your own hardware. Beyond using it to play around with or experiment for the first time, it’s also useful as a testing tool if you’re working with Kubernetes daily. It does support many of the features you’d want in a production Kubernetes environment, like DNS, NodePorts, and container run-times.

Installation

This tutorial requires virtual machine and container software. There are many options you can use. Minikube supports virtualbox, vmwarefusion, kvm, and xhyve drivers for virtualization. However, this guide will use KVM since it’s already packaged and available in Fedora. We’ll also use Node.js for building the application and Docker for putting it in a container.

Pre-requirements

You can install the prerequisites with this command.

$ sudo dnf install kubernetes libvirt-daemon-kvm kvm nodejs docker

After installing these packages, you’ll need to add your user to the right group to let you use KVM. The following commands will add your user to the group and then update your current session for the group change to take effect.

$ sudo usermod -a -G libvirt $(whoami)
$ newgrp libvirt

Docker KVM drivers

If using KVM, you will also need to install the KVM drivers to work with Docker. You need to add Docker Machine and the Docker Machine KVM Driver to your local path. You can check their pages on GitHub for the latest versions, or you can run the following commands for specific versions. These were tested on a Fedora 25 installation.

Docker Machine
$ curl -L https://github.com/docker/machine/releases/download/v0.12.0/docker-machine-`uname -s`-`uname -m` >/tmp/docker-machine
$ chmod +x /tmp/docker-machine
$ sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
Docker Machine KVM Driver

This installs the CentOS 7 driver, but it also works with Fedora.

$ curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-centos7 >/tmp/docker-machine-driver-kvm
$ chmod +x /tmp/docker-machine-driver-kvm
$ sudo cp /tmp/docker-machine-driver-kvm /usr/local/bin/docker-machine-driver-kvm

Installing Minikube

The final step for installation is getting Minikube itself. Currently, there is no package available in Fedora, and the official documentation recommends grabbing the binary and moving it to your local path. To download the binary, make it executable, and move it to your path, run the following.

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ chmod +x minikube
$ sudo mv minikube /usr/local/bin/

Now you’re ready to build your cluster.

Create the Minikube cluster

Now that you have everything installed and in the right place, you can create your Minikube cluster and get started. To start Minikube, run this command.

$ minikube start --vm-driver=kvm

Next, you’ll need to set the context. Context is how kubectl (the command-line interface for Kubernetes) knows what it’s dealing with. To set the context for Minikube, run this command.

$ kubectl config use-context minikube

As a check, make sure that kubectl can communicate with your cluster by running this command.

$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Build your application

Now that Kubernetes is ready, we need to have an application to deploy in it. This article uses the same Node.js application as the official tutorial in the Kubernetes documentation. Create a folder called hellonode and create a new file called server.js with your favorite text editor.

var http = require('http');

var handleRequest = function(request, response) {
 console.log('Received request for URL: ' + request.url);
 response.writeHead(200);
 response.end('Hello world!');
};
var www = http.createServer(handleRequest);
www.listen(8080);

Now try running your application.

$ node server.js

While it’s running, you should be able to access it on localhost:8080. Once you verify it’s working, hit Ctrl+C to kill the process.
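For example, from a second terminal you can check the response with curl (using the port hard-coded in server.js):

$ curl http://localhost:8080
Hello world!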

Create Docker container

Now you have an application to deploy! The next step is to get it packaged into a Docker container (that you’ll pass to Kubernetes later). You’ll need to create a Dockerfile in the same folder as your server.js file. This guide uses an existing Node.js Docker image. It exposes your application on port 8080, copies server.js to the image, and runs it as a server. Your Dockerfile should look like this.

FROM node:6.9.2
EXPOSE 8080
COPY server.js .
CMD node server.js

If you’re familiar with Docker, you’re likely used to pushing your image to a registry. In this case, since we’re deploying it to Minikube, you can build it using the same Docker host as the Minikube virtual machine. For this to happen, you’ll need to use the Minikube Docker daemon.

$ eval $(minikube docker-env)

Now you can build your Docker image with the Minikube Docker daemon.

$ docker build -t hello-node:v1 .

Huzzah! Now you have an image Minikube can run.

Create Minikube deployment

If you remember from the first part of this series, deployments watch your application’s health and reschedule it if it dies. Deployments are the supported way of creating and scaling pods. kubectl run creates a deployment to manage a pod. We’ll create one that uses the hello-node Docker image we just built.

$ kubectl run hello-node --image=hello-node:v1 --port=8080

Next, check that the deployment was created successfully.

$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           30s

Creating the deployment also creates the pod where the application is running. You can view the pod with this command.

$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-node-1644695913-k2314   1/1       Running   0          3

Finally, let’s see what the configuration looks like. If you’re familiar with Ansible, the configuration files for Kubernetes also use easy-to-read YAML. You can see the full configuration with this command.

$ kubectl config view

kubectl does many things. To read more about what you can do with it, you can read the documentation.
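For example, one command worth knowing is kubectl describe, which prints the detailed state and recent events for an object, such as the deployment created above:

$ kubectl describe deployment hello-node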

Create service

Right now, the pod is only accessible inside the Kubernetes cluster via its internal IP address. To see it in a web browser, you’ll need to expose it as a service. To do that, run this command.

$ kubectl expose deployment hello-node --type=LoadBalancer

The type is specified as LoadBalancer because Kubernetes exposes the IP outside of the cluster. If you were running in a cloud environment, this is how you’d provision an external IP address through a load balancer. In this case, it exposes your application as a service in Minikube. And now, finally, you get to see your application. Running this command opens a new browser window with your application.

$ minikube service hello-node

Minikube: Exposing Hello Minikube application in browser

Congratulations, you deployed your first containerized application via Kubernetes! But what if you need to update your small Hello World application?

How do we push changes?

The time has come when you’re ready to make an update and push it. Edit your server.js file and change “Hello world!” to “Hello again, world!”

response.end('Hello again, world!');

And we’ll build another Docker image. Note the version bump.

$ docker build -t hello-node:v2 .

Next, you need to give Kubernetes the new image to deploy.

$ kubectl set image deployment/hello-node hello-node=hello-node:v2

And now, your update is pushed! Like before, run this command to have it open in a new browser window.

$ minikube service hello-node

If your application doesn’t look any different, double-check that you updated the right image. You can troubleshoot by getting a shell into your pod with the following command. You can get the pod name from the command run earlier (kubectl get pods). Once you’re in the shell, check whether the server.js file shows your changes.

$ kubectl exec -it <pod-name> bash

Cleaning up

Now that we’re done, we can clean up the environment. To clean up the resources in your cluster, run these two commands.

$ kubectl delete service hello-node
$ kubectl delete deployment hello-node

If you’re done playing with Minikube, you can also stop it.

$ minikube stop

If you’re done using Minikube for a while, you can also unset the Minikube Docker daemon that was set earlier in this guide.

$ eval $(minikube docker-env -u)

Learn more about Kubernetes

You can find the original tutorial in the Kubernetes documentation. If you want to read more, there’s plenty of great information online. The documentation provided by Kubernetes is thorough and comprehensive.

Questions, Minikube stories, or tips for beginners? Add your comments below.

by Justin W. Flory at July 07, 2017 02:45 PM

July 05, 2017

Fedora Magazine

Fedora 26 Workstation Wallpapers

The release of Fedora 26 is just around the corner, and the choices for wallpapers in Fedora Workstation are pretty amazing. In addition to the Fedora 26 Default Wallpaper and the GNOME 3.24 Adwaita background, Fedora Workstation now includes a new set of standard backgrounds.

Default Fedora 26 Wallpaper

Every release, the Fedora Design Team creates a default desktop background for Fedora. The Fedora 26 default is:

In addition to the static wallpaper, an ‘animated’ version is also available. This wallpaper transitions slowly throughout the day:

 

The animated variant of the default wallpaper is not installed by default on Fedora Workstation. To get it, use the command:

sudo dnf install f26-backgrounds-animated

New standard wallpapers

In the past, Fedora included the GNOME set of backgrounds by default in the Fedora Workstation backgrounds chooser. In Fedora 26, the following backgrounds are available by default:

After a fresh install of Fedora Workstation, you will no longer get the GNOME backgrounds by default (other than the Adwaita background). These backgrounds are still available in the repositories in the gnome-backgrounds-extras package. When doing an upgrade to Fedora 26 Workstation, you will get both the new Fedora set, and the GNOME set.

If some of the new standard wallpapers look familiar, that is because they were all Supplemental Wallpapers in previous releases. Looking for even more wallpapers? Check out the Supplemental Wallpapers for Fedora 26, as well as those from previous releases.

Adwaita Backgrounds

Fedora Workstation also ships with the Adwaita backgrounds that GNOME changes for each release. The standard Adwaita backgrounds for Fedora 26 Workstation (GNOME 3.24) are:

by Ryan Lerch at July 05, 2017 10:35 AM

July 04, 2017

Fedora Magazine

Add power to your terminal with powerline

A while ago, Fedora Magazine posted this interview with Rackspace architect Major Hayden where he mentioned the powerline utility. If you often use a terminal, you too might find powerline useful. It gives you helpful status information, and helps you stay organized.

For the shell

By default, the shell plugin gives you plenty of helpful data:

  • Login name
  • Local time
  • Current working directory or path. The path is condensed automatically when it grows longer than the terminal width.
  • The number of active background jobs
  • The hostname, when you connect via SSH to a remote system where powerline is installed

This saves you a lot of twiddling with your shell environment and complex scripting! To install the utility, open a terminal and run this command:

sudo dnf install powerline powerline-fonts

The rest of these instructions assume you’re using Fedora’s standard bash shell. If you’re using a different shell, check out the documentation for tips.

Next, configure your bash shell to use powerline by default. Add the following snippet to your ~/.bashrc file:

if [ -f `which powerline-daemon` ]; then
  powerline-daemon -q
  POWERLINE_BASH_CONTINUATION=1
  POWERLINE_BASH_SELECT=1
  . /usr/share/powerline/bash/powerline.sh
fi

To activate the changes, open a new shell or terminal. You should have a terminal that looks like this:

Terminal with powerline running in the bash shell

Try changing directories. Watch how the “breadcrumb” prompt changes to show your current location. Very handy! You’ll also be able to see the number of pending background jobs. And if powerline is installed on a remote system, the prompt includes the hostname when you connect via SSH.

For tmux

If you’re a command line junkie, you probably also know tmux. It allows you to split your terminal into many windows and panes, each containing its own session. But the tmux standard status line is not quite as interesting as what powerline provides by default:

  • Window information
  • System load
  • Time and date
  • Hostname, if you’re connected to a remote system via SSH

Therefore, let’s install the plugin:

sudo dnf install tmux-powerline

Now add this line to your ~/.tmux.conf file:

source "/usr/share/tmux/powerline.conf"

Next, remove or comment out any lines in your tmux configuration for status bar length or content. Examples of these settings are status-left, status-right, status-left-length, and status-right-length.

Your user configuration is stored in ~/.tmux.conf. If you don’t have one, copy an example from the web or /usr/share/tmux to ~/.tmux.conf, and then edit.
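As a rough sketch, a minimal ~/.tmux.conf after these changes might look like the following. The commented-out lines are only placeholders for whatever status settings your old configuration contained:

# Load the powerline status line
source "/usr/share/tmux/powerline.conf"

# Previous status bar settings, now disabled:
# set -g status-left '#(hostname)'
# set -g status-right '%H:%M %d-%b-%y'
# set -g status-left-length 20
# set -g status-right-length 40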

When you next start tmux, you should see the powerline status bar:

A tmux session with powerline running the status bar

For vim

If you use the vim editor, you’re also in luck. There’s a powerful plugin for vim, too. By default, it shows:

  • Operating mode (normal, insert, replace)
  • Current path and file name
  • Text encodings
  • Document and line positions

To install it, use this command:

sudo dnf install vim-powerline

Now add the following lines to your ~/.vimrc file:

python3 from powerline.vim import setup as powerline_setup
python3 powerline_setup()
python3 del powerline_setup
set laststatus=2 " Always display the statusline in all windows
set showtabline=2 " Always display the tabline, even if there is only one tab
set noshowmode " Hide the default mode text (e.g. -- INSERT -- below the statusline)
set t_Co=256

Now you can start vim and see a spiffy new status line:

Vim running with powerline status

Configuring powerline

No command line utility is complete without configuration options. The configuration in this case isn’t exactly simple, though; it requires you to edit JSON formatted files. But there’s a complete configuration guide available in the official documentation. And since the utility is written in Python, it’s eminently hackable.

When you hack the configuration, it’s usually to add, change, or remove segments. There are plenty of segments available, such as:

  • Content of environment variables
  • Version control system data (such as git branch and status!)
  • Weather
  • …and many more.

To change the status layout in an environment, you create or edit configuration files in your ~/.config/powerline/ folder. These configurations are stored as themes for each plugin. You can use the powerline-lint utility to check your configuration for parsing errors after making changes.
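For example, to override the shell theme you could drop a file at ~/.config/powerline/themes/shell/default.json. The snippet below is only a hedged sketch of the structure; check the exact segment names and paths against the official documentation before using it:

{
    "segments": {
        "left": [
            { "function": "powerline.segments.common.net.hostname" },
            { "function": "powerline.segments.shell.cwd" }
        ]
    }
}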

Some changes may require you to reload your session or possibly restart the daemon:

powerline-daemon --replace

Now you can enjoy more sophisticated status data in your favorite tools!

by Paul W. Frields at July 04, 2017 08:00 AM

June 30, 2017

Fedora Magazine

Introduction to Kubernetes with Fedora

This article is part of a short series that introduces Kubernetes. This beginner-oriented series covers some higher level concepts and gives examples of using Kubernetes on Fedora.


The information technology world changes daily, and the demands of building scalable infrastructure become more important. Containers aren’t anything new these days, and have various uses and implementations. But what about building scalable, containerized applications? By themselves, Docker and other tools don’t quite cut it as far as building the infrastructure to support containers. How do you deploy, scale, and manage containerized applications in your infrastructure? This is where tools such as Kubernetes come in. Kubernetes is an open source system that automates deployment, scaling, and management of containerized applications. Kubernetes was originally developed by Google before being donated to the Cloud Native Computing Foundation, a project of the Linux Foundation. This article gives a quick primer on what Kubernetes is and what some of the buzzwords really mean.

What is Kubernetes?

Kubernetes simplifies and automates the process of deploying containerized applications at scale. Just like Ansible orchestrates software, Kubernetes orchestrates deploying infrastructure that supports the software. There are various “layers of the cake” that make Kubernetes a strong solution for building resilient infrastructure. It also assists with making systems that can grow at scale. If your application has increasing demands such as higher traffic, Kubernetes helps grow your environment to support increasing demands. This is one reason why Kubernetes is helpful for building long-term solutions for complex problems (even if it’s not complex… yet).

Kubernetes: The high level design

Kubernetes: The high level design. Daniel Smith, Robert Bailey, Kit Merker.

At a high level, imagine three different layers.

  • Users: People who deploy or create containerized applications to run in your infrastructure
  • Master(s): Manages and schedules your software across various other machines, for example in a clustered computing environment
  • Nodes: Various machines to support the application, called kubelets

These three layers are orchestrated and automated by Kubernetes. One of the key pieces of the master (not included in the visual) is etcd. etcd is a lightweight and distributed key/value store that holds configuration data. Each node, or kubelet, can access this data in etcd through an HTTP/JSON API interface. The components that communicate between master and node, such as etcd, are explained in the official documentation.

Another important detail not shown in the diagram is that you might have many masters. In a high-availability (HA) set-up, you can keep your infrastructure resilient by having multiple masters in case one happens to go down.

Terminology

It’s important to understand the concepts of Kubernetes before you start to play around with it. There are many core concepts in Kubernetes, such as services, volumes, secrets, daemon sets, and jobs. However, this article explains four that are helpful for the next exercise of building a mini Kubernetes cluster. The four concepts are pods, labels, replica sets, and deployments.

Pods

If you imagine Kubernetes as a Lego® castle, pods are the smallest block you can pick out. By themselves, they are the smallest unit you can deploy. The containers of an application fit into a pod. The pod can be one container, but it can also be as many as needed. Containers in a pod are unique since they share the Linux namespace and aren’t isolated from each other. In a world before containers, this would be similar to running an application on the same host machine.

When the pods share the same namespace, all the containers in a pod:

  • Share an IP address
  • Share port space
  • Find each other over localhost
  • Communicate over IPC namespace
  • Have access to shared volumes

But what’s the point of having pods? The main purpose of pods is to have groups of “helping” containers on the same namespace (co-located) and integrated together (co-managed) along with the main application container. Some examples might be logging or monitoring tools that check the health of your application, or backup tools that act when certain data changes.

In the big picture, containers in a single pod are always scheduled together too. However, Kubernetes doesn’t automatically reschedule them to a new node if the node dies (more on this later).
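To make the idea concrete, here is a hedged sketch of a pod definition with a main application container and a small “helper” container co-located in the same pod. The names and images are placeholders, not something from this series:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-helper
spec:
  containers:
  - name: web                # main application container
    image: nginx:1.13
    ports:
    - containerPort: 80
  - name: helper             # helper container sharing the pod's network and volumes
    image: busybox
    command: ["sh", "-c", "while true; do sleep 3600; done"]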

Labels

Labels are a simple but important concept in Kubernetes. Labels are key/value pairs attached to objects in Kubernetes, like pods. They let you specify unique attributes of objects that actually mean something to humans. You can attach them when you create an object, and modify or add them later. Labels help you organize and select different sets of objects to interact with when performing actions inside of Kubernetes. For example, you can identify:

  • Software releases: Alpha, beta, stable
  • Environments: Development, production
  • Tiers: Front-end, back-end

Labels are as flexible as you need them to be, and this list isn’t comprehensive. Be creative when thinking of how to apply them.
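For instance, labels appear as plain key/value pairs in an object’s metadata. The values below are only illustrative:

metadata:
  name: my-frontend
  labels:
    environment: production
    tier: front-end
    release: stable

You can also add or change labels on existing objects later, for example with kubectl label pod my-frontend release=beta --overwrite.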

Replica sets

Replica sets are where some of the magic begins to happen with automatic scheduling or rescheduling. Replica sets ensure that a number of pod instances (called replicas) are running at any moment. If your web application needs to constantly have four pods in the front-end and two in the back-end, the replica sets are your insurance that number is always maintained. This also makes Kubernetes great for scaling. If you need to scale up or down, change the number of replicas.

When reading about replica sets, you might also see replication controllers. They are somewhat interchangeable, but replication controllers are older, semi-deprecated, and less powerful than replica sets. The main difference is that sets work with more advanced set-based selectors — which goes back to labels. Ideally, you won’t have to worry about this much today.

Even though replica sets are where the scheduling magic happens to help make your infrastructure resilient, you won’t actually interact with them much. Replica sets are managed by deployments, so it’s unusual to directly create or manipulate replica sets. And guess what’s next?

Deployments

Deployments are another important concept inside of Kubernetes. Deployments are a declarative way to deploy and manage software. If you’re familiar with Ansible, you can compare deployments to the playbooks of Ansible. If you’re building your infrastructure out, you want to make sure it is easily reproducible without much manual work. Deployments are the way to do this.

Deployments offer functionality such as revision history, so it’s always easy to rollback changes if something doesn’t work out. They also manage any updates you push out to your application, and if something isn’t working, it will stop rolling out your update and revert back to the last working state. Deployments follow the mathematical property of idempotence, which means you define your specs once and use them many times to get the same result.

Deployments also get into imperative and declarative ways to build infrastructure, but this explanation is a quick, fly-by overview. You can read more detailed information in the official documentation.
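As a rough, hedged sketch of what that declarative spec looks like, a minimal deployment manifest might resemble the following. The names and image are placeholders, and the apiVersion shown is the one used around the Kubernetes 1.6/1.7 era:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                 # desired number of pod replicas
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:v1      # placeholder container image
        ports:
        - containerPort: 8080

You would apply a file like this with kubectl apply -f, and Kubernetes reconciles the cluster toward the declared state.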

Installing on Fedora

If you want to start playing with Kubernetes, install it and some useful tools from the Fedora repositories.

sudo dnf install kubernetes

This command provides the bare minimum needed to get started. You can also install other cool tools like cockpit-kubernetes (integration with Cockpit) and kubernetes-ansible (provisioning Kubernetes with Ansible playbooks and roles).

Learn more about Kubernetes

If you want to read more about Kubernetes or want to explore the concepts more, there’s plenty of great information online. The documentation provided by Kubernetes is fantastic, but there are also other helpful guides from DigitalOcean and Giant Swarm. The next article in the series will explore building a mini Kubernetes cluster on your own computer to see how it really works.

Questions, Kubernetes stories, or tips for beginners? Add your comments below.

by Justin W. Flory at June 30, 2017 09:42 PM

June 28, 2017

Fedora Magazine

Testing modules and containers with Modularity Testing Framework

Fedora Modularity is a project within Fedora with the goal of building a modular operating system with multiple versions of components on different lifecycles. Fedora 26 features the first look at the modularity vision: the Fedora 26 Boltron Server. However, if you are jumping into the modularity world and creating and deploying your own modules and containers, your next question may be how to test these artifacts. The Modularity Testing Framework (MTF) has been designed for testing artifacts such as modules, RPM base repos, containers, and other artifact types. It helps you write tests easily, and the tests are also independent of the type of the module.

MTF is a minimalistic library built on the existing avocado and behave testing frameworks, enabling developers to set up test automation for various module aspects and requirements quickly. MTF adds basic support and abstraction for testing various module artifact types: RPM based, docker images, ISOs, and more. For detailed information about the framework and how to use it, check out the MTF Documentation.

Installing MTF

The Modularity Testing Framework is available in the official Fedora repositories. Install MTF using the command:

dnf install -y modularity-testing-framework

A COPR is available if you want to use the untested, unstable version. Install via COPR with the commands:

dnf copr enable phracek/Modularity-testing-framework
dnf install -y modularity-testing-framework

Writing a simple test

Creating a testing directory structure

First, create the tests/ directory in the root directory of the module. In the tests/ directory, create a Makefile file:

MODULE_LINT=/usr/share/moduleframework/tools/modulelint/*.py
# Try not to use "*.py"; list the test files by name (such as sanity1.py sanity2.py), separated by spaces
TESTS=*.py

CMD=python -m avocado run $(MODULE_LINT) $(TESTS)

#
all:
    generator  # use this when tests are also defined in the config.yaml file (described below)
    $(CMD)

In the root directory of the module, create a Makefile file containing a section test. For example:

.PHONY: build run default

IMAGE_NAME = memcached

MODULEMDURL=file://memcached.yaml

default: run

build:
    docker build --tag=$(IMAGE_NAME) .

run: build
    docker run -d $(IMAGE_NAME)

test: build
    # Test the docker image available on Docker Hub
    cd tests; MODULE=docker MODULEMD=$(MODULEMDURL) URL="docker.io/modularitycontainers/memcached" make all
    # Test the docker image built locally. The Dockerfile and relevant files
    # have to be stored in the root directory of the module.
    cd tests; MODULE=docker MODULEMD=$(MODULEMDURL) URL="docker=$(IMAGE_NAME)" make all
    # Test the module as RPMs on the local system
    cd tests; MODULE=rpm MODULEMD=$(MODULEMDURL) URL="https://kojipkgs.fedoraproject.org/compose/latest-Fedora-Modular-26/compose/Server/x86_64/os/" make all

In the tests/ directory, place the config.yaml configuration file for module testing (see minimal-config.yaml). For example:

document: modularity-testing
version: 1
name: memcached
modulemd-url: http://raw.githubusercontent.com/container-images/memcached/master/memcached.yaml
service:
    port: 11211
packages:
    rpms:
        - memcached
        - perl-Carp
testdependecies:
    rpms:
        - nc
module:
    docker:
        start: "docker run -it -e CACHE_SIZE=128 -p 11211:11211"
        labels:
            description: "memcached is a high-performance, distributed memory"
            io.k8s.description: "memcached is a high-performance, distributed memory"
        source: https://github.com/container-images/memcached.git
        container: docker.io/modularitycontainers/memcached
    rpm:
        start: /usr/bin/memcached -p 11211 &
        repo:
           - https://kojipkgs.fedoraproject.org/compose/latest-Fedora-Modular-26/compose/Server/x86_64/os/

test:
    processrunning:
        - 'ls  /proc/*/exe -alh | grep memcached'
testhost:
    selfcheck:
        - 'echo errr | nc localhost 11211'
        - 'echo set AAA 0 4 2 | nc localhost 11211'
        - 'echo get AAA | nc localhost 11211'
    selcheckError:
        - 'echo errr | nc localhost 11211 |grep ERROR'

 

Add the simpleTest.py python file, which tests a service or an application, into the tests/ directory:

#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# This Modularity Testing Framework helps you to write tests for modules
# Copyright (C) 2017 Red Hat, Inc.

import socket
from avocado import main
from avocado.core import exceptions
from moduleframework import module_framework


class SanityCheck1(module_framework.AvocadoTest):
    """
    :avocado: enable
    """

    def testSettingTestVariable(self):
        self.start()
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(('localhost', self.getConfig()['service']['port']))
        s.sendall('set Test 0 100 4\r\n\n')
        #data = s.recv(1024)
        # print data

        s.sendall('get Test\r\n')
        #data = s.recv(1024)
        # print data
        s.close()

    def testBinExistsInRootDir(self):
        self.start()
        self.run("ls / | grep bin")

    def test3GccSkipped(self):
        module_framework.skipTestIf("gcc" not in self.getActualProfile())
        self.start()
        self.run("gcc -v")

if __name__ == '__main__':
    main()

Running tests

To execute tests from the root directory of the module, type:

# run tests from a module root directory
$ sudo make test

The result looks like:

docker build --tag=memcached .
Sending build context to Docker daemon 268.3 kB
Step 1 : FROM baseruntime/baseruntime:latest
---> 0cbcd55844e4
Step 2 : ENV NAME memcached ARCH x86_64
---> Using cache
---> 16edc6a5f7b6
Step 3 : LABEL MAINTAINER "Petr Hracek" <[email protected]>
---> Using cache
---> 693d322beab2
Step 4 : LABEL summary "High Performance, Distributed Memory Object Cache" name "$FGC/$NAME" version "0" release "1.$DISTTAG" architecture "$ARCH" com.redhat.component $NAME usage "docker run -p 11211:11211 f26/memcached" help "Runs memcached, which listens on port 11211. No dependencies. See Help File below for more details." description "memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load." io.k8s.description "memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load." io.k8s.diplay-name "Memcached 1.4 " io.openshift.expose-services "11211:memcached" io.openshift.tags "memcached"
---> Using cache
---> eea936c1ae23
Step 5 : COPY repos/* /etc/yum.repos.d/
---> Using cache
---> 920155da88d9
Step 6 : RUN microdnf --nodocs --enablerepo memcached install memcached &&     microdnf -y clean all
---> Using cache
---> c83e613f0806
Step 7 : ADD files /files
---> Using cache
---> 7ec5f42c0064
Step 8 : ADD help.md README.md /
---> Using cache
---> 34702988730f
Step 9 : EXPOSE 11211
---> Using cache
---> 577ef9f0d784
Step 10 : USER 1000
---> Using cache
---> 671ac91ec4e5
Step 11 : CMD /files/memcached.sh
---> Using cache
---> 9c933477acc1
Successfully built 9c933477acc1
cd tests; MODULE=docker MODULEMD=file://memcached.yaml URL="docker=memcached" make all
make[1]: Entering directory '/home/phracek/work/FedoraModules/memcached/tests'
Added test (runmethod: run): processrunning
Added test (runmethod: runHost): selfcheck
Added test (runmethod: runHost): selcheckError
python -m avocado run --filter-by-tags=-WIP /usr/share/moduleframework/tools/modulelint.py *.py
JOB ID     : 9ba3a3f9fd982ea087f4d4de6708b88cee15cbab
JOB LOG    : /root/avocado/job-results/job-2017-06-14T16.25-9ba3a3f/job.log
(01/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testDockerFromBaseruntime: PASS (1.52 s)
(02/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testDockerRunMicrodnf: PASS (1.53 s)
(03/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testArchitectureInEnvAndLabelExists: PASS (1.63 s)
(04/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testNameInEnvAndLabelExists: PASS (1.61 s)
(05/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testReleaseLabelExists: PASS (1.60 s)
(06/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testVersionLabelExists: PASS (1.45 s)
(07/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testComRedHatComponentLabelExists: PASS (1.64 s)
(08/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testIok8sDescriptionExists: PASS (1.51 s)
(09/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testIoOpenshiftExposeServicesExists: PASS (1.50 s)
(10/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testIoOpenShiftTagsExists: PASS (1.53 s)
(11/20) /usr/share/moduleframework/tools/modulelint.py:DockerLint.testBasic: PASS (13.75 s)
(12/20) /usr/share/moduleframework/tools/modulelint.py:DockerLint.testContainerIsRunning: PASS (14.19 s)
(13/20) /usr/share/moduleframework/tools/modulelint.py:DockerLint.testLabels: PASS (1.57 s)
(14/20) /usr/share/moduleframework/tools/modulelint.py:ModuleLintPackagesCheck.test: PASS (14.03 s)
(15/20) generated.py:GeneratedTestsConfig.test_processrunning: PASS (13.77 s)
(16/20) generated.py:GeneratedTestsConfig.test_selfcheck: PASS (13.85 s)
(17/20) generated.py:GeneratedTestsConfig.test_selcheckError: PASS (14.32 s)
(18/20) sanity1.py:SanityCheck1.testSettingTestVariable: PASS (13.86 s)
(19/20) sanity1.py:SanityCheck1.testBinExistsInRootDir: PASS (13.81 s)
(20/20) sanity1.py:SanityCheck1.test3GccSkipped: ERROR (13.84 s)
RESULTS    : PASS 19 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 144.85 s
JOB HTML   : /root/avocado/job-results/job-2017-06-14T16.25-9ba3a3f/html/results.html
Makefile:6: recipe for target 'all' failed
make[1]: *** [all] Error 1
Makefile:14: recipe for target 'test' failed
make: *** [test] Error 2
$

To execute tests from the tests/ directory, type:

# run Python tests from the tests/ directory
$ sudo MODULE=docker avocado run ./*.py

The result looks like:

$ sudo MODULE=docker avocado run ./*.py
[sudo] password for phracek:
JOB ID     : 2a171b762d8ab2c610a89862a88c015588823d29
JOB LOG    : /root/avocado/job-results/job-2017-06-14T16.43-2a171b7/job.log
(1/6) ./generated.py:GeneratedTestsConfig.test_processrunning: PASS (24.79 s)
(2/6) ./generated.py:GeneratedTestsConfig.test_selfcheck: PASS (18.18 s)
(3/6) ./generated.py:GeneratedTestsConfig.test_selcheckError: ERROR (24.16 s)
(4/6) ./sanity1.py:SanityCheck1.testSettingTestVariable: PASS (18.88 s)
(5/6) ./sanity1.py:SanityCheck1.testBinExistsInRootDir: PASS (17.87 s)
(6/6) ./sanity1.py:SanityCheck1.test3GccSkipped: ERROR (19.30 s)
RESULTS    : PASS 4 | ERROR 2 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 124.19 s
JOB HTML   : /root/avocado/job-results/job-2017-06-14T16.43-2a171b7/html/results.html

by Petr Hracek at June 28, 2017 10:50 AM

June 26, 2017

Fedora Magazine

Upcoming Fedora Atomic Host lifecycle changes

The Fedora Project ships new Fedora Server and Workstation releases at roughly six-month intervals. It then maintains each release for around thirteen months. So Fedora N is supported by the community until one month after the release of Fedora N+2. Since the first Fedora Atomic Host shipped, as part of Fedora 21, the project has maintained separate ostree repositories for both active Fedora releases. For instance, there are currently trees available for Fedora Atomic 25 and Fedora Atomic 24.

Fedora Atomic sets out to be a particularly fast-moving branch of Fedora. It provides releases every two weeks and updates to key Atomic Host components such as Docker and Kubernetes. The release moves more quickly than one might expect from the other releases of Fedora.

Due in part to this faster pace, the Fedora Atomic Working Group has always focused its testing and integration efforts most directly on the latest stable release. The group encourages users of the older release to rebase to the newer tree as soon as possible. Releases older than the current tree are supported only on a best effort basis. This means the ostree is updated, but there is no organized testing of older releases.

Upcoming changes

This will change with either the Fedora 26 to 27 or the 27 to 28 upgrade cycle (depending on readiness). The Fedora Atomic Working Group will then collapse Fedora Atomic into a single version. That release will track the latest stable Fedora branch. When a new stable version of Fedora is released, Fedora Atomic users will automatically shift to the new version when they install updates.

Traditional OS upgrades can be disruptive and error-prone. Due to the image-based technologies that Atomic Hosts use for system components (rpm-ostree) and for applications (Linux containers), upgrading an Atomic Host between major releases is like installing updates within a single release. In both scenarios, the system updates are applied by running an rpm-ostree command and rebooting. A rollback to the previous state is available in case something goes wrong. Applications running in containers are unaffected by the host upgrade or update.
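For reference, the commands involved look roughly like this on an Atomic Host (rpm-ostree ships with the host):

# apply the latest tree and reboot into it
sudo rpm-ostree upgrade
sudo systemctl reboot

# if something goes wrong, boot back into the previous deployment
sudo rpm-ostree rollback
sudo systemctl reboot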

If you’d like to get involved in the Fedora Atomic Working Group, come talk to us in IRC in #fedora-cloud or #atomic on Freenode, or join the Atomic WG on Pagure.

by jasonbrooks at June 26, 2017 08:00 AM

June 23, 2017

Fedora Magazine

gThumb: View and manage your photos in Fedora

Fedora uses Eye of GNOME to display images, but it’s a very basic program. Out of the box, Fedora doesn’t have a great tool for managing photos. If you’re familiar with the Fedora Workstation’s desktop environment, GNOME, then you may be familiar with GNOME Photos. This is a young app available in GNOME Software that seeks to make managing photos a painless task. You may not know that there’s a more robust tool out there that packs more features and feels just as at home on Fedora. It’s called gThumb.

What is gThumb?

gThumb is hardly a new piece of software. The program has been around since 2001, though it looks very different now than it did back then. As GNOME has changed, so has gThumb. Today it’s the most feature-rich way of managing images using a GNOME 3 style interface.

While gThumb is an image viewer that you can use to replace Eye of GNOME, that’s only the beginning of what it can do. Thanks to the inclusion of features you would normally find in photo managers like digiKam or the now discontinued Picasa, I use it to view the pictures I capture with my DSLR camera.

How gThumb handles photos

At its core, gThumb is an image viewer. While it can organize your collection, its primary function is to display pictures in the folders they’re already in. It doesn’t move them around. I consider this a plus.

I download images from my camera using Rapid Photo Downloader, which organizes and renames files precisely as I want them. All I want from a photo manager is the ability to easily view these images without much fuss.

That’s not to say that gThumb doesn’t offer any of the extra organizational tools you may expect from a photo manager. It comes with a few.

Labeling, grouping, and organizing

Determining your photo’s physical location on your hard drive is only one of many ways to keep up with your images. Once your collection grows, you may want to use tags. These are keywords that can help you mark and recall pictures of a certain type, such as birthdays, visits to the park, and sporting events. To remember details about a specific picture, you can leave a comment.

gThumb lets you save photos to one of three collections, indicated by three flags in the bottom right corner. These groups are color coordinated, with the options being green, red, and blue. It’s up to you to remember which collections correspond with what color.

Alternatively, you can let gThumb organize your images into catalogs. Catalogs can be based on the date images were taken, the date they were edited, or by tags.

It’s also an image editor

gThumb provides enough editing functions to meet most of my needs. It can crop photos, rotate them, and adjust aspects such as contrast, lightness, and saturation. It can also remove red-eye. I still fire up the GIMP whenever I need to do any serious editing, but gThumb is a much faster way of handling the basics.

gThumb is maintained by the GNOME Project, just like Eye of GNOME and GNOME Photos. Each offers a different degree of functionality. Before you walk away thinking that GNOME’s integrated photo viewers are all too basic, give gThumb a try. It has become my favorite photo manager for Linux.

by Bertel King at June 23, 2017 08:00 AM

June 21, 2017

Fedora Magazine

Controlling Windows via Ansible

For many Linux systems engineers, Ansible has become a way of life. They use Ansible to orchestrate complex deployment processes, to define multiple systems with a quick and simple configuration management tool, or somewhere in between.

However, Microsoft Windows users have generally required a different set of tools to manage systems. They also often needed a different mindset on how to handle them.

Recently Ansible has improved this situation quite a bit. The Ansible 2.3 release included a bunch of new modules for this purpose. Ansible 2.3.1 is already available for Fedora 26.

At AnsibleFest London 2017, Matt Davis, Senior Principal Software Engineer at Ansible, will lead a session covering this topic in some detail. In this article we look at how to prepare Windows systems to enable this functionality along with a few things we can do with it.

Preparing the target systems

There are a couple of prerequisites required to prepare a Windows system so Ansible can connect. The connection type used for this is “winrm”, the Windows Remote Management protocol.

When using this connection type, Ansible executes PowerShell on the target system. This requires at least PowerShell 3.0, although it’s recommended to install the most recent version of the Windows Management Framework, which at the time of writing is 5.1 and includes PowerShell 5.1.

With that in place, the WinRM service needs to be configured on the Windows system. The easiest way to do this is with the ConfigureRemotingForAnsible.ps1 PowerShell script provided by the Ansible project.

By default, the ExecutionPolicy allows commands, but not scripts, to be executed. However, this can be bypassed by running the script via the powershell executable.

Run powershell as an administrative user and then:

powershell.exe  -ExecutionPolicy Bypass -File ConfigureRemotingForAnsible.ps1 -CertValidityDays 3650 -Verbose

After this the WinRM service will be listening and any user with administrative privileges will be able to authenticate and connect.

Although it’s possible to use CredSSP or Kerberos for delegated (single sign-on) authentication, the simplest method just uses a username and password via NTLM authentication.

To configure the winrm connector itself, there are a few different variables, but the bare minimum to make this work for any Windows system is:

ansible_user: 'localAdminUser'
ansible_password: 'P455w0rd'
ansible_connection: 'winrm'
ansible_winrm_server_cert_validation: 'ignore'

The last line is important with the default self-signed certificates that Windows uses for WinRM, but can be removed if using verified certificates from a central CA for the systems.
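One common way to supply these variables is through an inventory file. Here’s a hedged sketch, with the hostname and credentials as placeholders:

[windows]
mywindowssystem

[windows:vars]
ansible_user=localAdminUser
ansible_password=P455w0rd
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore

With an inventory like this saved as, say, hosts.ini, the ad hoc commands later in this article could use -i hosts.ini rather than passing the host and connection options inline.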

So with that in place, how flexible is it? How much can really be remotely controlled and configured?

Well, step one on the controlling computer is to install Ansible and the winrm libraries:

dnf -y install ansible python2-winrm

With that ready, a fair number of the core modules are available, but the majority of tasks come from Windows-specific modules.

Remote Windows updates

Using Ansible to define your Windows systems’ updates allows them to be remotely checked and deployed, whether they come directly from Microsoft or from an internal Windows Server Update Services server:

ansible -i mywindowssystem, -c winrm -e ansible_winrm_server_cert_validation=ignore -u administrator -k -m win_updates

mywindowssystem | SUCCESS => {
    "changed": true, 
    "failed_update_count": 0, 
    "found_update_count": 3, 
    "installed_update_count": 3, 
    "reboot_required": true, 
    "updates": {
        "488ad51b-afca-46b9-b0de-bdbb4f56672f": {
            "id": "488ad51b-afca-46b9-b0de-bdbb4f56672f", 
            "installed": true, 
            "kb": [
                "4022726"
            ], 
            "title": "2017-06 Security Monthly Quality Rollup for Windows 8.1 for x64-based Systems (KB4022726)"
        }, 
        "94e2e9ab-e2f7-4f8c-9ade-602a0511cc08": {
            "id": "94e2e9ab-e2f7-4f8c-9ade-602a0511cc08", 
            "installed": true, 
            "kb": [
                "4022730"
            ], 
            "title": "2017-06 Security Update for Adobe Flash Player for Windows 8.1 for x64-based Systems (KB4022730)"
        }, 
        "ade56166-6d55-45a5-9e31-0fac924e4bbe": {
            "id": "ade56166-6d55-45a5-9e31-0fac924e4bbe", 
            "installed": true, 
            "kb": [
                "890830"
            ], 
            "title": "Windows Malicious Software Removal Tool for Windows 8, 8.1, 10 and Windows Server 2012, 2012 R2, 2016 x64 Edition - June 2017 (KB890830)"
        }
    }
}

Rebooting automatically is also possible with a small playbook:

- hosts: windows
  tasks:
    - name: apply critical and security windows updates 
      win_updates:
        category_names: 
          - SecurityUpdates
          - CriticalUpdates
      register: wuout
    - name: reboot if required
      win_reboot:
      when: wuout.reboot_required

Package management

There are two ways to handle package installs on Windows using Ansible.

The first is to use win_package, which can install any MSI or run an executable installer from a network share or URI. This is useful for locked-down internal networks with no internet connectivity, or for applications not available on Chocolatey. To avoid re-running an installer and to keep any plays safe to re-run, it’s important to look up the product ID from the registry so that win_package can detect whether the package is already installed.
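A hedged sketch of such a win_package task follows; the share path and product_id are placeholders that you would replace after looking up the GUID your installer registers in the registry:

- name: install 7-Zip from an internal share
  win_package:
    path: '\\fileserver\installers\7z1604-x64.msi'         # placeholder UNC path to the installer
    product_id: '{23170F69-40C1-2702-1604-000001000000}'    # placeholder product GUID from the registry
    state: present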

The second is to use the briefly referenced Chocolatey. There is no setup required for this on the target system as the win_chocolatey module will automatically install the Chocolatey package manager if it’s not already present. To install the Java 8 Runtime Environment via Chocolatey it’s as simple as:

ansible -i mywindowssystem, -c winrm -e ansible_winrm_server_cert_validation=ignore -u administrator -k -m win_chocolatey -a "name=jre8"
mywindowssystem | SUCCESS => {
    "changed": true, 
    "rc": 0
}

And the rest…

The list is growing as Ansible development continues, so always check the documentation for the up-to-date set of supported Windows modules. Of course, it’s always possible to just execute raw PowerShell as well:

ansible -i mywindowssystem, -c winrm -e ansible_winrm_server_cert_validation=ignore -u administrator -k -m win_shell -a "Get-Process"
mywindowssystem | SUCCESS | rc=0 >>

Handles  NPM(K)    PM(K)      WS(K)     CPU(s)     Id  SI ProcessName          
-------  ------    -----      -----     ------     --  -- -----------          
     28       4     2136       2740       0.00   2452   0 cmd                  
     40       5     1024       3032       0.00   2172   0 conhost              
    522      13     2264       5204       0.77    356   0 csrss                
     83       8     1724       3788       0.20    392   1 csrss                
    106       8     1936       5928       0.02   2516   0 dllhost              
     84       9     1412        528       0.02   1804   0 GoogleCrashHandler   
     77       7     1448        324       0.03   1968   0 GoogleCrashHandler64 
      0       0        0         24                 0   0 Idle                 
      

With the collection of modules already available and the help of utilities like Chocolatey, it’s already possible to manage the vast majority of a Windows estate with Ansible. This allows many of the same techniques and best practices already embedded in the Linux culture to make the transition over the fence, even for more complex actions such as joining or creating an Active Directory domain.

ansible -i mywindowssystem, -c winrm -e ansible_winrm_server_cert_validation=ignore -u administrator -k -m win_say -a "msg='I love my ansible, and it loves me'"

by James Hogarth at June 21, 2017 08:00 AM

xkcd.com

June 20, 2017

Fedora Magazine

Run OpenShift Locally with Minishift

OpenShift Origin is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OpenShift adds developer- and operations-centric tools on top of Kubernetes. This helps small and large teams rapidly develop applications, scale and deploy easily, and maintain an app throughout a long-term lifecycle. Minishift helps you run OpenShift locally by running a single-node OpenShift cluster inside a VM. With Minishift, you can try out OpenShift or develop with it daily on your local host. Under the hood it uses libmachine for provisioning VMs, and OpenShift Origin for running the cluster.

Installing and Using Minishift

Prerequisites

Minishift requires a hypervisor to start the virtual machine on which the OpenShift cluster is provisioned. Make sure KVM is installed and enabled on your system before you start Minishift on Fedora.

First, install libvirt and qemu-kvm on your system.

sudo dnf install libvirt qemu-kvm

Then, add yourself to the libvirt group to avoid sudo.

sudo usermod -a -G libvirt <username>

Update your current session for the group change to take effect.

newgrp libvirt

Next, start and enable the libvirtd and virtlogd services.

systemctl start virtlogd
systemctl enable virtlogd
systemctl start libvirtd
systemctl enable libvirtd

Finally, install the docker-machine-kvm driver binary to provision a VM. Then make it executable. The instructions below are using version 0.7.0.

sudo curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.7.0/docker-machine-driver-kvm -o /usr/local/bin/docker-machine-driver-kvm
sudo chmod +x /usr/local/bin/docker-machine-driver-kvm

Installation

Download the archive for your operating system from the releases page and unpack it. At the time of this writing, the latest version is 1.1.0:

wget https://github.com/minishift/minishift/releases/download/v1.1.0/minishift-1.1.0-linux-amd64.tgz
tar -xvf minishift-1.1.0-linux-amd64.tgz

Copy the contents of the directory to your preferred location.

cp minishift ~/bin/minishift

If your personal ~/bin folder is not in your PATH environment variable already (use echo $PATH to check), add it:

export PATH=~/bin:$PATH

Get started

Run the following command. The output will look similar to below:

$ minishift start
Starting local OpenShift cluster using 'kvm' hypervisor...
...
OpenShift server started.
The server is accessible via web console at:
https://192.168.99.128:8443

You are logged in as:
User:     developer
Password: developer

To login to your Minishift installation as administrator:
oc login -u system:admin

This process performs the following steps:

  • Downloads the latest ISO image based on boot2docker (~40 MB)
  • Starts a VM using libmachine
  • Downloads OpenShift client binary (oc)
  • Caches both oc and the ISO image into your $HOME/.minishift/cache folder
  • Finally, provisions OpenShift single node cluster in your workstation

Now, use minishift oc-env to display the command to add the oc binary to your PATH. The output of oc-env differs depending on the operating system and shell.

$ minishift oc-env
export PATH="/home/john/.minishift/cache/oc/v1.5.0:$PATH"
# Run this command to configure your shell:
# eval $(minishift oc-env)

Deploying an application

OpenShift provides various sample applications, such as templates, builder applications, and quickstarts. The following steps deploy a sample Node.js application from the command line.

First, create a Node.js example app.

oc new-app https://github.com/openshift/nodejs-ex -l name=myapp

Then, track the build log until the app is built and deployed.

oc logs -f bc/nodejs-ex

Next, expose a route to the service.

oc expose svc/nodejs-ex

Now access the application.

minishift openshift service nodejs-ex -n myproject

To stop the service, use the following command:

minishift stop

Refer to the official documentation for getting started with a single node OpenShift cluster.

Feedback

We’d love to get your feedback. If you hit a problem, please raise an issue in the issue tracker. Please search through the listed issues, though, before creating a new one. It’s possible a similar issue is already open.

Community

The community hangs out on the IRC channel #minishift on Freenode (https://freenode.net). You’re welcome to join, participate in the discussions, and contribute.

by Kumar Praveen at June 20, 2017 08:00 AM

June 16, 2017

Fedora Magazine

Ben Hart: How Do You Fedora?

We recently interviewed Ben Hart on how he uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

Who is Ben Hart?

Ben Hart is an information technology professional with over 19 years of experience. His first experience with Linux was with Mandrake on an old Compaq laptop. “Admittedly I did not keep it long, due to how well it didn’t want to work for me and supporting a full Windows environment made things difficult.”

One of his childhood heroes is Charles Babbage, the inventor of the first mechanical computer. Now Ben works as a Linux Systems Administrator. “I am a Linux SysAdmin working for Montana Interactive and I help manage over 60 servers that host over 300 web applications in the best state in the lower 48, Montana.”

Hart started his career in Alabama as phone tech support. He then spent several years as a technician for computer and network systems in the area of Arkansas and Mississippi. His winding trail finally landed him in Montana. “I again got hideously lucky and scored a gig here in Montana. So in addition to another life change in moving across the country I also scored a career change: Windows admin to Linux Admin.”

Ben loves the state of Montana and enjoys the great outdoors. “Since moving to Montana I hike, exploring the wilderness before it’s all gone is my primary, off-duty passion. But I also wood-work, perform shade-tree mechanicin’, hunt, fish, and enjoy my livestock.”

Ben Hart Livestock

When I asked Ben for more information about his livestock he talked about having both chickens and ducks. He claimed that, despite yard fowl reportedly having the intelligence of a three year old, they make amazingly good pets. Hart said, “not only do they keep the bug and weed population down but they give you fertilizer and food!”  He continued, “plus my oldest daughter is special needs and absolutely loves them, ducks especially calm her and is another reason to get outdoors.”

Ben is very new to using Fedora, but has already gotten involved in the community. “As a primary OS I’ve only been using Fedora for about 3 months now. I dual-booted for about two weeks before the switch and just recently deleted my windows partitions.” He struggled to find a decent Google Drive client, but found that he could use SpiderOak instead. Ben made the switch to Fedora because he was frustrated by blue screens of death and poor security, and because he just wanted something new.

The Fedora Community

Late one night roughly three months ago, Hart was doing some research to resolve frequent blue screen issues on his Windows 10 laptop when he realized it was time to make the switch. “I support about a 80/20 Linux over Windows environment now. Why do I continue to put up with this?” That same night he made the decision to help out in the Fedora community whenever and wherever he could.

One of the first things that struck Ben was how busy people in the Fedora community are. “The more senior folks seem very busy, but always willing to help. They are also very, very knowledgeable.” He feels more people might make the switch to Fedora if they knew how dedicated the people in the Fedora community are and how seriously they take their jobs.

“Since I can’t code my way out of a wet paper sack, and hardware interests me I joined the Infra-Apprentice group.” Recently a couple of the SOPs he created were committed upstream. Hart hopes to become involved with the automation sides of things and pick up some Python programming skills as well.

What Hardware and Software?

All of the servers Ben supports at work run CentOS. Hart told me, “my laptop is a Dell XPS 15 9550 running F25. 16GB RAM, a 500GB NVME SSD and a WD15 Thunderbolt dock powering two 34-inch ASUS LCDs.” Additionally, he uses a Logitech MX510 mouse and a Razer BlackWidow Ultimate mechanical keyboard. The only issue he has is that at times Plasma has issues with the two external monitors attached via the dock.

Ben Hart Desktop

What Software?

“I use Fedora 25, Slack, Evolution, KeePass, Konversation, Chrome, Konsole, KVM and the SpiderOak client daily.” Typically Ben just stays in the terminal for making text or configuration edits unless dealing with a huge file.  For huge files he uses cat to move it over to Sublime.  For visualizing git repos he makes use of gitKraken.  “I lucked into a Konsole theme while checking out /r/unixporn called bullet-train.”  He says it is absolutely, amazingly useful. “It has prompt changes for git, Perl, Python, Ruby and a lot more.  Plus it’s visually appealing.” Ben makes use of cowsay and fortune. Hart also makes use of pianobar for a CLI interface to Pandora.

Hart expanded on his use of SpiderOak by telling me he uses it exactly the same way he used Google Drive, but with additional features. “They are just a cloud storage provider, but in addition a zero-knowledge provider.  Data is deduped, compressed and encrypted.” While he noted that uploading takes a while his opinion was that it was worth it. “I use SpiderOak to store pictures from my android phone, storing Powershell and Bash scripts that I write, and pretty much anything I don’t want to lose.  I use it’s locally cached folder as my Documents, Work Stuff, Projects, and Desktop folders all in one.”

by Charles Profitt at June 16, 2017 08:00 AM

June 15, 2017

Fedora Magazine

PSA: Errors after updating libdb

This is an important public service announcement for Fedora 24, 25 and 26 (pre-release) users, courtesy of the Fedora QA team.

The short version

If you recently updated and got some kind of error or crash, and now you’re getting RPM database errors, try this command to fix it (you can use sudo for root privileges):

sudo rpm --rebuilddb

If that’s not enough, try:

sudo rm -f /var/lib/rpm/__db*
sudo rpm --rebuilddb

Now all should be well again. We do apologize for this. Note that if you’re unlucky and have the updates-testing repository enabled, there’s a chance you may run into issues with more than one update: the same recovery steps should work each time.

The longer version

There’s a rather subtle and tricky bug in libdb (the database that RPM uses) which has been causing problems with upgrades from Fedora 24/25 to Fedora 26. The developers have made a few attempts to fix this, and testing this week had indicated that the most recent attempt — libdb-5.3.28-21 — was working well. We believed the fix needed to be applied both on the ‘from’ and the ‘to’ end of any affected transaction, so we went ahead and sent the -21 update out to Fedora 24, 25 and 26.

Unfortunately it turns out that updating to -21 along with other packages can possibly result in a crash at the very end of the process, which in turn causes a (as it happens, minor and fully recoverable) problem in the RPM database.

We initially sent a -22 update to updates-testing which reverts the changes from -21, but then found out that updating from -21 to -22 causes a similar result. At that point we decided it was probably best just to cut our losses, stick with -21, and document the issues for anyone who encountered them, and we have removed the -22 update from updates-testing again. There will be a -23 build soon which restores the fixes from -21 and adds another upgrade-related fix.

So if you’re affected by RPM database issues with any libdb update, just doing the old “rebuild the RPM database” trick will resolve the problem:

sudo rm -f /var/lib/rpm/__db*
sudo rpm --rebuilddb

And if you did wind up in -22, you may want to go back to -21. To do this, run the following command:

sudo dnf --refresh distro-sync

Note that you may need to do the rebuilddb step after downgrading from -22 to -21, or after upgrading from -22 to -23 when -23 arrives.

It’s unfortunate that we have to break that one out of cold storage, but it should at least get you back up and working for now. We do apologize sincerely for this mess, and we’ll try and do all we can to fix it up ASAP.

by Adam Williamson at June 15, 2017 11:48 PM

Fedora 26 Atomic/Cloud Test Day June 20th

Now that the Fedora Beta has been officially released the Fedora Atomic Working Group and Fedora Cloud SIG would like to get the community together next week to find and squash some bugs. We are organizing a test day for Tuesday, June 20th.

For this event we’ll test both Atomic Host content and Fedora Cloud Base content. Vagrant Boxes will be available to test with as well. See the Fedora Atomic Host Pre-Release Page for links to artifacts for Fedora Atomic Host and the Alternative Downloads Beta Page for links for the Beta Cloud Base Images. We have qcow, AMI, and ISO images ready for testing.
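If you plan to test with Vagrant, a minimal sketch looks like this; the box file name below is a placeholder, so substitute whatever file you actually download from the pre-release page:

# add the downloaded box under a local name (file name is hypothetical)
vagrant box add fedora-26-beta-atomic ./Fedora-Atomic-Vagrant-Beta.box
vagrant init fedora-26-beta-atomic
vagrant up --provider=libvirt
vagrant ssh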

How do test days work?

A test day is an event where anyone can help make sure that changes in Fedora are working well in the upcoming release. Fedora community members often participate, but the public is welcome as well. To contribute, you only need to be able to download materials (including some large files) and read and follow technical directions step by step.

The wiki page for the Atomic/Cloud test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day app. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you on test day.

by Dusty Mabe at June 15, 2017 01:45 PM

June 14, 2017

Fedora Magazine

Quick tip: calculator in the Fedora Workstation overview

Most Fedora users are probably aware that Fedora Workstation ships with a basic calculator application. This small app has pretty much always been a part of the Fedora desktop. However, did you know that the Calculator app in Fedora Workstation also has a built-in search provider, allowing you to perform quick calculations directly in the overview? It’s great for quick, one-off calculations without having to launch a separate app.

Using the Overview calculator

To use the Search provider of the Calculator app, first open the overview by either clicking Activities in the top left of your desktop or pressing the Meta key on your keyboard. Type the calculation you wish to perform in the search box, and the results will appear:

Additionally, the predefined functions in the app can be used in the Overview as well. This allows more complex calculations using functions like Square Root (sqrt), Sine (sin), Cosine (cos), Tangent (tan), Natural Logarithm (ln), and Logarithm (log):
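For example, typing expressions like these straight into the overview search returns the result without opening the app (a small illustrative sample using the functions listed above):

2 + 3 * 4
sqrt(144)
sin(90) + cos(0)
log(1000)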

Enabling or Disabling the search provider

If the calculator results aren’t showing in the overview, it is likely that the search provider is disabled. You can enable or disable the search provider in the Search section of the main settings for Fedora Workstation:

 

by Ryan Lerch at June 14, 2017 10:03 AM


June 13, 2017

Fedora Magazine

Announcing the Release of Fedora 26 Beta

The Fedora Project is pleased to announce the immediate availability of Fedora 26 Beta, the next big step on our journey to the exciting Fedora 26 release in July.
Download the prerelease from our Get Fedora site:

Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for ARM devices like the Raspberry Pi 2 and 3:

Looking for Fedora Cloud Base? This is replaced by Atomic as a Fedora Edition for the container use case, but we still produce it for those of you who want to build up your own cloud-computing environment in a more traditional way — see Alternative Downloads below.

Fedora’s journey is not simply about updating one operating system with the latest and greatest packages. It’s also about innovation for the many different platforms represented in the Fedora Project: Workstation, Server, Atomic, and the various Spins. Coordinating the efforts across the many working groups is no small task, and serves as a testament to the talent and professionalism found within the Fedora community.

As we move into this Beta phase of the Fedora 26 release cycle, what can users expect?

Fedora-Wide Changes

Fedora, always on the path of innovation, will ship with the latest version of the GNU Compiler Collection, also known as GCC, bringing the latest language features and optimizations to users and to the software we build. The Go language is also updated to the latest version, 1.8, which includes 32-bit MIPS support and speed improvements.

One of the most important changes is the addition of “blivet-gui” to the installer. This provides a “building-blocks” style partitioning GUI for sysadmins and enthusiast users who are familiar with the details of storage systems.

A more traditional partitioning interface able to handle complex configurations

We have also made and included many improvements in security, improving the user experience and reducing the risks of digital life.

 

Fedora Editions

The Workstation edition of Fedora 26 Beta features GNOME 3.24, which includes important changes like Night Light, which changes the color temperature of the display based on time of day. It also includes the latest update of LibreOffice.

Updated Fedora Media Writer, able to search for and download Spins directly

Our Atomic Host edition also has many improvements, including more options for running containers, the latest version of the Docker container platform, the Cockpit manager and the atomic CLI. These improve the way containers are managed and make life as a sysadmin easier.

Spins and Labs

The Fedora Project is proud to announce two new variants: the LXQt Spin, a lightweight desktop supporting the latest version of the Qt libraries; and the Python Classroom Lab, a new Lab focused on teaching and learning the Python programming language. In addition, the Cinnamon Spin’s desktop is updated to the latest version.

Alternative Architectures and Other Downloads

We are also simultaneously releasing 64-bit F26 Beta for ARM (AArch64), Power (both little and big endian) and s390x architectures. You’ll also find minimal network installers and the Fedora 26 Beta Cloud Base image here:

What is the Beta Release?

A Beta release is code-complete and bears a very strong resemblance to the final release. The final release of Fedora 26 is expected in July. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can, and your feedback improves not only Fedora, but Linux and Free software as a whole.

Issues and Details

Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in #fedora-qa on Freenode. As testing progresses, common issues are tracked on the Common F26 Bugs page.

For tips on reporting a bug effectively, read how to file a bug report.

More information

For more detailed information about what’s new in the Fedora 26 Beta release, you can consult our Talking Points and the F26 Change Set. They contain more technical information about the new packages and improvements shipped with this release.

by Eduard Lucena at June 13, 2017 02:00 PM


June 09, 2017

Fedora Magazine

Alternate Tab GNOME shell extension

GNOME, the Fedora Workstation default environment, has a well known Alt+Tab feature to switch apps. This control groups windows for a single app together. For example, multiple terminal windows appear as a single terminal app. The Alt+` (backtick or backquote) shortcut switches between those windows in a single app. But a helpful GNOME extension, Alternate Tab, changes this behavior.

GNOME supports a Classic mode of operation. This mode allows GNOME version 3 to behave similarly to the previous version 2. The Classic mode groups a number of extensions together to achieve this goal. These extensions include the Alternate Tab extension.

The extension switches behavior of the Alt+Tab display. With this extension, each window appears separately. Some users find navigation easier with this behavior in place.

Installing the Alternate Tab extension

Fedora Workstation includes the Classic mode by default. That includes the Alternate Tab extension. If you have a different edition installed, you can still install this extension. There are many ways to enable the extension afterward. One way is to use the GNOME Tweak Tool. Use the Software app, or use sudo along with the dnf command, to install these packages if you don’t have them already:

sudo dnf install gnome-shell-extension-alternate-tab gnome-tweak-tool

Open the Tweak Tool from the GNOME Shell overview and go to the Extensions tab. Find AlternateTab and enable it using the On/Off switch:

Alternate Tab extension in GNOME Tweak Tool
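If you prefer the terminal, the extension can also be toggled with the gnome-shell-extension-tool helper. The UUID below is the one shipped by the gnome-shell-extensions package, so treat it as an assumption if your installation differs:

# enable the Alternate Tab extension for the current user (UUID assumed)
gnome-shell-extension-tool -e alternate-tab@gnome-shell-extensions.gcampax.github.com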

Once you follow this process, you can use the new Alt+Tab behavior. Notice how in this example, each terminal has its own entry in the selector control.

 

by Paul W. Frields at June 09, 2017 08:00 AM


June 08, 2017

Fedora Magazine

Flock submissions close soon

Flock is the annual conference where Fedora contributors meet up in person to collaborate and plan for future Fedora development. This year, Flock is in Cape Cod, Massachusetts, USA from the 29th of August to the 1st of September. If you are a Fedora contributor planning to submit a proposal for a talk or workshop, the submission window closes in a few days on June 15th 2017.

Máirín Duffy has an awesome post over on the Fedora Community Blog outlining the focus of Flock this year, as well as some great tips on putting together a strong proposal for Flock 2017.

by Ryan Lerch at June 08, 2017 11:27 PM

June 07, 2017

Fedora Magazine

How to verify a Fedora ISO file

After downloading a fresh Fedora ISO, it is a good habit to verify the downloaded file. The benefits of verification are two-fold: integrity and security. Verifying your ISO confirms that the file you downloaded was not corrupted during the download process. It also provides a check to help ensure that the ISO you downloaded is in fact one the Fedora Project has published.

Verify with Fedora Media Writer

If you use Fedora Media Writer to download your fresh Fedora media, the verification process is super simple. Fedora Media Writer automatically verifies your download using the appropriate SHA256 hash and MD5 checksum for the image. More details on this automatic verification are available in the Cryptography README in the Fedora Media Writer repository.

Screenshot of a Fedora Workstation ISO downloading in Fedora Media Writer

 

Verify an ISO manually

Verifying an ISO not obtained using Fedora Media Writer is a little more complicated. It requires you to download a CHECKSUM file for the specific ISO you have, and run a handful of commands in the terminal.

1. Get the CHECKSUM for your ISO

When you download a Fedora ISO from getfedora.org, there is a button on the splash page with a link to the CHECKSUM file. Download this file and save it in the same directory as the ISO image itself. However, if you previously downloaded an ISO, or got it from another source like a torrent, the verify page lists all the current CHECKSUMs.

2. Get the Fedora GPG keys & verify your CHECKSUM

The next step is to check the CHECKSUM file itself. To do this, first download the Fedora GPG public keys, and import them using the gpg utility:

curl https://getfedora.org/static/fedora.gpg | gpg --import

Next, use the gpg utility to verify the CHECKSUM file, for example:

gpg --verify-files Fedora-Workstation-25-1.3-x86_64-CHECKSUM

If your CHECKSUM checks out, you will see a line like this in the output:

gpg: Good signature from "Fedora 25 Primary (25) <[email protected]>"

3. Verify the ISO

Now that the CHECKSUM file itself is verified as valid, use it to check the ISO you downloaded, for example:

sha256sum -c Fedora-Workstation-25-1.3-x86_64-CHECKSUM

A line similar to the following is presented if the ISO you downloaded is valid (in this example, the ISO is Fedora-Workstation-Live-x86_64-25-1.3.iso):

Fedora-Workstation-Live-x86_64-25-1.3.iso: OK

by Ryan Lerch at June 07, 2017 08:00 AM


June 05, 2017

Fedora Magazine

Must-have GNOME extension: gTile

Tiling window managers have always interested me, but spending a lot of time tinkering with config files or learning how to wrangle them? Not so much. What I really want is a dead simple way to organize my windows and still use a friendly desktop. And I found it, finally: the gTile extension for GNOME.

If you are using GNOME, pop open Firefox and head to the extension page. Click “on” and go ahead and install the extension.

gTile Overlay

Once it’s installed, you can click the tile icon in the GNOME menu up top. You’ll see a little overlay with a bunch of squares. The gTile overlay will hover over an open window, and you can click one (or more) of the squares to place the window in that area.

The row of icons that display dimensions (e.g. “2×2”, “4×4”) controls how the screen is tiled. If you just want to divvy up your screen into sixteen equal chunks, choose 4×4. If you have a lot of screen real estate and want many more windows, choose 6×6. (I’m using gTile with a 4K monitor, and go with 3×2 for six windows.)

The really nice thing about gTile is that you’re not locked into tiling. New windows aren’t automatically put on the grid, and existing windows aren’t “locked,” so you can have the best of both worlds.

gTile works well with multiple workspaces, and with multiple monitors.

If you’re a GNOME user and want a lightweight solution for tiling windows, give gTile a try. It’s been a great tool for me, and maybe it’ll make you more productive as well.

by Joe Brockmeier at June 05, 2017 08:00 AM


June 02, 2017

Fedora Magazine

Riley Brandt: How do you Fedora?

We recently interviewed Riley Brandt on how he uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

Who is Riley Brandt?

Riley Brandt uses a Mac running OS X and Adobe software for his work at a university in western Canada. “The job can be really amazing at times. I’ve watched a doctor perform brain surgery using a robotic arm, been in a lab used for quantum teleportation, met Olympic athletes, and taken photos from a helicopter.” However, for his personal photography work he uses Fedora and FOSS software to process and organize his photographs.

Riley Brandt Photographer

Brandt tried Linux for the first time in 2007. At that time he dual booted Windows and Ubuntu 7.04. He switched to Linux full time with the release of Ubuntu 7.10. Riley switched to Fedora, though, with version 21. “I was a big fan of the GNOME desktop, and I was tired of Ubuntu GNOME always being a version behind.”

His childhood heroes were filmmakers like John Carpenter and John Hughes. When asked for his favorite movies, Riley was hard pressed to name just two. “I love movies, so I don’t really have a favorite. But a couple that I loved growing up were Ferris Bueller’s Day Off and Big Trouble in Little China.”

Riley counts taekwondo and pixel art as hobbies. “I have been practicing taekwondo for the last year. It’s super challenging, but really rewarding.” His interest in pixel art was inspired by video games. “I think there is a lot of beautiful art work in video games and want to try and create some of my own.”

The Fedora community

By the time Brandt was running Fedora 22, he was giving back to the community by submitting bug reports, answering forum questions and writing articles. Riley also has a fantastic series of tutorials on YouTube that covers GIMP, Darktable and general photography workflow in Fedora.

Riley Brandt Photographer

Brandt was initially worried that he would have a more difficult time getting help when he was making the transition from Ubuntu to Fedora. “I was worried that since Fedora has a smaller user base than Ubuntu, I would have trouble getting support. But that wasn’t the case at all. Fedora users were quick to respond to my questions and full of useful info.”

His experience with the Fedora community helped him realize it should not just be the desktop environment or package manager that influences the decision on what distro to use. “Not enough people think about the community. Fedora’s community might be its biggest selling point.”

Riley thinks it’s awesome to see maintainers make changes to Fedora that reflect feedback from the community. “After I made a YouTube video about Fedora for photographers, Luya Tshimbalanga, a Fedora Design Suite maintainer, added extra GIMP plugins to the default install.”

As a content creator, Brandt would like to see more applications provide packages for Fedora. “I would like to convince developers of creative applications like Photomatix HDR, Aseprite, and the Blackmagic DaVinci Resolve video editor to provide packages for Fedora, not just for Ubuntu.”

What hardware and software?

Brandt has a desktop computer equipped with an Intel Core i7 processor, 16GB of RAM and an AMD RX 470 video card. His laptop is the Dell XPS 13. Both run Fedora 25 Design Suite. If you are interested you can find Riley’s complete photography workflow in his 2015 blog post here.

The key photography apps are Geeqie, Rapid Photo Downloader, Darktable and GIMP. For recording his video tutorials he uses SimpleScreenRecorder and Kdenlive. “For an image viewer, I am just looking for something simple, fast and color managed. The image viewer just needs to be able to open all major photo formats quickly, so I can review my exported images. Geeqie does all that wonderfully.” Riley does not use Shotwell or Eye of GNOME because he requires color management.

Riley Brandt Desktop

by Charles Profitt at June 02, 2017 08:00 AM


May 31, 2017

Fedora Magazine

EasyTAG: Organize your music on Fedora

Audio files in formats such as MP3, AAC, and Ogg Vorbis have made music ubiquitous and portable. With the explosive growth of storage capacity, you can store huge libraries of music. But how do you keep all that music organized? Just tag your music. Then you can access it easily locally and in the cloud. EasyTAG is a great choice for tagging music and is available in Fedora.

Many audio file types support tagging, including:

  • MP4, MP3, MP2
  • Ogg Speex, Ogg Vorbis, Ogg Opus
  • FLAC
  • Musepack
  • Monkey’s Audio

Installing EasyTAG

EasyTAG is easy to install from Fedora repositories. On Fedora Workstation, use the Software tool to find and install it. Or in a terminal, use the sudo command along with dnf:

sudo dnf install easytag

Then launch the program from the Software tool or the application menu for your desktop. EasyTAG’s straightforward interface works well in most desktop environments.

EasyTAG main screen

Tagging music

Select a folder where you have music you want to tag. By default, EasyTAG will also load subfolders. You can select each file and add tag information such as the artist, title, year, and so on. You can also add images to a file in JPG or PNG format, which most players understand.

Files you have altered appear in bold in the file listing. To save each, press Ctrl+S. You can also select the entire list and use Ctrl+Shift+S to save all the files at once.

One of the most powerful features of EasyTAG is the file scanner. The scanner recognizes patterns based on a template you provide. Once you provide the right template and scan files, EasyTAG automatically tags all of them for you. Then you can save them in bulk. This saves a lot of time and frustration when dealing with large libraries.
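As a rough sketch, a library laid out as artist/album/track number - title could be matched in the Fill Tag scanner with a mask along these lines (the %a, %b, %n and %t codes stand for artist, album, track number and title; verify them against the legend in the scanner dialog):

%a/%b/%n - %t

Applied to a path such as Nirvana/Nevermind/01 - Smells Like Teen Spirit.mp3, a mask like this fills the artist, album, track and title fields in one pass, ready to save in bulk.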

When you upload your tagged files to a cloud service, your tags allow you to quickly find and play the music you want anytime. Happy tagging!

by Paul W. Frields at May 31, 2017 08:00 AM


May 29, 2017

Fedora Magazine

Flock 2017 registration and submissions open

Planning is heavily underway for the annual Fedora contributors conference, Flock 2017. The conference is in Cape Cod, Massachusetts USA from August 29 – September 1, 2017. If you’re a contributor, or want to become one, here are some ways you can get involved.

Flock registration

First, registration is now open on the website. The registration this year includes a small fee to offset swag and setup costs per attendee. The fee for USA attendees is $25.

This fee has been scaled via the Big Mac Index to other countries and geographic areas. This means the fee in each country represents roughly the same level of spending, rather than the exact equivalent in local currency. That makes it easier for people in each area to register.

When you register you also have the option to cover other people’s fees. This means anyone can contribute to make it easier for someone else to attend.

Submissions: Talks and workshops

Second, the call for submissions for talks and workshops is also open. However, before submitting, take heed. This year’s conference is highly focused on getting things done! So instead of “state of the project” talks, submissions are encouraged to focus on building skills and participation. Here are some examples of better submission topics:

  • Setting up and using Fedora Atomic Host
  • Gathering user feedback on a Fedora web app
  • Writing package tests in dist-git using Ansible

You can submit your talk or workshop on the same Flock registration site until June 15, 2017.

Other resources

There is also a mailing list for communicating with other attendees, as well as a Freenode IRC channel. The website also lists several hints for transportation to the event venue.

We encourage you to get your registration and submission in as soon as you can — the conference will fill up quickly!

 

by Paul W. Frields at May 29, 2017 02:24 PM


May 26, 2017

Fedora Magazine

Secure your webserver with improved Certbot

A year and a half ago the Let’s Encrypt project entered public beta. Just over a year ago, as the project left beta, the letsencrypt client was spun out of ISRG, which continues to maintain the Let’s Encrypt servers, into an EFF project and renamed certbot. The mission remained the same, however: to provide quick, simple access to free domain validated certificates, in order to encrypt the internet.

This week marked a significant point in the development of Certbot as the recommended Let’s Encrypt client, with the 0.14 release of the tool.

When the letsencrypt client was first released, it only supported using the webroot of an existing HTTP server, a standalone mode where letsencrypt listens temporarily on port 80 to carry out the challenge, or a manual method where the admin puts the presented challenge into place before the ACME server proceeds to verify it. Now the letsencrypt client is even more functional.

Apache HTTPD plugin for Certbot

When the client was changed to be an EFF project, one of the first major features that appeared was the Apache HTTPD plugin. This plugin lets the Certbot application automatically configure the webserver to use certificates for one or more VirtualHosts.

NOTE: If you encounter an issue with SELinux in enforcing mode while using the plugin, use the setenforce 0 command to switch to permissive mode when running the certbot --apache command. Afterward, switch back to enforcing mode using setenforce 1. This issue will be resolved in a future update.
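As a concrete sketch of that workaround:

sudo setenforce 0     # temporarily switch SELinux to permissive mode
sudo certbot --apache
sudo setenforce 1     # return to enforcing mode afterward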

When you start the Apache httpd server with mod_ssl, the service automatically generates a self signed certificate.

self signed certificate

Default mod_ssl self signed certificate not trusted by the browser.

Next, run this command:

certbot --apache

Certbot prompts for a few questions. You can also run it non-interactively and provide all the arguments in advance.

Questions at the terminal
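A hedged example of the non-interactive form mentioned above, with the domain and email address as placeholders:

sudo certbot --apache --non-interactive --agree-tos \
    -m admin@example.com -d example.com -d www.example.com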

After a few moments, the Apache server has a valid certificate in place.

Valid SSL certificate in place

Nginx plugin for Certbot

From my testing, the nginx plugin requires the domain name to be present in the configuration, whereas the httpd plugin modifies the default SSL virtual host.

The process is similar to the httpd plugin. If you do not provide arguments on the command line, answer a few questions, and the instance is then protected with a valid SSL certificate.
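For instance, with the domain as a placeholder:

sudo certbot --nginx -d example.com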

Python 3 compatibility

The Certbot developers have put in a significant amount of work over the past several months to make Certbot fully compatible with Python 3. At the 0.12 release, the unit tests carried out when building the RPMs passed. However, the developers were not yet ready to declare it done, since they noticed some edge case failures in real world testing. As of the 0.14 release, the developers have declared Certbot compatible with Python 3. This change brings it in line with the default, preferred Python version in Fedora.

To minimize possible issues, Rawhide and the upcoming Fedora 26 will be switched over to the Python 3 version of Certbot first, whilst Fedora 25 remains on the Python 2 version as the default.

Getting hooked on renewals

A recent update added a systemd timer to automate renewals of the certificates. The timer checks each day to see if any certificates need updating. To enable it, use this command:

systemctl enable --now certbot-renew.timer

The configuration in /etc/sysconfig/certbot can change the behavior of the renewals. It includes options for hooks that run before and after the renewal, and another hook that runs for each certificate processed. These are global behaviors. Optionally, you can configure hooks in the configuration files in /etc/letsencrypt/renewal on a per-certificate basis.
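As an illustrative sketch only (the variable names below reflect my reading of the Fedora packaging, so check your own /etc/sysconfig/certbot before relying on them):

# /etc/sysconfig/certbot (hypothetical excerpt)
PRE_HOOK="--pre-hook 'systemctl stop httpd'"
POST_HOOK="--post-hook 'systemctl start httpd'"
RENEW_HOOK="--renew-hook 'systemctl reload httpd'"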

Some form of automation is advised, whether the systemd timer or another method, to ensure that certificates are refreshed periodically and don’t expire by accident.

Testing SSL security

A test of SSL security with CentOS 7 and the Apache plugin provided a C rating. The nginx plugin resulted in a B rating.

Of course, the Red Hat defaults lean towards compatibility. If there is no need to support older clients, you can tighten up the list of permitted ciphers.

Using this configuration on Fedora 25 on my own blog gets an A+ rating:

SSLProtocol all -SSLv2 -SSLv3
SSLCipherSuite "EECDH+aRSA+AESGCM EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA !aNULL !eNULL !LOW !MEDIUM !SEED !3DES !CAMELLIA !MD5 !EXP !PSK !SRP !DSS !RC4"
SSLCertificateFile /etc/pki/tls/certs/www-hogarthuk.com-ssl-bundle.crt
SSLCertificateKeyFile /etc/pki/tls/private/www-hogarthuk.com-decrypted.key

<IfModule mod_headers.c>
      Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
</IfModule>

What’s next?

There are always bugs to fix and improvements to make. Apart from improvements to SELinux compatibility as mentioned above, there’s also a future to look forward to. DNS based validation will make it easier to take Certbot beyond web servers. Mail, jabber, load balancers and other services can then more easily use Let’s Encrypt certificates using the Certbot client.

by James Hogarth at May 26, 2017 08:00 AM
