Fedora Hub

August 18, 2017

Fedora Magazine

Installing Ring in Fedora 26

Many communication platforms promise to link people together by video, voice, and data. But few of them respect user privacy and freedom to any useful extent.

Ring is a universal communication system for any platform. But it is also a fully distributed system that protects users’ confidentiality. One protective feature is that it doesn’t store users’ personal data in a centralized location. Instead, it decentralizes this data through a combination of OpenDHT and Ethereum blockchain technology. In addition to being distributed, it has other unique features for communication:

  • Cross-platform (works on Linux, Windows, macOS, and Android)
  • Uses only free and open source software
  • Uses standard security protocols and end-to-end encryption
  • Works with desktop applications (like GNOME Contacts)

In July the Savoir-faire Linux team released the stable 1.0 version of Ring. Although it isn’t included in Fedora due to some of its requirements, the Savoir-faire team graciously provides a package for the Fedora community.

How to install Ring

To install, open a terminal and run the following commands:

sudo dnf config-manager --add-repo https://dl.ring.cx/ring-nightly/fedora_26/ring-nightly.repo
sudo dnf install ring

If you’re using an older version of Fedora, or an entirely different platform, check out the download page.

How to set up a RingID

Now that it’s installed, you’re ready to create an account (or link a pre-existing one). The RingID allows other users to locate and contact you while still protecting your privacy. To create one:

  1. First, click on Create Ring Account.
  2. Next, add the required information.
  3. Finally, click Next.

The tutorial page offers more information on setting up this useful app, such as how to secure your account and how to add devices that are all notified when you receive a call.

 

by rdes at August 18, 2017 08:00 AM

August 17, 2017

Fedora Magazine

5 apps to install on your Fedora Workstation

A few weeks ago, Fedora 26 was released. Every release of Fedora brings new updates and new applications into the official software repositories. Whether you were already a Fedora user and upgraded or you are a first-time user, you might be looking for some cool apps to try out on your Fedora 26 Workstation. In this article, we’ll round up five apps that you might not have known were available in Fedora.

Try out a different browser

By default, Fedora includes the Firefox web browser. But starting in Fedora 25, Chromium (the open source version of Chrome) is also packaged in Fedora. You can learn how to install and start using Chromium below.

How to install Chromium in Fedora

Sort and categorize your music

Do you have a Fedora Workstation filled with local music files? When you open them in a music player, is the metadata missing or just plain wrong? MusicBrainz is the Wikipedia of music metadata, and you can take back control of your music by using Picard. Picard is a tool that works with the MusicBrainz database to pull in correct metadata to sort and organize your music. Learn how to get started with Picard in Fedora Workstation below.

Picard brings order to your music library

Get ready for the eclipse

August 21st is the big day for the total solar eclipse in North America. Want to get a head start by knowing the sky before it starts? You can map out the sky by using Stellarium, an open source planetarium application available in Fedora now. Learn how to install Stellarium before the skies go dark in this article.

Track the night sky with Stellarium on Fedora

Control your camera from Fedora

Have an old camera lying around? Or maybe you want to upgrade your webcam by using an existing camera? Entangle lets you take control of your camera, all from the comfort of your Fedora Workstation. You can even adjust aperture, shutter speed, ISO settings, and more. Check out how to get started with it in this article.

Tether a digital camera using Entangle

Share Fedora with a friend

One of the last things you might want to do with your Fedora Workstation is share it! With the Fedora Media Writer, you can create a USB stick loaded with any Fedora edition or spin of your choice and share it with a friend. Learn how to start burning your own USB drives in this how-to article below.

How to make a Fedora USB stick
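
All five applications are available in the official Fedora repositories, so you can also install them straight from a terminal. A quick sketch, assuming the usual Fedora package names:

sudo dnf install chromium picard stellarium entangle mediawriter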

by Justin W. Flory at August 17, 2017 08:00 AM

August 14, 2017

Fedora Magazine

Fedora Classroom Session 4

The Fedora Classroom sessions continue this week. You can find the general schedule for sessions on the wiki. You can also find resources and recordings from previous sessions there.

Here are details about this week’s session on Friday, August 18 at 1300 UTC.

Instructor

Eduard Lucena is an IT Engineer and an Ambassador from the LATAM region. He started working with the community by publishing a simple article in the Magazine. Right now he actively works in the Marketing group and aims to be a FAmSCo member for the Fedora 26 release. He works in the telecommunication industry and uses the Fedora Cinnamon Spin as his main desktop, both at work and home. He isn’t a mentor, but tries to on-board people into the project by teaching them how to join the community in any area. His motto is: “Not everything is about the code.”

Topic: Vim 101

Like many classic utilities developed during UNIX’s early years, vi has a reputation for being hard to navigate. Bram Moolenaar’s enhanced and optimized clone, Vim (“Vi IMproved”), is the default editor in almost all UNIX-like systems. The world has come a long way since Vim was written, and system resources have grown, but many, including Fedora, still stick with the Vim editor.

This hands-on session will teach you about the different Vim versions packaged in Fedora. Then, we’ll go deeper into how to use this powerful tool. We’ll also teach you how not to flounder trying to close the editor!

Joining the session

Since this is a hands-on session, you’ll want to have a Linux installation to follow it properly. Preferably you’ll have Vim installed with full features. If you don’t have it, don’t worry — you’ll learn how to install it and what the differences are. No prior knowledge of the Vim editor is required.

This session will be held via IRC. The following information will help you join the session:

We hope you can attend and enjoy this experience from some of the people who work in the Fedora Project.


Photograph used in feature image is San Simeon School House by Anita Ritenour — CC-BY 2.0.

by Eduard Lucena at August 14, 2017 08:43 PM

August 11, 2017

Fedora Magazine

Fedora August 2017 election change

UPDATE (2017-Aug-14): The Fedora Engineering Steering Committee (FESCo) voting also had to be rescheduled due to a candidate listing error. It will run during the same time as the FAmSCo voting period listed below. You can read more information here.

As seen earlier this week, the Fedora community holds elections in several groups. One group that elects seats this month is the Fedora Ambassador Steering Committee (FAmSCo).

The FAmSCo election started along with others this week. However, due to a technical error, the voting system prevented some eligible people from voting. Contributors have now fixed this issue. Fedora Program Manager Jan Kurik announced the issue and the fix on the Ambassadors’ mailing list.

What does this mean?

Of course the project wants to ensure the election is open and fair for all. Therefore, the FAmSCo election will restart next week. The new voting period begins on Tuesday, August 15 at 0000 UTC (click the link for local time). It ends on Monday, August 21 at 2359 UTC.

Votes from the original FAmSCo election do not count. Only the new votes are valid. So if you voted in the original election, you must vote again to be counted.

What about other elections?

As mentioned in the announcement, the Fedora Council and FESCo elections continue unaffected. The announcement here contains additional details.

by Paul W. Frields at August 11, 2017 03:40 PM

How to upgrade from Fedora 25 Atomic Host to 26

In July the Atomic Working Group put out the first and second releases of Fedora 26 Atomic Host. This article shows you how to prepare an existing Fedora 25 Atomic Host system for Fedora 26 and do the upgrade.

If you really don’t want to upgrade to Fedora 26, see the later section: Fedora 25 Atomic Host Life Support.

Preparing for Upgrade

Before you perform an update to Fedora 26 Atomic Host, verify that at least a few GiB of free space exists in the root filesystem. The update to Fedora 26 may retrieve more than 1 GiB of new content (not shared with Fedora 25), so it needs plenty of free space.

Luckily, upstream OSTree has implemented some filesystem checks to ensure an upgrade stops before it fills up the filesystem.

The example here is a Vagrant box. First, check the free space available:

[vagrant@host ~]$ sudo df -kh /
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/atomicos-root  3.0G  1.4G  1.6G  47% /

Only 1.6G free means the root filesystem probably needs to be expanded to make sure there is plenty of space. Check how much free space the volume group has by running the following commands:

[vagrant@host ~]$ sudo vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  atomicos   1   2   0 wz--n- 40.70g 22.60g
[vagrant@host ~]$ sudo lvs
  LV          VG       Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool atomicos twi-a-t--- 15.09g             0.13   0.10                            
  root        atomicos -wi-ao----  2.93g

The volume group on the system in question has 22.60g free and the atomicos/root logical volume is 2.93g in size. Increase the size of the root logical volume by 3 GiB:

[vagrant@host ~]$ sudo lvresize --size=+3g --resizefs atomicos/root
  Size of logical volume atomicos/root changed from 2.93 GiB (750 extents) to 5.93 GiB (1518 extents).
  Logical volume atomicos/root successfully resized.
meta-data=/dev/mapper/atomicos-root isize=512    agcount=4, agsize=192000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=768000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 768000 to 1554432
[vagrant@host ~]$ sudo lvs
  LV          VG       Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool atomicos twi-a-t--- 15.09g             0.13   0.10                            
  root        atomicos -wi-ao----  5.93g

The lvresize command above also resized the filesystem all in one shot. To confirm, check the filesystem usage:

[vagrant@host ~]$ sudo df -kh /
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/atomicos-root  6.0G  1.4G  4.6G  24% /

Upgrading

Now the system should be ready for upgrade. If you do this on a production system, you may need to prepare services for downtime.

If you use an orchestration platform, there are a few things to note. If you use Kubernetes, refer to the later section on Kubernetes: Upgrading Systems with Kubernetes. If you use OpenShift Origin (i.e., set up by the openshift-ansible installer), the upgrade should not need any preparation.

Currently, the system is on Fedora 25 Atomic Host, using the fedora-atomic/25/x86_64/docker-host ref:

[vagrant@host ~]$ rpm-ostree status
State: idle
Deployments:
● fedora-atomic:fedora-atomic/25/x86_64/docker-host
                Version: 25.154 (2017-07-04 01:38:10)
                 Commit: ce555fa89da934e6eef23764fb40e8333234b8b60b6f688222247c958e5ebd5b

To do the upgrade, the location of the Fedora 26 repository needs to be added as a new remote (like a git remote) so OSTree knows about it:

[vagrant@host ~]$ sudo ostree remote add --set=gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-26-primary fedora-atomic-26 https://kojipkgs.fedoraproject.org/atomic/26

As the command shows, a new remote named fedora-atomic-26 was added with a remote URL of https://kojipkgs.fedoraproject.org/atomic/26. The gpgkeypath variable was also set in the configuration for the remote. This tells OSTree to verify commit signatures when downloading from the remote, something new that was enabled for Fedora 26 Atomic Host.

Now that the system has the fedora-atomic-26 remote, the upgrade can be performed:

[vagrant@host ~]$ sudo rpm-ostree rebase fedora-atomic-26:fedora/26/x86_64/atomic-host

Receiving metadata objects: 0/(estimating) -/s 0 bytes
Signature made Sun 23 Jul 2017 03:13:09 AM UTC using RSA key ID 812A6B4B64DAB85D
  Good signature from "Fedora 26 Primary <fedora-26-primary@fedoraproject.org>"

Receiving delta parts: 0/27 5.3 MB/s 26.7 MB/355.4 MB
Signature made Sun 23 Jul 2017 03:13:09 AM UTC using RSA key ID 812A6B4B64DAB85D
  Good signature from "Fedora 26 Primary <fedora-26-primary@fedoraproject.org>"

27 delta parts, 9 loose fetched; 347079 KiB transferred in 105 seconds                                                                                                                                            
Copying /etc changes: 22 modified, 0 removed, 58 added
Transaction complete; bootconfig swap: yes deployment count change: 1
Upgraded:
  GeoIP 1.6.11-1.fc25 -> 1.6.11-1.fc26
  GeoIP-GeoLite-data 2017.04-1.fc25 -> 2017.06-1.fc26
  NetworkManager 1:1.4.4-5.fc25 -> 1:1.8.2-1.fc26
  ...
  ...
  setools-python-4.1.0-3.fc26.x86_64
  setools-python3-4.1.0-3.fc26.x86_64
Run "systemctl reboot" to start a reboot
[vagrant@host ~]$ sudo reboot
Connection to 192.168.121.217 closed by remote host.
Connection to 192.168.121.217 closed.

After the reboot, the status looks like this:

$ vagrant ssh
[vagrant@host ~]$ rpm-ostree status
State: idle
Deployments:
● fedora-atomic-26:fedora/26/x86_64/atomic-host
                Version: 26.91 (2017-07-23 03:12:08)
                 Commit: 0715ce81064c30d34ed52ef811a3ad5e5d6a34da980bf35b19312489b32d9b83
           GPGSignature: 1 signature
                         Signature made Sun 23 Jul 2017 03:13:09 AM UTC using RSA key ID 812A6B4B64DAB85D
                         Good signature from "Fedora 26 Primary <fedora-26-primary@fedoraproject.org>"

  fedora-atomic:fedora-atomic/25/x86_64/docker-host
                Version: 25.154 (2017-07-04 01:38:10)
                 Commit: ce555fa89da934e6eef23764fb40e8333234b8b60b6f688222247c958e5ebd5b
[vagrant@host ~]$ cat /etc/fedora-release
Fedora release 26 (Twenty Six)

The system is now on Fedora 26 Atomic Host. If this were a production system, now would be a good time to check services, most likely running in containers, to see if they still work. If a service didn’t come up as expected, you can use the rollback command: sudo rpm-ostree rollback.
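
For example, a minimal rollback sketch (the previous Fedora 25 deployment becomes the default again on the next boot):

sudo rpm-ostree rollback
sudo systemctl reboot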

To track updated commands for upgrading Atomic Host between releases, visit this wiki page.

Upgrading Systems with Kubernetes

Fedora 25 Atomic Host ships with Kubernetes v1.5.3, and Fedora 26 Atomic Host ships with Kubernetes v1.6.7. Before you upgrade systems participating in an existing Kubernetes cluster from 25 to 26, you must make a few configuration changes.

Node Servers

In Kubernetes 1.6, the --config argument is no longer valid. If any systems have a KUBELET_ARGS variable in /etc/kubernetes/kubelet that points to the manifests directory using the --config argument, you must change the argument name to --pod-manifest-path. Also add an additional argument to KUBELET_ARGS: --cgroup-driver=systemd.

For example, if the /etc/kubernetes/kubelet file started with the following:

KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/manifests --cluster-dns=10.254.0.10 --cluster-domain=cluster.local"

Then change it to:

KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --pod-manifest-path=/etc/kubernetes/manifests --cluster-dns=10.254.0.10 --cluster-domain=cluster.local --cgroup-driver=systemd"

Master Servers

Staying With etcd2

From Kubernetes 1.5 to 1.6, upstream shifted from using version 2 of the etcd API to version 3. The Kubernetes documentation instructs users to add two arguments to the KUBE_API_ARGS variable in the /etc/kubernetes/apiserver file:

--storage-backend=etcd2 --storage-media-type=application/json

This ensures that Kubernetes continues to find any pods, services or other objects stored in etcd once the upgrade has been completed.

Moving To etcd3

You can migrate etcd data to the v3 API later. First, stop the etcd and kube-apiserver services. Then, assuming the data is stored in /var/lib/etcd, run the following command to migrate to etcd3:

# ETCDCTL_API=3 etcdctl --endpoints https://YOUR-ETCD-IP:2379 migrate --data-dir=/var/lib/etcd

After the data migration, remove the --storage-backend=etcd2 and --storage-media-type=application/json arguments from the /etc/kubernetes/apiserver file and then restart etcd and kube-apiserver services.
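
Putting those steps together, here is a minimal sketch, assuming both services run under systemd and the arguments appear in /etc/kubernetes/apiserver exactly as added above:

# systemctl stop kube-apiserver etcd
# ETCDCTL_API=3 etcdctl --endpoints https://YOUR-ETCD-IP:2379 migrate --data-dir=/var/lib/etcd
# sed -i 's| --storage-backend=etcd2 --storage-media-type=application/json||' /etc/kubernetes/apiserver
# systemctl start etcd kube-apiserver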

Fedora 25 Atomic Host Life Support

The Atomic WG decided to keep updating the fedora-atomic/25/x86_64/docker-host ref, so a new update is created each day when Bodhi runs within Fedora. However, it is recommended you upgrade systems to Fedora 26, because future testing and development focus on Fedora 26 Atomic Host. Fedora 25 OSTrees won’t be explicitly tested.

Conclusion

The transition to Fedora 26 Atomic Host should be a smooth process. If you have issues or want to be involved in the future direction of Atomic Host, please join us in IRC (#atomic on freenode) or on the atomic-devel mailing list.

by Dusty Mabe at August 11, 2017 12:28 PM

August 09, 2017

Fedora Magazine

William Beauford and Bryan Rhodes: How Do You Fedora?

We recently interviewed William Beauford and Bryan Rhodes on how they use Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

Who is William Beauford?

William Beauford

William Beauford is a software developer. He currently works on a video communication platform for inmates. The program allows inmates to communicate with their friends and family. He started using Linux in high school, beginning with Ubuntu mostly as an on-and-off hobby. William switched to Linux full time in 2015.

William is inspired by Chris Jericho. “I’ve always admired how Chris Jericho traveled the world learning many different styles to create his own. I try to mirror that by learning different programming languages, frameworks, etc. to build up my skill set.”

William values loyalty, honesty, passion, compassion and guile. His favorite food is hamburgers. The Shawshank Redemption and The Dark Knight are his favorite movies.

Who is Bryan Rhodes?

Bryan and his wife Emory

Bryan Rhodes is a software engineer who works for an inmate communication company. “I am the lead software engineer for a video visitation system that allows inmates to visit with their loved ones at home through messaging and WebRTC video chats,” Bryan stated. He has been in the industry for four years and really enjoys his job. “I think my favorite part of our product is that our entire platform runs on 100% Linux.”

Bryan started using Linux in 2004. His first distribution was Fedora Core 2. He danced between several different distributions before settling on Fedora. “I started off using Fedora Core 2 and switched between Slackware, Gentoo and Ubuntu 4.10. I eventually settled with Fedora Core 5 and have used it primarily over the years except for on production servers where I use CentOS.”

Bryan hopes that when he ages he still has a strong passion for technology like his childhood hero Steve Wozniak. “I really love his curious mind, hacker outlook and that personable feeling you get from watching his interactions with people around him. I hope that as I age, I age with a continued love for technology like he has.”

Bryan’s favorite food is sushi. “My wife and I have a routine of eating sushi for our date nights and destressing after a long week, so I enjoy it for the taste and the quality time.”

His latest fascination and hobby is drones. “I have been using drones and Raspberry Pi’s to create geospatial maps of areas. Right now I am working on mapping my family’s 840 acre farm to give later generations an idea of how it has changed over time.”

The Fedora community

William is impressed with how easy it is to find a solution. “Most of the time, someone has already been through the issue you’re experiencing and knows exactly how to fix it,” he says.

Bryan is impressed with how fast Fedora innovates and patches. He said, “I think one of the things that really stuck out to me with Fedora so many years ago was when Xen was first growing and gaining traction. Fedora seemed to develop and innovate more features along with quickly patching bugs than any other OS out there.” Bryan and William would like to see Fedora become a rolling release.

What hardware?

William recently built his own gaming PC. It is equipped with an AMD Ryzen 7 processor and an Nvidia GTX 1080 Ti video card. His current laptop is a Lenovo ThinkPad X201 equipped with a Core i7 processor, 8GB of RAM and a 256GB SSD drive. William says, “This laptop has had the best compatibility I’ve ever seen during my experience. It’s a little old but it gets the job done and I’m looking forward to upgrading to a Dell XPS 13 when the time comes.”

The Ryzen 7 machine is equipped with a Fractal Design Celsius S36 AIO cooler. It has 16GB of Corsair Vengeance RAM, a Gigabyte GA-AX370 Gaming 5 motherboard, and an Asus ROG STRIX GTX 1080 Ti. The case is a red NZXT H440. Beauford is a big Star Wars fan, so the hostname is Kylo-Ren.

Bryan also uses a Lenovo. “I am currently running an older, but stable, Lenovo x201 Tablet laptop.” The laptop is equipped with 8GB of RAM and a 256GB SSD. He says, “It does everything I need it to do while also being lightweight.”

What software?

William is currently running Fedora 25. William uses VSCode with an Emacs plugin to write the majority of his code. He says, “To spice up my mundane bash shell I use Powerline to give it a little more personality.” He adds Dash to Dock and Applications Menu extensions to his GNOME desktop.

Bryan runs Fedora 25 on the X201. “I write all of my code in VSCode these days after many years of using Emacs in my GNOME terminal,” he says. He likes VSCode because Java, Golang, PHP and SQL plugins are easy to install. All of his code is packaged into Docker containers. The containers are pushed to a Docker registry running on top of CentOS and then into a Kubernetes cluster.

by Charles Profitt at August 09, 2017 10:00 AM

August 08, 2017

Fedora Magazine

Fedora Classroom Session 3

The Fedora Classroom sessions continue this week. You can find the general schedule for sessions on the wiki. You can also find resources and recordings from previous sessions there.

Here are details about this week’s session on Thursday, August 10 at 1300 UTC.

Instructor

Ankur Sinha (“FranciscoD”) is a Free Software supporter and has been with the Fedora community for the better part of a decade now. Rahul Sundaram mentored him as font package maintainer in his early days with Fedora. Ankur has since branched out to acquaint himself with many other teams and SIGs.

He is a Fedora Workstation user, and prefers to use the terminal as much as possible. Currently, he is working on his PhD in computational neuroscience in the UK. When he does have time to spare, he focuses on the Fedora Join SIG and on maintaining his packages.

He can be reached via his Fedora project e-mail or on one of the many Fedora IRC channels. Feel free to ping him.

Topic: Command Line 101

Over the years, the command line has developed a reputation for being rather hardcore, suited only for advanced Linux users. This is entirely false. While GUI tools have their benefits, they are not well suited for all types of tasks. The command line provides a set of powerful tools and operators. It lets you carry out a myriad of tasks easily and efficiently. Often you don’t even need to move your fingers off the home row.

This hands-on session starts with a quick theoretical introduction. Then we’ll go over a set of useful command line tools. Finally we’ll look at some advanced topics such as I/O redirection. This will give you a taste of how easy and powerful the command line is.
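
As a small taste, this one-liner uses pipes to count how many users in /etc/passwd are assigned each login shell (an illustrative example, not part of the session materials):

cut -d: -f7 /etc/passwd | sort | uniq -c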

Joining the session

Since it is a hands-on session, it will be useful to have a Linux installation to follow it properly. No prior knowledge of the command line is required. This session will be held via IRC. The following information will help you join the session:

We hope you can attend and enjoy this experience from some of the people who work in the Fedora Project.


Photograph used in feature image is San Simeon School House by Anita Ritenour — CC-BY 2.0.

by Eduard Lucena at August 08, 2017 10:00 AM

August 07, 2017

Fedora Magazine

Fedora August 2017 elections beginning

UPDATE (2017-Aug-11): The FAmSCo and FESCo elections described below have been rescheduled. Read this Magazine article for details.

Twice a year, a new version of Fedora is released. The entire Fedora community is part of the process, from packaging new updates and creating wallpapers to hosting our websites and spreading the word at conferences and events. Fedora is a big community, and a few groups help lead in different areas of the community. These groups offer guidance and direction in technical and non-technical areas of Fedora. After every release, a round of elections for these groups begins. Nominated Fedora contributors from across the project campaign for different seats on the three leadership groups. Election week is this week!

What are Fedora’s leadership groups?

There are three main leadership groups in Fedora: the Fedora Council, the Fedora Ambassador Steering Committee (FAmSCo), and the Fedora Engineering Steering Committee (FESCo).

The Fedora Council is the top-level community leadership and governance body. The Council is a mix of representatives from different areas of the project, named roles appointed by Red Hat, and a variable number of seats connected to medium-term project goals. Decision-making is a consensus process, where the Council works together as a common team to find shared solutions and address concerns, with a focus on giving voice rather than on balance of power.

Additionally, the Fedora Ambassador Steering Committee, or FAmSCo, provides guidance and organization to the Fedora Ambassadors, the representatives and advocates of Fedora. FAmSCo works to enable regional leaders to grow their communities and help guide regions to be consistent and organized. All the seats on FAmSCo are elected Ambassadors from the community.

Lastly, the Fedora Engineering Steering Committee, or FESCo, provides technical leadership and guidance in Fedora. Furthermore, FESCo handles the process of accepting new features, accepting new packaging sponsors, Special Interest Groups (SIGs) and SIG oversight, the packaging process, and the handling and enforcement of maintainer issues and other technical matters related to the distribution and its construction. All of the FESCo seats are also elected by the community.

Voting opens Tuesday, August 8th

Today is the last day for the “campaign” part of the election. Candidates for all elections are provided with a list of questions from the community to answer on the Community Blog. Voting officially opens on Tuesday, August 8th, 2017, and closes on Monday, August 14th, 2017 at 23:59 UTC. Voting takes place on the Voting application website.

To take part in the elections, you must meet these requirements:

  • Council: Created a Fedora account, signed the CLA
  • FAmSCo: Created a Fedora account, belong to one or more community group(s)
  • FESCo: Created a Fedora account, belong to one or more community group(s)

As part of the Elections coverage on the Community Blog, most candidates published their interviews and platforms there. Are you getting ready to vote and looking for this information? You can find the full list of candidates and links to their interviews below.

Candidate Interviews

Fedora Council

One seat is open in the Fedora Council.

  • Dennis Gilmore (dgilmore / ausil) [no interview published / wiki]
  • Justin W. Flory (jwf / jflory7) [interview / wiki]
  • Langdon White (langdon) [no interview published / wiki]
  • Nick Bebout (nb) [no interview published / wiki]
  • Till Maas (tyll / till) [no interview published / wiki]

Fedora Ambassador Steering Committee (FAmSCo)

Three seats are open in FAmSCo.

  • Andrew Ward (award3535) [interview / wiki]
  • Alex Oviedo Solis (alexove) [interview / wiki]
  • Ben Williams (Southern_Gentlem / jbwillia) [no interview published / wiki]
  • Daniel Lara (danniel) [no interview published / wiki]
  • Itamar Reis Peixoto (itamarjp) [no interview published / wiki]
  • Eduard Lucena (x3mboy) [interview / wiki]
  • Eduardo Echeverria (echevemaster) [interview / wiki]
  • Nick Bebout (nb) [no interview published / wiki]
  • Sirko Kemter (gnokii) [no interview published / wiki]
  • Sumantro Mukherjee (sumantrom) [interview / wiki]

Fedora Engineering Steering Committee (FESCo)

Four seats are open in FESCo.

Vote!

Remember, the voting period starts tomorrow and ends next Monday, so make sure you get your votes in before the end of the Election. You can vote on the Voting application.

by Justin W. Flory at August 07, 2017 08:00 AM

August 04, 2017

Fedora Magazine

Add speech to your Fedora system

By default, Fedora Workstation ships a small package called espeak. It adds a speech synthesizer — that is, text-to-speech software.

In today’s world, talking devices are nothing impressive, since they’re so common. You can find speech synthesizers even in your smartphone, a product like Amazon Alexa, or in the announcements at the train station. In addition, synthesized voices are now more or less similar to human speech. We live in a 1980s science fiction movie!

The voice produced by espeak may sound a bit primitive compared to the aforementioned tools. But at the end of the day espeak produces good quality speech. And whether you find it useful or not, at least it can provide some amusement.

Running espeak

In espeak you can set various parameters using command line options. Examples include:

  • amplitude (-a)
  • pitch adjustment (-p)
  • speed of sentences (-s)
  • gap between words (-g)

Each of these options produces various effects and may help you achieve a cleaner voice.
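
For instance, you can combine several of these options in a single command to hear how they interact (the values here are just a starting point to experiment with):

espeak -a 150 -p 50 -s 140 -g 5 "Hello from Fedora"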

You can also select different voice variants with command line options. For example, try -ven+m3 for a different English male voice, and -ven+f1 for a female one. You can also use different languages. For a list, run this command:

espeak --voices

Note that many languages other than English are experimental attempts.

To create a WAV file instead of actually speaking something, use the -w option:

espeak -w out.wav "Audio file test"

The espeak utility also reads the content of a file for you.

espeak -f plaintextfile

Or you can pass text to speak from the standard input. In this way, as a simple example, you can build a talking box that alerts you to an event using a voice. Has your backup completed? Add this command to the end of your backup script:

echo "Backup completed" | espeak -s 160 -a 100 -g 4

Suppose an error shows up in a log file:

tail -1F /your/log/file | grep --line-buffered 'ERROR' | espeak

Or perhaps you want a speaking clock telling you every minute what time it is:

while true; do date +%S | grep '00' && date +%H:%M | espeak; sleep 1; done

As you can guess, use cases are limited only by your imagination. Enjoy your new talking Fedora system!

by Alessio Ciregia at August 04, 2017 12:03 PM

August 01, 2017

Fedora Magazine

Fedora Classroom Session 2

The Fedora Classroom sessions continue this week. You can find the general schedule for sessions on the wiki. You can also find resources and recordings from previous sessions there.

Here are details about this week’s session.

Instructor

Eduard Lucena is an IT Engineer and an Ambassador from the LATAM region. He started working with the community by publishing a simple article in the Magazine. Right now he actively works in the Marketing group and aims to be a FAmSCo member for the Fedora 26 release. He works in the telecommunication industry and uses the Fedora Cinnamon Spin as his main desktop, both at work and home. He isn’t a mentor, but tries to on-board people into the project by teaching them how to join the community in any area. His motto is: “Not everything is about the code.”

Topic: Starting in the Fedora community with the Fedora Magazine

This session is a short guide on how to start working with the Fedora community by writing articles for the Fedora Magazine. When you finish this session, you’ll know the main SIGs and WGs and be able to find your way around the community.

Joining the session

This session will be held via Jitsi. The following information will help you join the session:

We hope you can attend and enjoy this experience from some of the people who work in the Fedora Project.


Photograph used in feature image is San Simeon School House by Anita Ritenour — CC-BY 2.0.

by Eduard Lucena at August 01, 2017 08:00 AM

July 31, 2017

Fedora Magazine

Fedora 24 End of Life

With the recent release of Fedora 26, Fedora 24 officially enters End Of Life (EOL) status on August 8th, 2017. After August 8th, packages in the Fedora 24 repositories will no longer receive security, bugfix, or enhancement updates. Furthermore, no new packages will be added to the Fedora 24 collection.

Upgrading to Fedora 25 or Fedora 26 before August 8th, 2017 is highly recommended for all users still running Fedora 24.
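
One supported path is the dnf system-upgrade plugin. Here is a minimal sketch for moving to Fedora 26 (back up your data first, and consult the official upgrade documentation for details):

sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=26
sudo dnf system-upgrade reboot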

Looking back at Fedora 24

Fedora 24 was released in June 2016. Since then, the Fedora community has published over 10,500 updates to the Fedora 24 repositories. Fedora 24 shipped with version 4.5 of the Linux kernel, and Fedora Workstation featured version 3.20 of GNOME.

Fedora 24 Workstation screenshot

About the Fedora Release Cycle

The Fedora Project provides updates for a particular release until a month after the second subsequent version of Fedora is released. For example, updates for Fedora 25 continue until one month after the release of Fedora 27. Fedora 26 continues to be supported up until one month after the release of Fedora 28.

The Fedora Project wiki contains more detailed information about the entire Fedora Release Life Cycle. The lifecycle includes milestones from development to release, and the post-release support period.

by Matthew Miller at July 31, 2017 02:36 PM

July 28, 2017

Fedora Magazine

Enhancing photos with GNOME Photos

GNOME Photos is an application that lets you easily organize photos and screenshots. GNOME Photos doesn’t enforce a folder hierarchy. Instead, it relies on Tracker to find and index photos inside well-known folders, such as Pictures in your home folder (~/Pictures).

Photos has steadily grown in its ability to edit pictures. For instance, GNOME Photos 3.24 recently added two new color adjustment options: exposure and blacks. Of course, the subject of digital photo editing is a deep one. Without going into too much detail, this article demonstrates the basic photo editing options available in GNOME Photos.

First, install Photos from the Software tool, or by using dnf along with sudo:

sudo dnf install gnome-photos

Run the app from the Overview by searching for “Photos.” The first time you run Photos, it populates its Photos tab with all image files found in ~/Pictures:

Adjusting Colors

Double-click on the photo you wish to edit. Click the pencil icon to open the Edit panel:

Click Colors, and a series of sliders expands.

These sliders control various enhancement effects to the image. This photo ended up under-exposed; the sensor probably detected too much light from the sky, leaving the beach without enough exposure time. Adjust the Exposure slider to the right and the image will update with an enhanced exposure.

Slide up the Saturation to deepen the sea and sky blues.

Applying a filter

Photos also ships with some canned filters. These filters apply preset adjustments to the photo to give it a particular style. For example, the Calistoga filter gives the beach a vintage/retro look:

Be sure to click Done to save the changes.

Non-destructive edits

Any time you edit a photo in Photos, it preserves the original. You can return to the original if you don’t like how your changes turn out. Open the Properties menu item and click Discard all Edits to revert the file to its original copy.

Set a background

Photos can also set pictures as the desktop background. First, crop the photo to your desired aspect ratio — 16×9 in this example.

Next, click Done. Then select the newly edited picture, and from the menu, select Set as Background.

Photos can do more than edit. It also integrates with GNOME Online Accounts, and can be set up to share photos to various online photo services. Photos also lets you organize your photos into albums. It even detects screenshots and automatically sorts them into a Screenshots album for you!

by Link Dupont at July 28, 2017 08:00 AM

July 27, 2017

Fedora Magazine

Fedora Classroom Sessions are here!

The Fedora Join SIG is proud to announce Classroom sessions. The Fedora Classroom is a project to teach interested users how to better use, understand and manage their Fedora system, and to show how the community works. The idea is to reach interested people and, if they desire, bring them closer to the Fedora community.

Almost all classes will be held on IRC in the #fedora-classroom channel on Freenode (irc.freenode.net). If you’re not familiar with IRC, check out the Beginner’s guide to IRC. We’ll also use BlueJeans, a video conferencing platform that works from browsers, mobile devices and a desktop application. If you have trouble connecting to BlueJeans, please refer to the support page.

The schedule

The following Classroom sessions are currently scheduled (subject to change):

  • 2017-07-28, 13:00 UTC – 14:30 UTC: FOSS 101 – David Kaspar
  • 2017-08-04, 15:00 UTC – 16:00 UTC: Fedora Magazine 101 – Eduard Lucena
  • 2017-08-07 – 2017-08-11, time TBD: Command line 101 – Ankur Sinha “FranciscoD”
  • 2017-08-14 – 2017-08-18, time TBD: VIM 101 – Eduard Lucena/Ankur Sinha “FranciscoD”
  • 2017-08-21 – 2017-08-25, time TBD: Emacs 101 – Sachin Patil
  • 2017-08-28 – 2017-09-01, time TBD: Fedora QA 101 – Sumantro Mukherjee/Amita Sharma
  • 2017-09-04 – 2017-09-08, time TBD: Git 101 – Ankur Sinha “FranciscoD”
  • 2017-09-11 – 2017-09-15, time TBD: Fedora packaging 101 – Ankur Sinha “FranciscoD”

This week’s session

Here are details about this Friday’s upcoming session.

Instructor

David Kašpar (a.k.a. Dee’Kej) started working for Red Hat as an intern in Quality Engineering in 2012. Nowadays, he’s a package maintainer for both Fedora and Red Hat Enterprise Linux. In addition, he’s also a strong believer in free/libre and open source software principles, advocating for their usage even outside the IT industry. He regularly introduces students to meritocracy, the open source world, and the so-called Open Source Way. But he has other big passions as well — music, gaming and capoeira (Brazilian martial arts).

Topic: FOSS 101

A fly-through of the history of Free/Libre & Open Source Software (FOSS) to let you know how it all started, how it went, what we have achieved so far, and what we can expect in the future.

Joining the session

This session will be held via BlueJeans. The following information will help you join the session:

We hope you can attend and enjoy this experience from some of the people who work in the Fedora Project.


Photograph used in feature image is San Simeon School House by Anita Ritenour — CC-BY 2.0.

by Eduard Lucena at July 27, 2017 03:44 PM

July 26, 2017

Fedora Magazine

How to use the same SSH key pair in all AWS regions

This article shows how to use the AWS Command Line Interface (AWS CLI) to configure a single SSH key pair on multiple AWS Regions. By doing this you can access EC2 instances from different regions using the same SSH key pair.

Installing and configuring AWS CLI

Start by installing and configuring the AWS command line interface:

sudo dnf install awscli
aws configure

Verify the AWS CLI installed correctly:

aws --version
aws-cli/1.11.109 Python/3.6.1 Linux/4.11.10-300.fc26.x86_64 botocore/1.5.72

Configuring the SSH key pair

If you don’t have an SSH key pair or want to follow this article using a new one:

openssl genrsa -out ~/.ssh/aws.pem 2048
ssh-keygen -y -f ~/.ssh/aws.pem > ~/.ssh/aws.pub

If you already have an SSH private key created using the AWS Console, extract the public key from it:

ssh-keygen -y -f ~/.ssh/aws.pem > ~/.ssh/aws.pub

Importing the SSH key pair

Now that you have the public key, declare the variable AWS_REGION containing a list of the regions to which you want to copy your SSH key. To check the full list of available AWS regions, use this link.

AWS_REGION="us-east-1 us-east-2 us-west-1 us-west-2 ap-south-1 eu-central-1 eu-west-1 eu-west-2"

If you don’t want to specify each region manually, you can use the ec2 describe-regions command to get a list of all available regions:

AWS_REGION=$(aws ec2 describe-regions --output text | awk '{print $3}' | xargs)

Next, import the SSH public key to these regions, substituting your key’s name for <MyKey>:

for each in ${AWS_REGION} ; do aws ec2 import-key-pair --key-name <MyKey> --public-key-material file://~/.ssh/aws.pub --region $each ; done

Also, if you want to display which SSH key is available in a region:

aws ec2 describe-key-pairs --region REGION
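
To check every region in one pass, you can reuse the region list declared earlier (a sketch, assuming AWS_REGION is still set):

for each in ${AWS_REGION} ; do echo "${each}:" ; aws ec2 describe-key-pairs --region $each ; done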

To delete an SSH key from a region:

aws ec2 delete-key-pair --key-name <MyKey> --region REGION

Congratulations, now you can use the same SSH key to access all your instances in the regions where you copied it. Enjoy!

by Diego Roberto dos Santos at July 26, 2017 08:00 AM

July 25, 2017

Fedora Magazine

Announcing Boltron: The Modular Server Preview

The Modularity and Server Working Groups are very excited to announce the availability of the Boltron Preview Release. Boltron is a bit of an anomaly in the Fedora world — somewhere between a Spin and a preview for the future of Fedora Server Edition. You can find it, warts (known issues) and all, by following the directions below to grab a copy and try it out.

Fedora’s Modularity Working Group (and others) have been working for a while on a Fedora Objective. The Objective is generically called “Modularity,” and its crux is to allow users to safely access the right versions of what they want. However, there are two major aspects of “accessing the right versions.”

The first aspect deals with the problem of installing multiple versions of something in the same user space. In other words, the user may want httpd-2.4 and httpd-2.6 installed and running at once. There are countless solutions to this problem, with different tradeoffs and primary goals. For example:

  • Python natively allows this
  • Software Collections munge binaries into their own namespace on disk
  • Containers namespace most aspects of the running binaries away from the default user space.

Early on, the Modularity WG decided not to focus on solving this problem yet again. Rather, they promoted and encouraged OCI containers, System Containers, and Flatpaks to address the different use cases in this space. Watch for another announcement about using System Containers with Boltron in a few weeks.

There are other solutions, but in the interest of time, the Working Group has focused on the other aspect, availability of multiple versions. At first glance, this may seem to be a simple problem. That is, until you review the Fedora infrastructure and see the tight coupling of our packaging and the concept of the “Fedora Release” (e.g. F25, F26, etc) with everything Fedora builds and ships.

The Working Group also took on the requirement to impact the Fedora Infrastructure, user base, and packager community as little as possible. The group also wanted to increase quality and reliability of the Fedora distribution, and drastically increase the automation, and therefore speed, of delivery.

As a result, the group didn’t treat this as a greenfield experiment that would take years to harden and trust. Instead, they kept the existing toolset, warts and wrinkles included, and implemented tools and procedures that slightly adjusted the existing systems to provide something new. Largely, the resultant package set can be thought of as virtualized, separate repositories. In other words, the client tooling (dnf) treats the traditional flat repo as if it was a set of repos that are only enabled when you want that version of the component.

Now Fedora has 25 modules for you to play with, easily shown with dnf module list or at the bottom of dnf list. As the Arbitrary Branching change to dist-git didn’t land in time for Fedora 26, the stream for most of the modules is the typical branch found in dist-git, namely f26. Over time, the modules are expected to actually develop their own streams that most likely follow their upstream communities. There is one example at present, where NodeJS version 8 is being made available in the nodejs-8 stream.

The Bits

“Blah blah, where are my bits,” you ask? The recommended process starts by running the system as a container, found in the Fedora Registry:

docker run --rm -it registry.fedoraproject.org/f26-modular/boltron
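
Once inside the container, you can list the available modules mentioned earlier to confirm the module tooling works:

dnf module list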

Feedback

The Modularity Working Group is interested in your feedback. The group developed a Getting Started page and a general feedback form. However, we could really use your specific feedback about the interactions with the tools, and would love it if you could try our walk-through.

You can find more general Modularity documentation, including how to build a module, at our docs site. The proposed Module Packaging Guidelines also appear in the Pagure repo, to ease collaboration before they are promoted to the Wiki after approval. We are recommending users try the container so that we have the opportunity to update it to deal with warts and feedback over the course of Fedora 26.

by Langdon White at July 25, 2017 08:00 AM

July 24, 2017

Fedora Magazine

Easy backups with Déjà Dup

Welcome to part 3 in the series on taking smart backups with duplicity. This article will show how to use Déjà Dup, a GTK+ program to quickly back up your personal files.

Déjà Dup is a graphical frontend to Duplicity. Déjà Dup handles the GPG encryption, scheduling and file inclusion for you, presenting a clean and simple backup tool. From the project’s mission:

Déjà Dup aims squarely at the casual user. It is not designed for system administrators, but rather the less technically savvy.

It is also not a goal to support every desktop environment under the sun. A few popular environments should be supported, subject to sufficient developer-power to offer tight integration for each.

Déjà Dup integrates very well with GNOME, making it an excellent choice for a quick backup solution for Fedora Workstation.

Installing Déjà Dup

Déjà Dup can be found in GNOME Software’s Utilities category.


Alternatively, Déjà Dup can be installed with dnf:

sudo dnf install deja-dup

Once installed, launch Déjà Dup from the Overview.

Déjà Dup presents 5 sections to configure your backup.

  • Overview
  • Folders to save
  • Folders to ignore
  • Storage location
  • Scheduling

Folders to save

Similar to selecting directories for inclusion using duplicity’s --include option, Déjà Dup stores directories to include in a list. The default includes your home directory. Add any additional folders you wish to back up to this list.

Perhaps your entire home directory is too much to back up. Or parts of your home directory are backed up using version control. In that case, remove “Home” from the list and add just the folders you want to back up. For example, ~/Documents and ~/Projects.

Folders to ignore

These folders are going to be excluded, similar to the --exclude option. Starting with the defaults, add any other folders you wish to exclude. One such directory might be ~/.cache. Consider carefully whether to exclude it. GNOME Boxes stores VM disks inside ~/.cache. If those virtual disks contain data that needs to be backed up, you might want to include ~/.cache after all.

Launch Files, and from the Action menu, turn on the Show Hidden Files option. Now in Déjà Dup, click the “plus” button and find the .cache folder under Home. The list should look like this afterwards:

Storage location

The default storage location is a local folder. This doesn’t meet the “Remote” criteria, so be sure to select an external disk or Internet service such as Amazon S3 or Rackspace Cloud Files. Select Amazon S3 and paste in the access key ID you saved in part 1. The Folder field allows you to customize the bucket name (it defaults to $HOSTNAME).

Scheduling

This section gives you options around frequency and persistence. Switching Automatic backup on will immediately start the deja-dup-monitor program. This program runs inside a desktop session and determines when to run backups. It doesn’t use cron, systemd or any other system scheduler.

Back up

The first time the Back Up utility runs, it prompts you to configure the backup location. For Amazon S3, provide the secret access key you saved in part 1. If you check Remember secret access key, Déjà Dup saves the access key into the GNOME keyring.

Next, you must create a password to encrypt the backup. Unlike the specified GPG keys used by duply, Déjà Dup uses a symmetric cipher to encrypt / decrypt the backup volumes. Be sure to follow good password practices when picking the encryption password. If you check Remember password, your password is saved into the GNOME keyring.

Click Continue and Déjà Dup does the rest. Depending on the frequency you selected, this “Backing up…” window will appear whenever a backup is taking place.

Conclusion

Déjà Dup deviates from the backup profiles created in part 1 and part 2 in a couple specific areas. If you need to encrypt backups using a common GPG key, or need to create multiple backup profiles that run on different schedules, duplicity or duply might be a better choice. Nevertheless, Déjà Dup does an excellent job at making data back up easy and hassle-free.

by Link Dupont at July 24, 2017 08:00 AM

July 21, 2017

Fedora Magazine

Changing Fedora kernel configuration options

Fedora aims to provide a kernel with as many configuration options enabled as possible. Sometimes users may want to change those options for testing or for a feature Fedora doesn’t support. This is a brief guide to how kernel configurations are generated and how to best make changes for a custom kernel.

Finding the configuration files

Fedora generates kernel configurations using a hierarchy of files. Kernel options common to all architectures and configurations are listed in individual files under baseconfig. Subdirectories under baseconfig can override the settings as needed for architectures. As an example:

$ find baseconfig -name CONFIG_SPI
baseconfig/x86/CONFIG_SPI
baseconfig/CONFIG_SPI
baseconfig/arm/CONFIG_SPI
$ cat baseconfig/CONFIG_SPI
# CONFIG_SPI is not set
$ cat baseconfig/x86/CONFIG_SPI
CONFIG_SPI=y
$ cat baseconfig/arm/CONFIG_SPI
CONFIG_SPI=y

As shown above, CONFIG_SPI is initially turned off for all architectures, but x86 and arm enable it.

The directory debugconfig contains options that get enabled in kernel debug builds. The file config_generation lists the order in which directories are combined and overridden to make configs. After you change a setting in one of the individual files, you must run the script build_configs.sh to combine the individual files into configuration files. These exist in kernel-$flavor.config.

When rebuilding a custom kernel, the easiest way to change kernel configuration options is to put them in kernel-local. This file is merged automatically when building the kernel for all configuration options. You can set options to be disabled (# CONFIG_FOO is not set), enabled (CONFIG_FOO=y), or modular (CONFIG_FOO=m) in kernel-local.
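
For example, a kernel-local that builds one driver as a module while leaving SPI disabled might look like this (the options shown are only illustrations):

CONFIG_R8188EU=m
# CONFIG_SPI is not set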

Catching and fixing errors in your configuration files

The Fedora kernel build process does some basic checks on configuration files to help catch errors. By default, the Fedora kernel requires that all kernel options are explicitly set. One common error happens when enabling one kernel option exposes another option that needs to be set. This produces errors related to .newoptions, as in this example:

+ Arch=x86_64
+ grep -E '^CONFIG_'
+ make ARCH=x86_64 listnewconfig
+ '[' -s .newoptions ']'
+ cat .newoptions
CONFIG_R8188EU
+ exit 1
error: Bad exit status from /var/tmp/rpm-tmp.6BXufs (%prep)

RPM build errors:
 Bad exit status from /var/tmp/rpm-tmp.6BXufs (%prep)

To fix this error, explicitly set the options (CONFIG_R8188EU in this case) in kernel-local as well.

Another common mistake is setting an option incorrectly. The kernel Kconfig dependency checker silently changes configuration options that are not what it expects. This commonly happens when one option selects another option, or has a dependency that isn’t satisfied. Fedora attempts a basic sanity check that the options specified in tree match what the kernel configuration engine expects. This may produce errors related to mismatches:

+ ./check_configs.awk configs/kernel-4.13.0-i686-PAE.config temp-kernel-4.13.0-i686-PAE.config
+ '[' -s .mismatches ']'
+ echo 'Error: Mismatches found in configuration files'
Error: Mismatches found in configuration files
+ cat .mismatches
Found CONFIG_I2C_DESIGNWARE_CORE=y  after generation, had CONFIG_I2C_DESIGNWARE_CORE=m in Fedora tree
+ exit 1

In this example, the Fedora configuration specified CONFIG_I2C_DESIGNWARE_CORE=m, but the kernel configuration engine set it to CONFIG_I2C_DESIGNWARE_CORE=y. The kernel configuration engine is ultimately what gets used, so the solution is either to change the option to what the kernel expects (CONFIG_I2C_DESIGNWARE_CORE=y in this case) or to further investigate what is causing the unexpected configuration setting.

Once the kernel configuration options are set to your liking, you can follow standard kernel build procedures to build your custom kernel.

by Laura Abbott at July 21, 2017 08:00 AM

July 19, 2017

Fedora Magazine

Use a DoD smartcard to access CAC enabled websites

By now you’ve likely heard the benefits of two-factor authentication. Enabling multi-factor authentication can increase the security of accounts you use to access various social media websites like Twitter, Facebook, or even your Google Account. This post goes a bit further.

The U.S. Armed Services span millions of military and civilian employees. If you’re a member of these services, you’ve probably been issued a DoD CAC smartcard to access various websites. With the smartcard come compatibility issues, specific instructions tailored to each operating system, and a host of headaches. It’s difficult to find reliable instructions to access military websites from Linux operating systems. This article shows you how to set up your Fedora system to log in to DoD CAC enabled websites.

Installing and configuring OpenSC

First, install the opensc package:

sudo dnf install -y opensc

This package provides the necessary middleware to interface with the DoD Smartcard. It also includes tools to test and debug the functionality of your smartcard.
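
For example, with your card reader connected, one of those bundled tools can verify that OpenSC detects the reader:

opensc-tool --list-readers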

With that installed, next set it up under the Security Devices section of Firefox. Open the menu in Firefox, and navigate to Preferences -> Advanced.

In the Certificates tab, select Security Devices. From this page select the Load button on the right side of the page. Now set a module name (“OpenSC” will work fine) and use this screen to browse to the location of the shared library you need to use.

Browse to the /lib64/pkcs11/ directory, select opensc-pkcs11.so, and click Open. If you’re currently a “dual status” employee, you may wish to select the onepin-opensc-pkcs11.so shared library. If you have no idea what “dual status” means, carry on and simply select the former library.

Click OK to finish the process.

Now you can navigate to your chosen DoD CAC enabled site and login. You’ll be prompted to enter the PIN for your CAC, then select a certificate to use. If you’re logging into a normal DoD website, select the Authentication certificate. If you’re logging into a webmail service such as https://web.mail.mil, select the Digital Signing certificate. NOTE: “Dual status” personnel should use the Authentication certificate.

by Khris Byrd at July 19, 2017 04:41 PM


July 17, 2017

Fedora Magazine

Enhancing smart backups with Duply

Welcome to Part 2 in a series on taking smart backups with duplicity. This article builds on the basics of duplicity with a tool called duply.

Duply is a frontend for duplicity that integrates smoothly with tools for recurring jobs, like cron or systemd. Its headline features are:

  • keeps recurring settings in profiles, one per backup job
  • automates import/export of keys between profile and keyring
  • enables batch operations, e.g. backup_verify_purge (see the example below)
  • runs pre/post scripts
  • checks preconditions for flawless duplicity operation

The general form for running duply is:

duply PROFILE COMMAND [OPTIONS]
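For example, the batch feature mentioned above chains several commands into a single run. A sketch, using the profile created below (note that purge only lists outdated backup sets unless you also pass --force):

duply documents backup_verify_purge --force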

Installation

duply is available in the Fedora repositories. To install it, use the sudo command with dnf:

sudo dnf install duply

Create a profile

duply stores configuration settings for a backup job in a profile. To create a profile, use the create command.

$ duply documents create

Congratulations. You just created the profile 'documents'.
The initial config file has been created as 
'/home/link/.duply/documents/conf'.
You should now adjust this config file to your needs.

IMPORTANT:
  Copy the _whole_ profile folder after the first backup to a safe place.
  It contains everything needed to restore your backups. You will need 
  it if you have to restore the backup from another system (e.g. after a 
  system crash). Keep access to these files restricted as they contain 
  _all_ informations (gpg data, ftp data) to access and modify your backups.

  Repeat this step after _all_ configuration changes. Some configuration 
  options are crucial for restoration.

The newly created profile includes two files: conf and exclude. The main file, conf, contains comments for variables necessary to run duply. Read over the comments for any settings unique to your backup environment. The important ones are SOURCE, TARGET, GPG_KEY and GPG_PW.

To convert the single invocation of duplicity from the first article, split it into four sections:

duplicity --name duply_documents --encrypt-sign-key **************** --include $HOME/Documents --exclude '**'  $HOME   s3+http://**********-backup-docs
          [                         OPTIONS                        ] [                 EXCLUDES             ] [SOURCE] [             TARGET           ]

Comment out the lines starting with TARGET, SOURCE, GPG_KEY and GPG_PW by adding # in front of each line. Add the following lines to conf:

SOURCE=/home/link
TARGET=s3+http://**********-backup-docs
GPG_KEY=****************
GPG_PW=************
AWS_ACCESS_KEY_ID=********************
AWS_SECRET_ACCESS_KEY=****************************************

The second file, exclude, stores file paths to include/exclude from the backup. In this case, add the following to $HOME/.duply/documents/exclude.

+ /home/link/Documents
- **

Running duply

Run a backup with the backup command. An example run appears below.

$ duply documents backup
Start duply v2.0.2, time is 2017-07-04 17:14:03.
Using profile '/home/link/.duply/documents'.
Using installed duplicity version 0.7.13.1, python 2.7.13, gpg 1.4.21 (Home: ~/.gnupg), awk 'GNU Awk 4.1.4, API: 1.1 (GNU MPFR 3.1.5, GNU MP 6.1.2)', grep 'grep (GNU grep) 3.0', bash '4.4.12(1)-release (x86_64-redhat-linux-gnu)'.
Autoset found secret key of first GPG_KEY entry 'XXXXXXXXXXXXXXXX' for signing.
Checking TEMP_DIR '/tmp' is a folder and writable (OK)
Test - Encrypt to 'XXXXXXXXXXXXXXXX' & Sign with 'XXXXXXXXXXXXXXXX' (OK)
Test - Decrypt (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.15349.1499213643_*'(OK)
Backup PUB key 'XXXXXXXXXXXXXXXX' to profile. (OK)
Write file 'gpgkey.XXXXXXXXXXXXXXXX.pub.asc' (OK)
Backup SEC key 'XXXXXXXXXXXXXXXX' to profile. (OK)
Write file 'gpgkey.XXXXXXXXXXXXXXXX.sec.asc' (OK)

INFO:

duply exported new keys to your profile.
You should backup your changed profile folder now and store it in a safe place.


--- Start running command PRE at 17:14:04.115 ---
Skipping n/a script '/home/link/.duply/documents/pre'.
--- Finished state OK at 17:14:04.129 - Runtime 00:00:00.014 ---

--- Start running command BKP at 17:14:04.146 ---
Reading globbing filelist /home/link/.duply/documents/exclude
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Tue Jul  4 14:16:00 2017
Reuse configured PASSPHRASE as SIGN_PASSPHRASE
--------------[ Backup Statistics ]--------------
StartTime 1499213646.13 (Tue Jul  4 17:14:06 2017)
EndTime 1499213646.40 (Tue Jul  4 17:14:06 2017)
ElapsedTime 0.27 (0.27 seconds)
SourceFiles 1205
SourceFileSize 817997271 (780 MB)
NewFiles 1
NewFileSize 4096 (4.00 KB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 1
RawDeltaSize 0 (0 bytes)
TotalDestinationSizeChange 787 (787 bytes)
Errors 0
-------------------------------------------------

--- Finished state OK at 17:14:07.789 - Runtime 00:00:03.643 ---

--- Start running command POST at 17:14:07.806 ---
Skipping n/a script '/home/link/.duply/documents/post'.
--- Finished state OK at 17:14:07.823 - Runtime 00:00:00.016 ---

Remember, duply is a wrapper around duplicity. Because you specified --name during the backup creation in part 1, duply picked up the local cache for the documents profile. Now duply runs an incremental backup on top of the full one created last week.

Restoring a file

duply offers two commands for restoration. Restore the entire backup with the restore command.

$ duply documents restore ~/Restore
Start duply v2.0.2, time is 2017-07-06 22:06:23.
Using profile '/home/link/.duply/documents'.
Using installed duplicity version 0.7.13.1, python 2.7.13, gpg 1.4.21 (Home: ~/.gnupg), awk 'GNU Awk 4.1.4, API: 1.1 (GNU MPFR 3.1.5, GNU MP 6.1.2)', grep 'grep (GNU grep) 3.0', bash '4.4.12(1)-release (x86_64-redhat-linux-gnu)'.
Autoset found secret key of first GPG_KEY entry 'XXXXXXXXXXXXXXXX' for signing.
Checking TEMP_DIR '/tmp' is a folder and writable (OK)
Test - Encrypt to 'XXXXXXXXXXXXXXXX' & Sign with 'XXXXXXXXXXXXXXXX' (OK)
Test - Decrypt (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.12704.1499403983_*'(OK)

--- Start running command RESTORE at 22:06:24.368 ---
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 6 21:46:01 2017
--- Finished state OK at 22:06:44.216 - Runtime 00:00:19.848 ---

Restore a single file or directory with the fetch command.

$ duply documents fetch Documents/post_install ~/Restore
Start duply v2.0.2, time is 2017-07-06 22:11:11.
Using profile '/home/link/.duply/documents'.
Using installed duplicity version 0.7.13.1, python 2.7.13, gpg 1.4.21 (Home: ~/.gnupg), awk 'GNU Awk 4.1.4, API: 1.1 (GNU MPFR 3.1.5, GNU MP 6.1.2)', grep 'grep (GNU grep) 3.0', bash '4.4.12(1)-release (x86_64-redhat-linux-gnu)'.
Autoset found secret key of first GPG_KEY entry 'XXXXXXXXXXXXXXXX' for signing.
Checking TEMP_DIR '/tmp' is a folder and writable (OK)
Test - Encrypt to 'XXXXXXXXXXXXXXXX' & Sign with 'XXXXXXXXXXXXXXXX' (OK)
Test - Decrypt (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.14438.1499404312_*'(OK)

--- Start running command FETCH at 22:11:52.517 ---
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 6 21:46:01 2017
--- Finished state OK at 22:12:44.447 - Runtime 00:00:51.929 ---

duply includes quite a few commands. Read the documentation for a full list of commands.

Other features

Timer runs become easier with duply than with raw duplicity. The systemd user session lets you create automated backups for your data. To do this, modify ~/.config/systemd/user/backup.service, replacing ExecStart=/path/to/backup.sh with a direct duply invocation. The wrapper script backup.sh is no longer required.
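A minimal sketch of the modified unit, assuming duply is installed at /usr/bin/duply (systemd expects an absolute path here, so adjust if your installation differs):

$ cat $HOME/.config/systemd/user/backup.service
[Service]
Type=oneshot
ExecStart=/usr/bin/duply documents backup

[Unit]
Description=Run duply backup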

duply also makes a great tool for backing up a server. You can create system wide profiles inside /etc/duply to backup any part of a server. Now with a combination of system and user profiles, you’ll spend less time worrying about your data being backed up.

by Link Dupont at July 17, 2017 08:00 AM


July 14, 2017

Fedora Magazine

What’s new in the Anaconda Installer for Fedora 26?

Fedora 26 is available now, providing a wide range of improvements across the entire operating system. Anaconda — the Fedora installer — has many new features and improvements implemented for Fedora 26. The most visible addition is the introduction of Blivet GUI, providing power users an alternate way to configure partitioning. Additionally, there are improvements to automated installation with kickstart, a range of networking improvements, better status reporting when your install is under way, and much more.

Enhanced storage configuration with Blivet GUI

The main highlight of Anaconda in Fedora 26 is the integration of the Blivet GUI storage configuration tool into the installation environment. Previously, there were two options for storage configuration: automatic partitioning and manual partitioning. Automatic partitioning is mostly useful for really simple configurations, like installing Fedora on an empty hard drive or alongside another existing operating system. The existing manual partitioning tool provides more control over partition layout and sizes, enabling more complicated setups.

The previously available manual partitioning tool takes a unique approach. Instead of creating all the storage components manually, the user just specifies future mountpoints and their properties. For example, you simply create two “mountpoints” for /home and / (root), specify properties like encryption or RAID, and Anaconda properly configures all the necessary components below them. This top-down model is really powerful and easy to use, but might be too simple for some complicated storage setups.

This is where Blivet GUI can help. It is a storage configuration tool that works in the standard way: if you want LVM storage on top of RAID, you create it manually from the building blocks, from the bottom up. With a good knowledge of custom partitioning layouts, complicated custom storage setups are easily created with Blivet GUI.

Using Blivet GUI in Anaconda

Blivet GUI has been available in the Fedora repositories as a standalone desktop application since Fedora 21, and now it also comes to Anaconda as a third option for storage configuration. Simply choose Advanced Custom (Blivet-GUI) from the Installation Destination window in Anaconda.

Installation Destination window in Anaconda

Blivet GUI is fully integrated into the Anaconda installation workflow. Only the disks selected in the Installation Destination window appear in Blivet GUI. Changes remain unwritten to the disks until you leave the window and choose Begin Installation. Additionally, you can always go back and use one of the other partitioning methods. However, Blivet GUI discards its changes if you switch to a different partitioning method.

Storage configuration using Blivet GUI in Anaconda

Adding a new device using Blivet GUI in Anaconda

Automated install (kickstart) improvements

Kickstart is the configuration file format for automation of the installation process. A Kickstart file can configure all options available in the graphical and text interfaces and much more. View the Kickstart documentation for more information about Kickstart and how to use it.

Support for --nohome, --noswap and --noboot options in auto partitioning

When you don’t want to specify your partitions, you can let anaconda do that for you with the autopart command. It automatically creates a root partition, a swap partition, and a boot partition. On Fedora Workstation, the installer also creates a /home partition on large enough drives. To make auto partitioning more flexible, anaconda now supports the --nohome, --noswap and --noboot options, which disable creation of the given partition.
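For example, this kickstart line requests automatic LVM partitioning but skips the /home and swap partitions (a sketch; combine the flags as needed):

autopart --type=lvm --nohome --noswap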

Strict validation of the kickstart file with inst.ksstrict

It is not uncommon for sysadmins to have complicated kickstart files, and sometimes those files contain errors. At the beginning of the installation, anaconda checks the kickstart file and produces errors and warnings. Errors terminate the installation, while warnings are logged and the installation continues.

To ensure a kickstart file doesn’t produce any warnings, enable the new strict validation with the boot option inst.ksstrict. This treats warnings in kickstart in the same way as errors.
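For example, a boot command line that fetches a kickstart file over HTTP and enables strict validation might look like this (a sketch; the server URL is hypothetical):

inst.ks=http://example.com/ks.cfg inst.ksstrict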

Snapshot support

Sometimes it is helpful to save an old installation or have a backup of a freshly installed system for recovery. For these situations the snapshot kickstart command is now available. View the pykickstart documentation for full usage instructions for this new command. This feature is currently supported on LVM thin pools only. To request support for other partition types, please file an RFE bug in Bugzilla.
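For example, to snapshot the root logical volume right after the installation finishes, a kickstart line might look like the following (a sketch; fedora/root stands in for your actual volume group and logical volume names):

snapshot fedora/root --name=fresh-install --when=post-install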

Networking improvements

Networking is a critical part of Anaconda, as many installations are partially or fully network based. Anaconda also supports installation to network attached storage devices such as iSCSI, FCoE, and multipath devices. For this reason Anaconda needs to support complex networking setups, not just to start the installation but also to correctly set up networking for the installed system.

For this Fedora cycle we have mostly bug fixes, adaptation to the NetworkManager rebase, enhancements to the network kickstart test suite to discover issues caused by NetworkManager changes, and some general changes in the components we use. We also added support for various features, mostly driven by enterprise use cases:

  • Support for IPoIB (IP over InfiniBand) devices in the TUI.
  • Support for setting up a bridge device at an early stage of installation (e.g. to fetch a kickstart file).
  • A new inst.waitfornet boot option that waits for connectivity at a later stage of installation, for cases where the default wait for DHCP configuration is not sufficient due to a special network environment (DHCP server) setup.

Other improvements

Anaconda and Pykickstart documentation on Read the Docs

The Anaconda and Pykickstart documentation have a new home on Read the Docs.

The Pykickstart documentation now also contains a full, detailed kickstart command reference, both for Fedora and RHEL.

Progress reporting for all installation phases

Do you also hate it when Anaconda says “processing post installation setup tasks” for many minutes (or even tens of minutes!) without any indication of what’s actually going on and how much longer it might take?

The cause of the previous lack of status reporting was simple: during the final parts of the RPM installation transaction, RPM post and posttrans scriptlets run, and that can take a significant amount of time. Until recently, RPM and DNF provided no support for progress reporting from this installation phase.

This has now been rectified: RPM and DNF provide the necessary progress reporting, so Anaconda can finally report what’s actually happening during the full installation run. 🙂

Run Initial Setup TUI on all usable consoles

Initial Setup is a utility to configure a freshly installed system on the first start. Initial Setup provides both graphical and text-mode interfaces and is basically just a launcher for the configuration screens normally provided by Anaconda.

During a “normal” installation everything is configured in Anaconda and Initial Setup does not run. However, the situation is different for the various ARM boards supported by Fedora. Here the installation step is generally skipped and users boot from a Fedora image on an SD card. In this scenario Initial Setup is a critical component, enabling users to customize the pre-made system image as needed.

The Initial Setup text interface (TUI) is generally used on ARM systems. During the Fedora 25 time frame two nasty issues showed up:

  • some ARM boards have both serial and graphical consoles, with no easy way to detect which one the user is actually using
  • some ARM board consoles appear functional, but throw errors when Initial Setup tries to run the TUI on them

To solve these issues, the Initial Setup TUI now runs on all consoles that appear to be usable. This solves the first issue: the TUI runs on both the serial and graphical consoles. It also solves the second issue, as consoles that fail to work as expected are simply skipped.

Built-in help is now also available in the TUI

Previously, only the graphical installation mode featured help. Now help is also accessible in the TUI, from every screen that offers the ‘h to help’ option.

Help displayed in the TUI

New log-capture script

The new log-capture script is an addition from community contributor Pat Riehecky. This new script makes it easy to gather many installation relevant log files into a tarball, which is easily transferred outside of the installation environment for detailed analysis.

The envisioned use case is running the log-capture script in kickstart %onerror scriptlets.
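A sketch of such a snippet, assuming the script is available in the installation environment simply as log-capture (check the installer image for the exact name and path):

%onerror
log-capture
%end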


Structured installation tasks

Anaconda does a lot of things during the installation phase (configures storage, installs packages, creates users and groups, etc.). To make the installation phase easier to monitor and to debug any issues, the individual installation tasks are now distinct units (e.g. user creation, user group creation, root user configuration) that can be part of task groups (e.g. user and group configuration).

The end result: it is now easy to see in the logs how long each task took to execute, which task is currently running, and how many tasks still need to be executed until the installation is done.

User interaction config file

Anaconda now supports a user interaction config file, a special configuration file that records the screens and (optionally) the settings manipulated by the user.

The main idea behind the user interaction config file is that a user generally comes into contact with multiple separate applications (Anaconda, GNOME Initial Setup, Initial Setup, a hypothetical language selector on a live CD, etc.) during an installation run, and it makes sense to present each configuration option (say, language or timezone selection) only once rather than multiple times. This should reduce the number of screens a user needs to click through, making the installation faster.

Anaconda records visited screens, and hides screens marked as visited in an existing user interaction config file. Once other pre- and post-installation tools (such as GNOME Initial Setup) start picking up support, the benefit should be easy to spot: users will no longer be asked to configure the same setting twice. And we might not have to wait long, as a Fedora 27 change proposal for adding GNOME Initial Setup support already exists.

by Martin Kolman at July 14, 2017 08:00 AM


July 12, 2017

Fedora Magazine

Introducing the Python Classroom Lab

The Fedora Labs are a selection of curated bundles of purpose-driven software and content, maintained by members of the Fedora community. Some of the current Labs are the Design Suite, the Security Lab, and the Robotics Suite. The recent release of Fedora 26 includes the brand new Python Classroom Lab.

The new Python Classroom Lab is a ready-to-use operating system for teachers who want to use Fedora in their classrooms. The Lab comes pre-installed with a bunch of useful tools for teaching Python. The Python Classroom has three variants: a live image based on the GNOME desktop, a Vagrant image, and a Docker container.

Multiple Python interpreters

The Lab includes several Python interpreters by default, including CPython 3.6, CPython 2.7, and PyPy 3.3. Additionally, tox is available by default to help run Python code on the different Python implementations.
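As a quick illustration, a minimal tox.ini that runs a pytest-based test suite against all three bundled interpreters might look like this (a sketch; it assumes tox’s standard environment names and a pytest test suite in your project):

[tox]
envlist = py27,py36,pypy3

[testenv]
deps = pytest
commands = pytest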

Tools, Libraries, and Applications

The Python Classroom Lab also includes a range of tools, libraries, and applications that are useful when learning Python. The Scientific Python stack provides Python libraries for scientific computation and visualization. IPython, an enhanced Python shell, is also installed by default. Additionally, Jupyter Notebook is included, providing a web-based environment for interactive computing and visualizations. Other tools and applications include virtualenv, the Ninja IDE, and the Python Integrated Development and Learning Environment (IDLE).

by Ryan Lerch at July 12, 2017 09:12 AM


July 11, 2017

Fedora Magazine

Fedora 26 is here!

[This message comes from the desk of the Fedora Project Leader directly. Happy release day! — Ed.] 

Hi everyone! I’m incredibly proud to announce the immediate availability of Fedora 26. Read more below, or just jump straight to the downloads.

If you’re already using Fedora, you can upgrade from the command line or using GNOME Software — upgrade instructions here. We’ve put a lot of work into making upgrades easy and fast. In most cases, this will take half an hour or so, bringing you right back to a working system with no hassle.

What’s new in Fedora 26?

First, of course, we have thousands of improvements from the various upstream software we integrate, including new development tools like GCC 7, Golang 1.8, and Python 3.6. We’ve added a new partitioning tool to Anaconda (the Fedora installer) — the existing workflow is great for non-experts, but this option will be appreciated by enthusiasts and sysadmins who like to build up their storage scheme from basic building blocks. F26 also has many under-the-hood improvements, like better caching of user and group info and better handling of debug information. And the DNF package manager is at a new major version (2.5), bringing many new features. Really, there’s new stuff everywhere — read more in the release notes.

So many Fedora options…

Fedora Workstation is built on GNOME (now version 3.24). If you’re interested in other popular desktop environments like KDE, Xfce, Cinnamon, and more, check out Fedora Spins. Or, for versions of Fedora tailored to special use cases like Astronomy, Design, Security, or Robotics, see Fedora Labs. STEM teachers, take advantage of the new Python Classroom, which makes it a breeze to set up an instructional environment with Vagrant, Docker containers, a Live USB image, or traditional installation.

If you want a Fedora environment to build on in EC2, OpenStack, and other cloud environments, there’s the Fedora Cloud Base. Plus, we’ve got network installers, other architectures (like Power and aarch64), BitTorrent links, and more at Fedora Alternative Downloads. And, not to be forgotten: if you’re looking to put Fedora on a Raspberry Pi or other ARM device, get images from the Fedora ARM page.

Whew! Fedora makes a lot of stuff! I hope there’s something for everyone in all of that, but if you don’t find what you want, you can Join the Fedora Project and work with us to create it. Our mission is to build a platform which enables contributors and other developers to solve all kinds of user problems, on our foundations of Freedom, Friendship, Features, and First. If the problem you want to solve isn’t addressed, Fedora can help you fix that.

Coming soon

Meanwhile, we have many interesting things going on in Fedora behind the scenes. Stay tuned later this week for Fedora Boltron, a preview of a new way to put together Fedora Server from building blocks which move at different speeds. (What if my dev stack was a rolling release on a stable base? Or, could I get the benefits from base platform updates while keeping my web server and database at known versions?) We’re also working on a big continuous integration project focused on Fedora Atomic, automating testing so developers can work rapidly without breaking things for others.

Thanks to the whole Fedora community!

Altogether, I’m confident that this is the best Fedora release ever — yet again. That’s because of the dedication, hard work, and love from thousands of Fedora contributors every year. This is truly an amazing community project from an amazing group of people. This time around, thanks are particularly due to everyone from quality assurance and release engineering who worked over the weekend and holidays to get Fedora 26 to you today.

Oh, and one more thing… in the human world, even the best release ever can’t be perfect. There are always corner cases and late-breaking issues. Check out Common F26 Bugs if you run into something strange. If you find a problem, help us make things better. But mostly, enjoy this awesome new release.


— Matthew Miller, Fedora Project Leader

by Matthew Miller at July 11, 2017 02:00 PM

Upgrading Fedora 25 to Fedora 26

Fedora 26 was just officially released, and you’ll likely want to upgrade your system to the latest version. Fedora offers a command-line method for upgrading Fedora 25 to Fedora 26, and Fedora 25 Workstation also has a graphical upgrade method.

Upgrading Fedora 25 Workstation to Fedora 26

Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the GNOME Software app. Or you can choose Software from GNOME Shell.

Choose the Updates tab in GNOME Software and you should see a window like this:

If you don’t see anything on this screen, try using the reload tool at the top left. It may take some time after release for all systems to be able to see an upgrade available.

Choose Download to fetch the upgrade packages. You can continue working until you reach a stopping point, and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later.

Using the command line

If you’ve upgraded from past Fedora releases, you are likely familiar with the dnf upgrade plugin. This method is the recommended and supported way to upgrade from Fedora 25 to Fedora 26. Using this plugin will make your upgrade to Fedora 26 simple and easy.

1. Update software and back up your system

Before beginning the upgrade process, make sure you have the latest software for Fedora 25. To update your software, use GNOME Software or enter the following command in a terminal.

sudo dnf upgrade --refresh

Additionally, make sure you back up your system before proceeding. For help with taking a backup, see the backup series on the Fedora Magazine.

2. Install the DNF plugin

Next, open a terminal and type the following command to install the plugin:

sudo dnf install dnf-plugin-system-upgrade

3. Start the update with DNF

Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal:

sudo dnf system-upgrade download --releasever=26

This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the --allowerasing flag when typing the above command. This will allow DNF to remove packages that may be blocking your system upgrade.

Upgrading to Fedora 26: Starting upgrade


4. Reboot and upgrade

Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal:

sudo dnf system-upgrade reboot

Your system will restart after this. Many releases ago, the fedup tool would create a new option on the kernel selection / boot screen. With the dnf-plugin-system-upgrade package, your system reboots into the current kernel installed for Fedora 25; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process.

Now might be a good time for a coffee break! Once it finishes, your system will restart and you’ll be able to log in to your newly upgraded Fedora 26 system.

Upgrading Fedora: Upgrade in progress

Upgrading Fedora: Upgrade complete!

Resolving upgrade problems

On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the DNF system upgrade wiki page for more information on troubleshooting in the event of a problem.

If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories.

Using Fedora Atomic Host?

Please see the Atomic host upgrade instructions.

Further information

For more detailed instructions on using dnf for upgrading, including a breakdown of other flags, check out the DNF system upgrade wiki article. This page also has frequently asked questions you may have during an upgrade.

Happy upgrades!

by Justin W. Flory at July 11, 2017 01:59 PM

What’s New in Fedora 26 Workstation

Fedora 26 Workstation is the latest release of our free, leading-edge operating system. You can download it from the official website right now. There are several new and noteworthy changes in Fedora Workstation.

GNOME 3.24

Fedora Workstation features the newest version of the GNOME desktop. GNOME now has a Night Light feature that changes your display’s color temperature. It works based on the time of day and helps prevent sleeplessness and eye strain.

There are updates to the Settings panel for online accounts, printers, and users. The notifications area sports a cleaner, simpler layout, with integrated weather information.

For developers, Builder now features improved support for systems like Flatpak, CMake, Meson, and Rust. It also integrates Valgrind to help profile your project. There are numerous other improvements, which you can find in the GNOME 3.24 release notes.

Improved Qt app compatibility

The Qt port of the Adwaita theme contains many improvements and looks closer to its GTK counterpart than ever. Two more variants have been ported to Qt: dark and high contrast. If you switch to dark or high contrast Adwaita, your Qt apps will switch as well.

LibreOffice 5.3

The latest version of the popular office suite features many changes. It includes a preview of the experimental new NotebookBar UI. There’s also a new internal text layout engine to ensure consistent text layout on all platforms.

Fedora Media Writer

The new version of the Fedora Media Writer can create bootable SD cards with Fedora for ARM devices such as the Raspberry Pi. It also features better support for Windows 7 and improved screenshot handling. The utility also notifies you when a new release of Fedora is available.

Other notes

These are only some of the improvements in Fedora 26. Fedora also gives you access to thousands of software apps our community provides. Many have been updated since the previous release as well.

Fedora 26 is available now for download.

by Paul W. Frields at July 11, 2017 01:50 PM

July 10, 2017

Fedora Magazine

Taking smart backups with Duplicity

Backing up data is one of the most important tasks everyone should perform regularly. This series will demonstrate using three software tools to back up your important data.

When planning a backup strategy, consider the “Three Rs of backup”:

  • Redundant: Backups must be redundant. Backup media can fail. The backup storage site can be compromised (fire, theft, flood, etc.). It’s always a good idea to have more than one destination for backup data.
  • Regular: Backups only help if you run them often. Schedule and run them regularly to keep adding new data and prune off old data.
  • Remote: At least one backup should be kept off-site. In the case of one site being physically compromised (fire, theft, flood, etc.), the remote backup becomes a fail-safe.

duplicity is an advanced command line backup utility built on top of librsync and GnuPG. By producing GPG-encrypted backup volumes in tar format, it offers secure incremental archives (a huge space saver, especially when backing up to remote services like S3 or an FTP server).

To get started, install duplicity:

sudo dnf install duplicity

Choose a backend

duplicity supports a lot of backend services categorized into two groups: hosted storage providers and local media. Selecting a backend is mostly a personal preference, but select at least two (Redundant). This article uses an Amazon S3 bucket as an example backend service.

Set up GnuPG

duplicity encrypts volumes before uploading them to the specified backend using a GnuPG key. If you haven’t already created a GPG key, follow GPG key management, part 1 to create one. Look up the long key ID and keep it nearby:

gpg2 --list-keys --keyid-format long me@mydomain.com

Set up Amazon AWS

AWS recommends using individual accounts to isolate programmatic access to your account. Log into the AWS IAM Console. If you don’t have an AWS account, you’ll be prompted to create one.

Click on Users in the list of sections on the left. Click the blue Add user button. Choose a descriptive user name, and set the Access type to Programmatic access only. There is no need for a backup account to have console access.

Next, attach the AmazonS3FullAccess policy directly to the account. duplicity needs this policy to create the bucket automatically the first time it runs.


After the user is created, save the access key ID and secret access key. They are required by duplicity when connecting to S3.

Choose backup data

When choosing data to back up, a good rule of thumb is to back up data you’ve created that can’t be re-downloaded from the Internet. Good candidates that meet this criterion are ~/Documents and ~/Pictures. Source code and “dot files” are also excellent candidates if they aren’t under version control.

Create a full backup

The general form for running duplicity is:

duplicity [OPTIONS] SRC DEST

In order to back up ~/Documents, but preserve the Documents folder within the backup volume, run duplicity with $HOME as the source, specify the --include option to include only ~/Documents, and exclude everything else with --exclude '**'. The --include and --exclude options can be combined in various ways to create specific file matching patterns. Experiment with these options before creating the initial backup. The --dry-run option simulates running duplicity. This is a great way to preview what a particular duplicity invocation will do.

duplicity will automatically determine whether a full or incremental backup is needed. The first time you run a source/destination, duplicity creates a full backup. Be sure to first export the access key ID and secret access key as environment variables. The --name option enables forward compatibility with duply (coming in part 2). Specify the long form GPG key ID that should be used to sign and encrypt the backup volumes with --encrypt-sign-key.

$ export AWS_ACCESS_KEY_ID=********************
$ export AWS_SECRET_ACCESS_KEY=****************************************
$ duplicity --dry-run --name duply_documents --encrypt-sign-key **************** --include $HOME/Documents --exclude '**' $HOME s3+http://**********-backup-docs
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
GnuPG passphrase: 
GnuPG passphrase for signing key: 
No signatures found, switching to full backup.
--------------[ Backup Statistics ]--------------
StartTime 1499399355.05 (Thu Jul 6 20:49:15 2017)
EndTime 1499399355.09 (Thu Jul 6 20:49:15 2017)
ElapsedTime 0.05 (0.05 seconds)
SourceFiles 102
SourceFileSize 40845801 (39.0 MB)
NewFiles 59
NewFileSize 40845801 (39.0 MB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 59
RawDeltaSize 0 (0 bytes)
TotalDestinationSizeChange 0 (0 bytes)
Errors 0
-------------------------------------------------

When you’re ready, remove the --dry-run option and start the backup. Plan ahead for the initial backup. It can often be a large amount of data and can take hours to upload, depending on your Internet connection.

After the backup is complete, the AWS S3 Console lists the new full backup volume.


Create an incremental backup

Run the same command again to create an incremental backup.

$ export AWS_ACCESS_KEY_ID=********************
$ export AWS_SECRET_ACCESS_KEY=****************************************
$ duplicity --dry-run --name duply_documents --encrypt-sign-key **************** --include $HOME/Documents --exclude '**' $HOME s3+http://**********-backup-docs
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 6 20:50:20 2017
GnuPG passphrase: 
GnuPG passphrase for signing key: 
--------------[ Backup Statistics ]--------------
StartTime 1499399964.77 (Thu Jul 6 20:59:24 2017)
EndTime 1499399964.79 (Thu Jul 6 20:59:24 2017)
ElapsedTime 0.02 (0.02 seconds)
SourceFiles 60
SourceFileSize 40845801 (39.0 MB)
NewFiles 3
NewFileSize 8192 (8.00 KB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 3
RawDeltaSize 0 (0 bytes)
TotalDestinationSizeChange 845 (845 bytes)
Errors 0
-------------------------------------------------

Again, the AWS S3 Console lists the new incremental backup volumes.

Restore a file

Backups aren’t useful without the ability to restore from them. duplicity makes restoration straightforward by simply reversing the SRC and DEST in the general form: duplicity [OPTIONS] DEST SRC.

$ export AWS_ACCESS_KEY_ID=********************
$ export AWS_SECRET_ACCESS_KEY=****************************************
$ duplicity --name duply_documents s3+http://**********-backup-docs $HOME/Restore
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 6 21:46:01 2017
GnuPG passphrase:
$ du -sh Restore/
783M Restore/

This restores the entire backup volume. Specific files or directories are restored using the --file-to-restore option, specifying a path relative to the backup root. For example:

$ export AWS_ACCESS_KEY_ID=********************
$ export AWS_SECRET_ACCESS_KEY=****************************************
$ duplicity --name duply_documents --file-to-restore Documents/post_install s3+http://**********-backup-docs $HOME/Restore
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Tue Jul 4 14:16:00 2017
GnuPG passphrase: 
$ tree Restore/
Restore/
├── files
│ ├── 10-doxie-scanner.rules
│ ├── 99-superdrive.rules
│ └── simple-scan.dconf
└── post_install.sh

1 directory, 4 files

Automate with a timer

The example above is clearly a manual process. “Regular” from the Three R philosophy requires running this duplicity command repeatedly. Create a simple shell script that wraps the environment variables and the command invocation.

#!/bin/bash

export AWS_ACCESS_KEY_ID=********************
export AWS_SECRET_ACCESS_KEY=****************************************
export PASSPHRASE=************

duplicity --name duply_documents --encrypt-sign-key **************** --include $HOME/Documents --exclude '**' $HOME s3+http://**********-backup-docs

Notice the addition of the PASSPHRASE variable. This allows duplicity to run without prompting for your GPG passphrase. Save this file somewhere in your home directory; it doesn’t have to be in your $PATH. Make sure the permissions are set to user read/write/execute only to protect the plain text GPG passphrase.
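For example, assuming you saved the script as ~/backup.sh (the same location the service unit below points at):

chmod 700 ~/backup.sh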

Now create a timer and service unit to run it daily.

$ cat $HOME/.config/systemd/user/backup.timer
[Unit]
Description=Run duplicity backup timer

[Timer]
OnCalendar=daily
Unit=backup.service

[Install]
WantedBy=default.target
$ cat $HOME/.config/systemd/user/backup.service
[Service]
Type=oneshot
ExecStart=/home/link/backup.sh

[Unit]
Description=Run duplicity backup
$ systemctl --user enable --now backup.timer
Created symlink /home/link/.config/systemd/user/default.target.wants/backup.timer → /home/link/.config/systemd/user/backup.timer.

Conclusion

This article has described a manual process. But the flexibility in creating specific, customized backup targets is one of duplicity’s most powerful features. The duplicity man page has a lot more detail about various options. The next article will build on this one by creating backup profiles with duply, a wrapper program that makes the raw duplicity invocations easier.

by Link Dupont at July 10, 2017 02:36 PM


July 07, 2017

Fedora Magazine

Clustered computing on Fedora with Minikube

This is a short series introducing Kubernetes, what it does, and how to experiment with it on Fedora. The series is beginner-oriented, covering some higher-level concepts and giving examples of using Kubernetes on Fedora. In the first post, we covered key concepts in Kubernetes. This second post shows you how to build a single-node Kubernetes deployment on your own computer.


Once you have a better understanding of what the key concepts and terminology in Kubernetes are, getting started is easier. Like many programming tutorials, this tutorial shows you how to build a “Hello World” application and deploy it locally on your computer using Kubernetes. This is a simple tutorial because there aren’t multiple nodes to work with. Instead, the only device we’re using is a single node (a.k.a. your computer). By the end, you’ll see how to deploy a Node.js application into a Kubernetes pod and manage it with a deployment on Fedora.

This tutorial isn’t made from scratch. You can find the original tutorial in the official Kubernetes documentation. This article adds some changes that will let you do the same thing on your own Fedora computer.

Introducing Minikube

Minikube is an official tool developed by the Kubernetes team to help make testing it out easier. It lets you run a single-node Kubernetes cluster in a virtual machine on your own hardware. Beyond using it to play around with or experiment for the first time, it’s also useful as a testing tool if you’re working with Kubernetes daily. It supports many of the features you’d want in a production Kubernetes environment, like DNS, NodePorts, and container runtimes.

Installation

This tutorial requires virtual machine and container software. There are many options you can use. Minikube supports virtualbox, vmwarefusion, kvm, and xhyve drivers for virtualization. However, this guide will use KVM since it’s already packaged and available in Fedora. We’ll also use Node.js for building the application and Docker for putting it in a container.

Prerequisites

You can install the prerequisites with this command.

$ sudo dnf install kubernetes libvirt-daemon-kvm kvm nodejs docker

After installing these packages, you’ll need to add your user to the right group to let you use KVM. The following commands will add your user to the group and then update your current session for the group change to take effect.

$ sudo usermod -a -G libvirt $(whoami)
$ newgrp libvirt

Docker KVM drivers

If using KVM, you will also need to install the KVM drivers to work with Docker. You need to add Docker Machine and the Docker Machine KVM Driver to your local path. You can check their pages on GitHub for the latest versions, or you can run the following commands for specific versions. These were tested on a Fedora 25 installation.

Docker Machine
$ curl -L https://github.com/docker/machine/releases/download/v0.12.0/docker-machine-`uname -s`-`uname -m` >/tmp/docker-machine
$ chmod +x /tmp/docker-machine
$ sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
Docker Machine KVM Driver

This installs the CentOS 7 driver, but it also works with Fedora.

$ curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-centos7 >/tmp/docker-machine-driver-kvm
$ chmod +x /tmp/docker-machine-driver-kvm
$ sudo cp /tmp/docker-machine-driver-kvm /usr/local/bin/docker-machine-driver-kvm

Installing Minikube

The final step for installation is getting Minikube itself. Currently there is no Fedora package available, and the official documentation recommends grabbing the binary and moving it to your local path. To download the binary, make it executable, and move it into your path, run the following.

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ chmod +x minikube
$ sudo mv minikube /usr/local/bin/

Now you’re ready to build your cluster.

Create the Minikube cluster

Now that you have everything installed and in the right place, you can create your Minikube cluster and get started. To start Minikube, run this command.

$ minikube start --vm-driver=kvm

Next, you’ll need to set the context. Context is how kubectl (the command-line interface for Kubernetes) knows what it’s dealing with. To set the context for Minikube, run this command.

$ kubectl config use-context minikube

As a check, make sure that kubectl can communicate with your cluster by running this command.

$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Build your application

Now that Kubernetes is ready, we need to have an application to deploy in it. This article uses the same Node.js application as the official tutorial in the Kubernetes documentation. Create a folder called hellonode and create a new file called server.js with your favorite text editor.

var http = require('http');

var handleRequest = function(request, response) {
 console.log('Received request for URL: ' + request.url);
 response.writeHead(200);
 response.end('Hello world!');
};
var www = http.createServer(handleRequest);
www.listen(8080);

Now try running your application.

$ node server.js

While it’s running, you should be able to access it on localhost:8080. Once you verify it’s working, hit Ctrl+C to kill the process.
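A quick check from a second terminal (the greeting comes from the response.end() call above):

$ curl http://localhost:8080
Hello world!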

Create Docker container

Now you have an application to deploy! The next step is to get it packaged into a Docker container (that you’ll pass to Kubernetes later). You’ll need to create a Dockerfile in the same folder as your server.js file. This guide uses an existing Node.js Docker image. It exposes your application on port 8080, copies server.js to the image, and runs it as a server. Your Dockerfile should look like this.

FROM node:6.9.2
EXPOSE 8080
COPY server.js .
CMD node server.js

If you’re familiar with Docker, you’re likely used to pushing your image to a registry. In this case, since we’re deploying it to Minikube, you can build it using the same Docker host as the Minikube virtual machine. For this to happen, you’ll need to use the Minikube Docker daemon.

$ eval $(minikube docker-env)

Now you can build your Docker image with the Minikube Docker daemon.

$ docker build -t hello-node:v1 .

Huzzah! Now you have an image Minikube can run.

Create Minikube deployment

If you remember from the first part of this series, deployments watch your application’s health and reschedule it if it dies. Deployments are the supported way of creating and scaling pods. kubectl run creates a deployment to manage a pod. We’ll create one that uses the hello-node Docker image we just built.

$ kubectl run hello-node --image=hello-node:v1 --port=8080

Next, check that the deployment was created successfully.

$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           30s

Creating the deployment also creates the pod where the application is running. You can view the pod with this command.

$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-node-1644695913-k2314   1/1       Running   0          3

Finally, let’s look at what the configuration looks like. If you’re familiar with Ansible, the configuration files for Kubernetes also use easy-to-read YAML. You can see the full configuration with this command.

$ kubectl config view

kubectl does many things. To read more about what you can do with it, you can read the documentation.

Create service

Right now, the pod is only accessible inside of the Kubernetes pod with its internal IP address. To see it in a web browser, you’ll need to expose it as a service. To expose it as a service, run this command.

$ kubectl expose deployment hello-node --type=LoadBalancer

The type was specified as LoadBalancer because Kubernetes will expose the IP outside of the cluster. If you were running in a cloud environment, this is how you’d provision an external IP address from a load balancer. In this case, however, it exposes your application as a service in Minikube. And now, finally, you get to see your application. Running this command opens it in a new browser window.

$ minikube service hello-node

Minikube: Exposing Hello Minikube application in browser

Congratulations, you deployed your first containerized application via Kubernetes! But now, what if you need to update your small Hello World application?

How do we push changes?

The time has come when you’re ready to make an update and push it. Edit your server.js file and change “Hello world!” to “Hello again, world!”

response.end('Hello again, world!');

And we’ll build another Docker image. Note the version bump.

$ docker build -t hello-node:v2 .

Next, you need to give Kubernetes the new image to deploy.

$ kubectl set image deployment/hello-node hello-node=hello-node:v2

And now, your update is pushed! Like before, run this command to have it open in a new browser window.

$ minikube service hello-node

If your application doesn’t look any different, double-check that you updated the right image. You can troubleshoot by getting a shell into your pod with the following command. Get the pod name from the command run earlier (kubectl get pods). Once you’re in the shell, check whether the server.js file shows your changes.

$ kubectl exec -it <pod-name> bash

Cleaning up

Now that we’re done, we can clean up the environment. To clear up the resources in your cluster, run these two commands.

$ kubectl delete service hello-node
$ kubectl delete deployment hello-node

If you’re done playing with Minikube, you can also stop it.

$ minikube stop

If you’re done using Minikube for a while, you can also unset the Minikube Docker daemon environment that we set earlier in this guide.

$ eval $(minikube docker-env -u)

Learn more about Kubernetes

You can find the original tutorial in the Kubernetes documentation. If you want to read more, there’s plenty of great information online. The documentation provided by Kubernetes is thorough and comprehensive.

Questions, Minikube stories, or tips for beginners? Add your comments below.

by Justin W. Flory at July 07, 2017 02:45 PM


July 05, 2017

Fedora Magazine

Fedora 26 Workstation Wallpapers

The release of Fedora 26 is just around the corner, and the choices for wallpapers in Fedora Workstation are pretty amazing. In addition to the Fedora 26 Default Wallpaper and the GNOME 3.24 Adwaita background, Fedora Workstation now includes a new set of standard backgrounds.

Default Fedora 26 Wallpaper

Every release, the Fedora Design Team creates a default desktop background for Fedora. The Fedora 26 default is:

In addition to the static wallpaper, an ‘animated’ version is also available. This wallpaper transitions slowly throughout the day:


The animated variant of the default wallpaper is not installed by default on Fedora Workstation. To get it, use the command:

sudo dnf install f26-backgrounds-animated

New standard wallpapers

In the past, Fedora included the GNOME set of backgrounds by default in the Fedora Workstation backgrounds chooser. In Fedora 26, the following backgrounds are available by default:

After a fresh install of Fedora Workstation, you will no longer get the GNOME backgrounds by default (other than the Adwaita background). These backgrounds are still available in the repositories in the gnome-backgrounds-extras package. When upgrading to Fedora 26 Workstation, you will get both the new Fedora set and the GNOME set.

If some of the new standard wallpapers look familiar, that’s because they were all Supplemental Wallpapers in previous releases. Looking for even more wallpapers? Check out the Supplemental Wallpapers for Fedora 26, as well as the ones for previous releases.

Adwaita Backgrounds

Fedora Workstation also ships with the Adwaita backgrounds that GNOME changes for each release. The standard Adwaita backgrounds for Fedora 26 Workstation (GNOME 3.24) are:

by Ryan Lerch at July 05, 2017 10:35 AM


July 04, 2017

Fedora Magazine

Add power to your terminal with powerline

A while ago, Fedora Magazine posted an interview with Rackspace architect Major Hayden in which he mentioned the powerline utility. If you often use a terminal, you too might find powerline useful. It gives you helpful status information and helps you stay organized.

For the shell

By default, the shell plugin gives you plenty of helpful data:

  • Login name
  • Local time
  • Current working directory or path. The path is condensed automatically when it grows longer than the terminal width.
  • The number of active background jobs
  • The hostname, when you connect via SSH to a remote system where powerline is installed

This saves you a lot of twiddling with your shell environment and complex scripting! To install the utility, open a terminal and run this command:

sudo dnf install powerline powerline-fonts

The rest of these instructions assume you’re using Fedora’s standard bash shell. If you’re using a different shell, check out the documentation for tips.

Next, configure your bash shell to use powerline by default. Add the following snippet to your ~/.bashrc file:

if [ -f `which powerline-daemon` ]; then
  powerline-daemon -q
  POWERLINE_BASH_CONTINUATION=1
  POWERLINE_BASH_SELECT=1
  . /usr/share/powerline/bash/powerline.sh
fi

To activate the changes, open a new shell or terminal. You should have a terminal that looks like this:

Terminal with powerline running in the bash shell

Try changing directories. Watch how the “breadcrumb” prompt changes to show your current location. Very handy! You’ll also be able to see the number of pending background jobs. And if powerline is installed on a remote system, the prompt includes the hostname when you connect via SSH.

For tmux

If you’re a command line junkie, you probably also know tmux. It allows you to split your terminal into many windows and panes, each containing its own session. But the tmux standard status line is not quite as interesting as what powerline provides by default:

  • Window information
  • System load
  • Time and date
  • Hostname, if you’re connected to a remote system via SSH

Therefore, let’s install the plugin:

sudo dnf install tmux-powerline

Now add this line to your ~/.tmux.conf file:

source "/usr/share/tmux/powerline.conf"

Next, remove or comment out any lines in your tmux configuration for status bar length or content. Examples of these settings are status-left, status-right, status-left-length, and status-right-length.

Your user configuration is stored in ~/.tmux.conf. If you don’t have one, copy an example from the web or /usr/share/tmux to ~/.tmux.conf, and then edit.

When you next start tmux, you should see the powerline status bar:

A tmux session with powerline running the status bar

For vim

If you use the vim editor, you’re also in luck. There’s a powerful plugin for vim, too. By default, it shows:

  • Operating mode (normal, insert, replace)
  • Current path and file name
  • Text encodings
  • Document and line positions

To install it, use this command:

sudo dnf install vim-powerline

Now add the following lines to your ~/.vimrc file:

python3 from powerline.vim import setup as powerline_setup
python3 powerline_setup()
python3 del powerline_setup
set laststatus=2 " Always display the statusline in all windows
set showtabline=2 " Always display the tabline, even if there is only one tab
set noshowmode " Hide the default mode text (e.g. -- INSERT -- below the statusline)
set t_Co=256

Now you can start vim and see a spiffy new status line:

Vim running with powerline status

Configuring powerline

No command line utility is complete without configuration options. The configuration in this case isn’t exactly simple, though; it requires you to edit JSON-formatted files. But there’s a complete configuration guide available in the official documentation. And since the utility is written in Python, it’s eminently hackable.

When you hack the configuration, it’s usually to add, change, or remove segments. There are plenty of segments available, such as:

  • Content of environment variables
  • Version control system data (such as git branch and status!)
  • Weather
  • …and many more.

To change the status layout in an environment, you create or edit configuration files in your ~/.config/powerline/ folder. These configurations are stored as themes for each plugin. You can use the powerline-lint utility to check your configuration for parsing errors after making changes.
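For example, here is a sketch of that workflow. It assumes the default configuration files ship under /usr/share/powerline/config_files, which may differ on your system:

# Copy the default shell theme into your user configuration (path may vary)
mkdir -p ~/.config/powerline/themes/shell
cp /usr/share/powerline/config_files/themes/shell/default.json ~/.config/powerline/themes/shell/
# Edit the JSON to add or remove segments, then check for parsing errors
powerline-lint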

Some changes may require you to reload your session or possibly restart the daemon:

powerline-daemon --replace

Now you can enjoy more sophisticated status data in your favorite tools!

by Paul W. Frields at July 04, 2017 08:00 AM


June 30, 2017

Fedora Magazine

Introduction to Kubernetes with Fedora

This article is part of a short series that introduces Kubernetes. This beginner-oriented series covers some higher level concepts and gives examples of using Kubernetes on Fedora.


The information technology world changes daily, and building scalable infrastructure becomes ever more important. Containers aren’t anything new these days, and have various uses and implementations. But what about building scalable, containerized applications? By themselves, Docker and similar tools don’t quite cut it as far as building the infrastructure to support containers. How do you deploy, scale, and manage containerized applications in your infrastructure? This is where tools such as Kubernetes come in. Kubernetes is an open source system that automates deployment, scaling, and management of containerized applications. It was originally developed by Google before being donated to the Cloud Native Computing Foundation, a project of the Linux Foundation. This article gives a quick introduction to what Kubernetes is and what some of the buzzwords really mean.

What is Kubernetes?

Kubernetes simplifies and automates the process of deploying containerized applications at scale. Just as Ansible orchestrates software, Kubernetes orchestrates the infrastructure that supports the software. There are various “layers of the cake” that make Kubernetes a strong solution for building resilient infrastructure and systems that grow at scale. If your application faces increasing demands, such as higher traffic, Kubernetes helps grow your environment to support them. This is one reason Kubernetes is helpful for building long-term solutions to complex problems (even if yours isn’t complex… yet).

Kubernetes: The high-level design. Image credit: Daniel Smith, Robert Bailey, Kit Merker.

At a high level, imagine three different layers:

  • Users: People who deploy or create containerized applications to run in your infrastructure
  • Master(s): Manages and schedules your software across various other machines, for example in a clustered computing environment
  • Nodes: The machines that run your application workloads; each node runs an agent called the kubelet

These three layers are orchestrated and automated by Kubernetes. One of the key pieces of the master (not included in the visual) is etcd. etcd is a lightweight, distributed key/value store that holds configuration data. Each node’s kubelet can access this data in etcd through an HTTP/JSON API. The components of communication between master and node, such as etcd, are explained in the official documentation.

Another important detail not shown in the diagram is that you can have more than one master. In a high-availability (HA) setup, multiple masters keep your infrastructure resilient in case one happens to go down.

Terminology

It’s important to understand the concepts of Kubernetes before you start to play around with it. There are many core concepts in Kubernetes, such as services, volumes, secrets, daemon sets, and jobs. However, this article explains four that are helpful for the next exercise of building a mini Kubernetes cluster. The four concepts are pods, labels, replica sets, and deployments.

Pods

If you imagine Kubernetes as a Lego® castle, pods are the smallest block you can pick out. By themselves, they are the smallest unit you can deploy. The containers of an application fit into a pod. A pod can hold one container, but it can also hold as many as needed. Containers in a pod are unusual in that they share Linux namespaces and aren’t isolated from each other. In a world before containers, this would be similar to running the processes of an application on the same host machine.

Because they share the same namespaces, all the containers in a pod:

  • Share an IP address
  • Share port space
  • Find each other over localhost
  • Communicate over IPC namespace
  • Have access to shared volumes

But what’s the point of having pods? The main purpose of pods is to have groups of “helping” containers co-located (in the same namespace) and co-managed (integrated together) along with the main application container. Some examples might be logging or monitoring tools that check the health of your application, or backup tools that act when certain data changes.

In the big picture, containers in a single pod are always scheduled together too. However, Kubernetes doesn’t automatically reschedule them to a new node if the node dies (more on this later).
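Once you have a cluster to play with (the next article builds one), a few kubectl commands let you poke at pods. This is only a sketch; the pod and container names here are hypothetical:

kubectl get pods                            # list pods in the current namespace
kubectl describe pod my-app-pod             # show its containers, labels, and events
kubectl logs my-app-pod -c logging-sidecar  # read one container's logs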

Labels

Labels are a simple but important concept in Kubernetes. Labels are key/value pairs attached to objects in Kubernetes, like pods. They let you give objects attributes that actually mean something to humans. You can attach them when you create an object, and modify or add them later. Labels help you organize and select different sets of objects to interact with when performing actions inside of Kubernetes. For example, you can identify:

  • Software releases: Alpha, beta, stable
  • Environments: Development, production
  • Tiers: Front-end, back-end

Labels are as flexible as you need them to be, and this list isn’t comprehensive. Be creative when thinking of how to apply them.
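For example, attaching and selecting by labels with kubectl might look like this; the pod name and label values are hypothetical:

# Attach labels to an existing pod
kubectl label pod my-app-pod environment=production tier=front-end
# Select only the objects that match a label query
kubectl get pods -l environment=production,tier=front-end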

Replica sets

Replica sets are where some of the magic begins to happen with automatic scheduling or rescheduling. Replica sets ensure that a number of pod instances (called replicas) are running at any moment. If your web application needs to constantly have four pods in the front-end and two in the back-end, replica sets are your insurance that those numbers are always maintained. This also makes Kubernetes great for scaling. If you need to scale up or down, change the number of replicas, as the sketch below shows.
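In practice you change the replica count through the deployment that owns the replica set (deployments are covered next). A sketch, with a hypothetical deployment name:

# Change the desired number of replicas; the replica set enforces it
kubectl scale deployment/front-end --replicas=4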

When reading about replica sets, you might also see replication controllers. They are somewhat interchangeable, but replication controllers are older, semi-deprecated, and less powerful than replica sets. The main difference is that sets work with more advanced set-based selectors — which goes back to labels. Ideally, you won’t have to worry about this much today.

Even though replica sets are where the scheduling magic happens to help make your infrastructure resilient, you won’t actually interact with them much. Replica sets are managed by deployments, so it’s unusual to directly create or manipulate replica sets. And guess what’s next?

Deployments

Deployments are another important concept inside of Kubernetes. Deployments are a declarative way to deploy and manage software. If you’re familiar with Ansible, you can compare deployments to Ansible playbooks. If you’re building out your infrastructure, you want to make sure it is easily reproducible without much manual work. Deployments are the way to do this.

Deployments offer functionality such as revision history, so it’s always easy to roll back changes if something doesn’t work out. They also manage any updates you push out to your application; if an update isn’t working, Kubernetes stops rolling it out and reverts to the last working state. Deployments follow the mathematical property of idempotence: you define your specs once and apply them many times to get the same result.
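With kubectl, that history and rollback are directly accessible. A sketch, with a hypothetical deployment name:

kubectl rollout history deployment/front-end   # inspect past revisions
kubectl rollout undo deployment/front-end      # revert to the previous revision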

Deployments also get into imperative and declarative ways to build infrastructure, but this explanation is a quick, fly-by overview. You can read more detailed information in the official documentation.

Installing on Fedora

If you want to start playing with Kubernetes, install it and some useful tools from the Fedora repositories.

sudo dnf install kubernetes

This command provides the bare minimum needed to get started. You can also install other cool tools like cockpit-kubernetes (integration with Cockpit) and kubernetes-ansible (provisioning Kubernetes with Ansible playbooks and roles).

Learn more about Kubernetes

If you want to read more about Kubernetes or want to explore the concepts more, there’s plenty of great information online. The documentation provided by Kubernetes is fantastic, but there are also other helpful guides from DigitalOcean and Giant Swarm. The next article in the series will explore building a mini Kubernetes cluster on your own computer to see how it really works.

Questions, Kubernetes stories, or tips for beginners? Add your comments below.

by Justin W. Flory at June 30, 2017 09:42 PM


June 28, 2017


Fedora Magazine

Testing modules and containers with Modularity Testing Framework

Fedora Modularity is a project within Fedora with the goal of building a modular operating system with multiple versions of components on different lifecycles. Fedora 26 features the first look at the modularity vision: the Fedora 26 Boltron Server. If you’re jumping into the modularity world and creating and deploying your own modules and containers, your next question may be how to test these artifacts. The Modularity Testing Framework (MTF) is designed for testing artifacts such as modules, RPM base repos, containers, and other artifact types. It helps you write tests easily, and the tests are independent of the type of the module.

MTF is a minimalistic library built on the existing avocado and behave testing frameworks, enabling developers to set up test automation for various module aspects and requirements quickly. MTF adds basic support and abstraction for testing various module artifact types: RPM based, docker images, ISOs, and more. For detailed information about the framework and how to use it, check out the MTF Documentation.

Installing MTF

The Modularity Testing Framework is available in the official Fedora repositories. Install MTF using the command:

dnf install -y modularity-testing-framework

A COPR is available if you want to use the untested, unstable version. Install via COPR with the commands:

dnf copr enable phracek/Modularity-testing-framework
dnf install -y modularity-testing-framework

Writing a simple test

Creating a testing directory structure

First, create the tests/ directory in the root directory of the module. In the tests/ directory, create a Makefile file:

MODULE_LINT=/usr/share/moduleframework/tools/modulelint/*.py
# List the test files by name (such as sanity1.py sanity2.py), separated by
# spaces, rather than relying on a broad *.py glob
TESTS=*.py

CMD=python -m avocado run $(MODULE_LINT) $(TESTS)

all:
    generator  # only needed when tests are also defined in the config.yaml file (described below)
    $(CMD)

In the root directory of the module, create a Makefile containing a test target. For example:

.PHONY: build run default

IMAGE_NAME = memcached

MODULEMDURL=file://memcached.yaml

default: run

build:
    docker build --tag=$(IMAGE_NAME) .

run: build
    docker run -d $(IMAGE_NAME)

test: build
    # test the docker image available on Docker Hub
    cd tests; MODULE=docker MODULEMD=$(MODULEMDURL) URL="docker.io/modularitycontainers/memcached" make all
    # test the docker image available locally; the Dockerfile and relevant
    # files have to be stored in the root directory of the module
    cd tests; MODULE=docker MODULEMD=$(MODULEMDURL) URL="docker=$(IMAGE_NAME)" make all
    # test "modules" on the local system
    cd tests; MODULE=rpm MODULEMD=$(MODULEMDURL) URL="https://kojipkgs.fedoraproject.org/compose/latest-Fedora-Modular-26/compose/Server/x86_64/os/" make all

In the tests/ directory, place the config.yaml configuration file for module testing. See minimal-config.yaml for reference. For example:

document: modularity-testing
version: 1
name: memcached
modulemd-url: http://raw.githubusercontent.com/container-images/memcached/master/memcached.yaml
service:
    port: 11211
packages:
    rpms:
        - memcached
        - perl-Carp
testdependecies:
    rpms:
        - nc
module:
    docker:
        start: "docker run -it -e CACHE_SIZE=128 -p 11211:11211"
        labels:
            description: "memcached is a high-performance, distributed memory"
            io.k8s.description: "memcached is a high-performance, distributed memory"
        source: https://github.com/container-images/memcached.git
        container: docker.io/modularitycontainers/memcached
    rpm:
        start: /usr/bin/memcached -p 11211 &
        repo:
           - https://kojipkgs.fedoraproject.org/compose/latest-Fedora-Modular-26/compose/Server/x86_64/os/

test:
    processrunning:
        - 'ls  /proc/*/exe -alh | grep memcached'
testhost:
    selfcheck:
        - 'echo errr | nc localhost 11211'
        - 'echo set AAA 0 4 2 | nc localhost 11211'
        - 'echo get AAA | nc localhost 11211'
    selcheckError:
        - 'echo errr | nc localhost 11211 |grep ERROR'

Next, add a Python file such as simpleTest.py, which tests a service or an application, to the tests/ directory:

#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# This Modularity Testing Framework helps you to write tests for modules
# Copyright (C) 2017 Red Hat, Inc.

import socket
from avocado import main
from avocado.core import exceptions
from moduleframework import module_framework


class SanityCheck1(module_framework.AvocadoTest):
    """
    :avocado: enable
    """

    def testSettingTestVariable(self):
        self.start()
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(('localhost', self.getConfig()['service']['port']))
        s.sendall('set Test 0 100 4\r\n\n')
        #data = s.recv(1024)
        # print data

        s.sendall('get Test\r\n')
        #data = s.recv(1024)
        # print data
        s.close()

    def testBinExistsInRootDir(self):
        self.start()
        self.run("ls / | grep bin")

    def test3GccSkipped(self):
        module_framework.skipTestIf("gcc" not in self.getActualProfile())
        self.start()
        self.run("gcc -v")

if __name__ == '__main__':
    main()

Running tests

To execute tests from the root directory of the module, type:

# run tests from a module root directory
$ sudo make test

The result looks like:

docker build --tag=memcached .
Sending build context to Docker daemon 268.3 kB
Step 1 : FROM baseruntime/baseruntime:latest
---> 0cbcd55844e4
Step 2 : ENV NAME memcached ARCH x86_64
---> Using cache
---> 16edc6a5f7b6
Step 3 : LABEL MAINTAINER "Petr Hracek" <phracek@redhat.com>
---> Using cache
---> 693d322beab2
Step 4 : LABEL summary "High Performance, Distributed Memory Object Cache" name "$FGC/$NAME" version "0" release "1.$DISTTAG" architecture "$ARCH" com.redhat.component $NAME usage "docker run -p 11211:11211 f26/memcached" help "Runs memcached, which listens on port 11211. No dependencies. See Help File below for more details." description "memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load." io.k8s.description "memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load." io.k8s.diplay-name "Memcached 1.4 " io.openshift.expose-services "11211:memcached" io.openshift.tags "memcached"
---> Using cache
---> eea936c1ae23
Step 5 : COPY repos/* /etc/yum.repos.d/
---> Using cache
---> 920155da88d9
Step 6 : RUN microdnf --nodocs --enablerepo memcached install memcached &&     microdnf -y clean all
---> Using cache
---> c83e613f0806
Step 7 : ADD files /files
---> Using cache
---> 7ec5f42c0064
Step 8 : ADD help.md README.md /
---> Using cache
---> 34702988730f
Step 9 : EXPOSE 11211
---> Using cache
---> 577ef9f0d784
Step 10 : USER 1000
---> Using cache
---> 671ac91ec4e5
Step 11 : CMD /files/memcached.sh
---> Using cache
---> 9c933477acc1
Successfully built 9c933477acc1
cd tests; MODULE=docker MODULEMD=file://memcached.yaml URL="docker=memcached" make all
make[1]: Entering directory '/home/phracek/work/FedoraModules/memcached/tests'
Added test (runmethod: run): processrunning
Added test (runmethod: runHost): selfcheck
Added test (runmethod: runHost): selcheckError
python -m avocado run --filter-by-tags=-WIP /usr/share/moduleframework/tools/modulelint.py *.py
JOB ID     : 9ba3a3f9fd982ea087f4d4de6708b88cee15cbab
JOB LOG    : /root/avocado/job-results/job-2017-06-14T16.25-9ba3a3f/job.log
(01/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testDockerFromBaseruntime: PASS (1.52 s)
(02/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testDockerRunMicrodnf: PASS (1.53 s)
(03/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testArchitectureInEnvAndLabelExists: PASS (1.63 s)
(04/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testNameInEnvAndLabelExists: PASS (1.61 s)
(05/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testReleaseLabelExists: PASS (1.60 s)
(06/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testVersionLabelExists: PASS (1.45 s)
(07/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testComRedHatComponentLabelExists: PASS (1.64 s)
(08/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testIok8sDescriptionExists: PASS (1.51 s)
(09/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testIoOpenshiftExposeServicesExists: PASS (1.50 s)
(10/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testIoOpenShiftTagsExists: PASS (1.53 s)
(11/20) /usr/share/moduleframework/tools/modulelint.py:DockerLint.testBasic: PASS (13.75 s)
(12/20) /usr/share/moduleframework/tools/modulelint.py:DockerLint.testContainerIsRunning: PASS (14.19 s)
(13/20) /usr/share/moduleframework/tools/modulelint.py:DockerLint.testLabels: PASS (1.57 s)
(14/20) /usr/share/moduleframework/tools/modulelint.py:ModuleLintPackagesCheck.test: PASS (14.03 s)
(15/20) generated.py:GeneratedTestsConfig.test_processrunning: PASS (13.77 s)
(16/20) generated.py:GeneratedTestsConfig.test_selfcheck: PASS (13.85 s)
(17/20) generated.py:GeneratedTestsConfig.test_selcheckError: PASS (14.32 s)
(18/20) sanity1.py:SanityCheck1.testSettingTestVariable: PASS (13.86 s)
(19/20) sanity1.py:SanityCheck1.testBinExistsInRootDir: PASS (13.81 s)
(20/20) sanity1.py:SanityCheck1.test3GccSkipped: ERROR (13.84 s)
RESULTS    : PASS 19 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 144.85 s
JOB HTML   : /root/avocado/job-results/job-2017-06-14T16.25-9ba3a3f/html/results.html
Makefile:6: recipe for target 'all' failed
make[1]: *** [all] Error 1
Makefile:14: recipe for target 'test' failed
make: *** [test] Error 2
$

To execute tests from the tests/ directory, type:

# run Python tests from the tests/ directory
$ sudo MODULE=docker avocado run ./*.py

The result looks like:

$ sudo MODULE=docker avocado run ./*.py
[sudo] password for phracek:
JOB ID     : 2a171b762d8ab2c610a89862a88c015588823d29
JOB LOG    : /root/avocado/job-results/job-2017-06-14T16.43-2a171b7/job.log
(1/6) ./generated.py:GeneratedTestsConfig.test_processrunning: PASS (24.79 s)
(2/6) ./generated.py:GeneratedTestsConfig.test_selfcheck: PASS (18.18 s)
(3/6) ./generated.py:GeneratedTestsConfig.test_selcheckError: ERROR (24.16 s)
(4/6) ./sanity1.py:SanityCheck1.testSettingTestVariable: PASS (18.88 s)
(5/6) ./sanity1.py:SanityCheck1.testBinExistsInRootDir: PASS (17.87 s)
(6/6) ./sanity1.py:SanityCheck1.test3GccSkipped: ERROR (19.30 s)
RESULTS    : PASS 4 | ERROR 2 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 124.19 s
JOB HTML   : /root/avocado/job-results/job-2017-06-14T16.43-2a171b7/html/results.html

by Petr Hracek at June 28, 2017 10:50 AM


June 26, 2017

Fedora Magazine

Upcoming Fedora Atomic Host lifecycle changes

The Fedora Project ships new Fedora Server and Workstation releases at roughly six-month intervals. It then maintains each release for around thirteen months. So Fedora N is supported by the community until one month after the release of Fedora N+2. Since the first Fedora Atomic Host shipped, as part of Fedora 21, the project has maintained separate ostree repositories for both active Fedora releases. For instance, there are currently trees available for Fedora Atomic 25 and Fedora Atomic 24.

Fedora Atomic sets out to be a particularly fast-moving branch of Fedora. It ships releases every two weeks, delivering updates to key Atomic Host components such as Docker and Kubernetes more quickly than one might expect from the other releases of Fedora.

Due in part to this faster pace, the Fedora Atomic Working Group has always focused its testing and integration efforts most directly on the latest stable release. The group encourages users of the older release to rebase to the newer tree as soon as possible. Releases older than the current tree are supported only on a best effort basis. This means the ostree is updated, but there is no organized testing of older releases.

Upcoming changes

This will change with either the Fedora 26 to 27 or the 27 to 28 upgrade cycle (depending on readiness). The Fedora Atomic Working Group will then collapse Fedora Atomic into a single version. That release will track the latest stable Fedora branch. When a new stable version of Fedora is released, Fedora Atomic users will automatically shift to the new version when they install updates.

Traditional OS upgrades can be disruptive and error-prone. Due to the image-based technologies that Atomic Hosts use for system components (rpm-ostree) and for applications (Linux containers), upgrading an Atomic Host between major releases is like installing updates within a single release. In both scenarios, the system updates are applied by running an rpm-ostree command and rebooting, and a rollback to the previous state is available in case something goes wrong. Applications running in containers are unaffected by the host upgrade or update.
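For example, the whole workflow, including the safety net, is only a few commands:

sudo rpm-ostree upgrade    # stage the latest tree
sudo systemctl reboot      # boot into it
sudo rpm-ostree rollback   # if needed, make the previous deployment the default again, then reboot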

If you’d like to get involved in the Fedora Atomic Working Group, come talk to us in IRC in #fedora-cloud or #atomic on Freenode, or join the Atomic WG on Pagure.

by jasonbrooks at June 26, 2017 08:00 AM


June 23, 2017

Fedora Magazine

gThumb: View and manage your photos in Fedora

Fedora uses Eye of GNOME to display images, but it’s a very basic program. Out of the box, Fedora doesn’t have a great tool for managing photos. If you’re familiar with the Fedora Workstation’s desktop environment, GNOME, then you may be familiar with GNOME Photos. This is a young app available in GNOME Software that seeks to make managing photos a painless task. You may not know that there’s a more robust tool out there that packs more features and looks just as at home on Fedora. It’s called gThumb.

What is gThumb?

gThumb is hardly a new piece of software. The program has been around since 2001, though it looks very different now than it did back then. As GNOME has changed, so has gThumb. Today it’s the most feature-rich way of managing images using a GNOME 3 style interface.

While gThumb is an image viewer that you can use to replace Eye of GNOME, that’s only the beginning of what it can do. Thanks to the inclusion of features you would normally find in photo managers like digiKam or the now discontinued Picasa, I use it to view the pictures I capture with my DSLR camera.

How gThumb handles photos

At its core, gThumb is an image viewer. While it can organize your collection, its primary function is to display pictures in the folders they’re already in. It doesn’t move them around. I consider this a plus.

I download images from my camera using Rapid Photo Downloader, which organizes and renames files precisely as I want them. All I want from a photo manager is the ability to easily view these images without much fuss.

That’s not to say that gThumb doesn’t offer any of the extra organizational tools you may expect from a photo manager. It comes with a few.

Labeling, grouping, and organizing

Determining your photo’s physical location on your hard drive is only one of many ways to keep up with your images. Once your collection grows, you may want to use tags. These are keywords that can help you mark and recall pictures of a certain type, such as birthdays, visits to the park, and sporting events. To remember details about a specific picture, you can leave a comment.

gThumb lets you save photos to one of three collections, indicated by three flags in the bottom right corner. These groups are color coordinated, with the options being green, red, and blue. It’s up to you to remember which collections correspond with what color.

Alternatively, you can let gThumb organize your images into catalogs. Catalogs can be based on the date images were taken, the date they were edited, or by tags.

It’s also an image editor

gThumb provides enough editing functions to meet most of my needs. It can crop photos, rotate them, and adjust aspects such as contrast, lightness, and saturation. It can also remove red-eye. I still fire up the GIMP whenever I need to do any serious editing, but gThumb is a much faster way of handling the basics.

gThumb is maintained by the GNOME Project, just like Eye of GNOME and GNOME Photos. Each offers a different degree of functionality. Before you walk away thinking that GNOME’s integrated photo viewers are all too basic, give gThumb a try. It has become my favorite photo manager for Linux.

by Bertel King at June 23, 2017 08:00 AM


June 21, 2017

Fedora Magazine

Controlling Windows via Ansible

For many Linux systems engineers, Ansible has become a way of life. They use Ansible to orchestrate complex deployment processes, to define multiple systems with a quick and simple configuration management tool, or somewhere in between.

However, Microsoft Windows users have generally required a different set of tools to manage systems. They also often needed a different mindset on how to handle them.

Recently Ansible has improved this situation quite a bit. The Ansible 2.3 release included a bunch of new modules for this purpose. Ansible 2.3.1 is already available for Fedora 26.

At AnsibleFest London 2017, Matt Davis, Senior Principal Software Engineer at Ansible, will lead a session covering this topic in some detail. In this article we look at how to prepare Windows systems to enable this functionality along with a few things we can do with it.

Preparing the target systems

There are a couple of prerequisites required to prepare a Windows system so Ansible can connect. The connection type used for this is winrm, the Windows Remote Management protocol.

When using this connection type, Ansible executes PowerShell on the target system. This requires at least PowerShell 3.0, although it’s recommended to install the most recent version of the Windows Management Framework, which at the time of writing is 5.1 and includes PowerShell 5.1.

With that in place, the WinRM service needs to be configured on the Windows system. The easiest way to do this is with the ConfigureRemotingForAnsible.ps1 script provided by the Ansible project.

By default, the PowerShell ExecutionPolicy allows commands, but not scripts, to be executed. However, this can be bypassed by running the script via the powershell executable.

Run powershell as an administrative user and then:

powershell.exe  -ExecutionPolicy Bypass -File ConfigureRemotingForAnsible.ps1 -CertValidityDays 3650 -Verbose

After this the WinRM service will be listening and any user with administrative privileges will be able to authenticate and connect.

Although it’s possible to use CredSSP or Kerberos for delegated authentication (single sign-on), the simplest method just makes use of a username and password via NTLM authentication.

To configure the winrm connector itself there are a few different variables, but the bare minimum to make this work for any Windows system is:

ansible_user: 'localAdminUser'
ansible_password: 'P455w0rd'
ansible_connection: 'winrm'
ansible_winrm_server_cert_validation: 'ignore'

The last line is important with the default self-signed certificates that Windows uses for WinRM, but can be removed if using verified certificates from a central CA for the systems.

So with that in place how flexible is it? How much can really be remotely controlled and configured?

Step one on the controlling computer is to install Ansible and the winrm libraries:

dnf -y install ansible python2-winrm

With that ready, a fair number of the core modules are available, but the majority of tasks use Windows-specific modules.
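Before going further, a quick connectivity check with the win_ping module confirms that authentication and the WinRM transport work. This reuses the same hypothetical host name as the examples below:

ansible -i mywindowssystem, -c winrm -e ansible_winrm_server_cert_validation=ignore -u administrator -k -m win_ping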

Remote Windows updates

Using Ansible to define your Windows systems’ updates allows them to be checked and deployed remotely, whether they come directly from Microsoft or from an internal Windows Server Update Services instance:

ansible -i mywindowssystem, -c winrm -e ansible_winrm_server_cert_validation=ignore -u administrator -k -m win_updates

mywindowssystem | SUCCESS => {
    "changed": true, 
    "failed_update_count": 0, 
    "found_update_count": 3, 
    "installed_update_count": 3, 
    "reboot_required": true, 
    "updates": {
        "488ad51b-afca-46b9-b0de-bdbb4f56672f": {
            "id": "488ad51b-afca-46b9-b0de-bdbb4f56672f", 
            "installed": true, 
            "kb": [
                "4022726"
            ], 
            "title": "2017-06 Security Monthly Quality Rollup for Windows 8.1 for x64-based Systems (KB4022726)"
        }, 
        "94e2e9ab-e2f7-4f8c-9ade-602a0511cc08": {
            "id": "94e2e9ab-e2f7-4f8c-9ade-602a0511cc08", 
            "installed": true, 
            "kb": [
                "4022730"
            ], 
            "title": "2017-06 Security Update for Adobe Flash Player for Windows 8.1 for x64-based Systems (KB4022730)"
        }, 
        "ade56166-6d55-45a5-9e31-0fac924e4bbe": {
            "id": "ade56166-6d55-45a5-9e31-0fac924e4bbe", 
            "installed": true, 
            "kb": [
                "890830"
            ], 
            "title": "Windows Malicious Software Removal Tool for Windows 8, 8.1, 10 and Windows Server 2012, 2012 R2, 2016 x64 Edition - June 2017 (KB890830)"
        }
    }
}

Rebooting automatically is also possible with a small playbook:

- hosts: windows
  tasks:
    - name: apply critical and security windows updates 
      win_updates:
        category_names: 
          - SecurityUpdates
          - CriticalUpdates
      register: wuout
    - name: reboot if required
      win_reboot:
      when: wuout.reboot_required

Package management

There are two ways to handle package installs on Windows using Ansible.

The first is to use win_package, which can install any MSI or run an executable installer from a network share or URI. This is useful for more locked-down internal networks with no internet connectivity, or for applications not available through Chocolatey. To avoid re-running an installer and to keep plays safe to re-run, it’s important to look up the product ID from the registry so that win_package can detect whether it’s already installed.
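As a sketch, a minimal win_package task might look like this. The network share path is a hypothetical placeholder, and the product ID shown is the 7-Zip example from the Ansible documentation:

- name: install 7-Zip from an internal share
  win_package:
    path: \\fileserver\installers\7z1604-x64.msi
    product_id: '{23170F69-40C1-2702-1604-000001000000}'
    state: present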

The second is to use Chocolatey, referenced briefly above. There is no setup required for this on the target system, as the win_chocolatey module automatically installs the Chocolatey package manager if it’s not already present. To install the Java 8 Runtime Environment via Chocolatey, it’s as simple as:

ansible -i mywindowssystem, -c winrm -e ansible_winrm_server_cert_validation=ignore -u administrator -k -m win_chocolatey -a "name=jre8"
mywindowssystem | SUCCESS => {
    "changed": true, 
    "rc": 0
}

And the rest…

The list of Windows modules is growing as Ansible development continues, so always check the documentation for the up-to-date set. Of course, it’s always possible to execute raw PowerShell as well:

ansible -i mywindowssystem, -c winrm -e ansible_winrm_server_cert_validation=ignore -u administrator -k -m win_shell -a "Get-Process"
mywindowssystem | SUCCESS | rc=0 >>

Handles  NPM(K)    PM(K)      WS(K)     CPU(s)     Id  SI ProcessName          
-------  ------    -----      -----     ------     --  -- -----------          
     28       4     2136       2740       0.00   2452   0 cmd                  
     40       5     1024       3032       0.00   2172   0 conhost              
    522      13     2264       5204       0.77    356   0 csrss                
     83       8     1724       3788       0.20    392   1 csrss                
    106       8     1936       5928       0.02   2516   0 dllhost              
     84       9     1412        528       0.02   1804   0 GoogleCrashHandler   
     77       7     1448        324       0.03   1968   0 GoogleCrashHandler64 
      0       0        0         24                 0   0 Idle                 
      

With the collection of modules already available, and the help of utilities like Chocolatey, it’s already possible to manage the vast majority of a Windows estate with Ansible. That allows many of the same techniques and best practices long embedded in Linux culture to make the transition over the fence, even for more complex actions such as joining or creating an Active Directory domain.

ansible -i mywindowssystem, -c winrm -e ansible_winrm_server_cert_validation=ignore -u administrator -k -m win_say -a "msg='I love my ansible, and it loves me'"

by James Hogarth at June 21, 2017 08:00 AM