Category:Travel Log
<hr />
This category is my (as yet unorganised) '''Travel Log''' of trips to many places around the world. (Note: The following is very much an ''incomplete'' travel log.)<br />
<br />
== Auto ==<br />
<br />
===Berlin trip (2006)===<br />
* Monaco &rarr; Milano &rarr; Ljubljana &rarr; Rotterdam &rarr; Berlin &rarr; Copenhagen &rarr; Monaco: April 2006<br />
: [http://triptracker.net/trip/1165/ TripTracker]<br />
: 1-Apr-2006 (14h20): Monaco &rarr; Milano<br />
: 2-Apr-2006 (23h30): Milano &rarr; Ljubljana<br />
: 3-Apr-2006 &ndash; 5-Apr-2006: Slovenia (Ljubljana, Novo Mesto, Kranj, Postojna, Jesenice, etc.)<br />
: 5-Apr-2006 (12h30): |&larr; Austria (Villach)<br />
: 5-Apr-2006 (15h15): |&larr; Germany<br />
: 5-Apr-2006 (19h15): Stuttgart<br />
: 5-Apr-2006 (20h20): Karlsruhe<br />
: 5-Apr-2006 (23h30): Köln<br />
: 6-Apr-2006 (00h10): |&larr; The Netherlands<br />
: 6-Apr-2006 (02h00): Rotterdam<br />
: 7-Apr-2006 (12h00): |&rarr; Rotterdam<br />
: 7-Apr-2006 (14h45): |&larr; Germany<br />
: 7-Apr-2006 (17h00): Hannover<br />
: 7-Apr-2006 (18h30): Magdeburg<br />
: 7-Apr-2006 (20h00): Berlin<br />
: 8-Apr-2006 (15h30): |&rarr; Berlin<br />
: 8-Apr-2006 (18h00): Rostock<br />
: 8-Apr-2006 (19h30): Ferry (|&rarr; Germany from Rostock Harb.)<br />
: 8-Apr-2006 (21h15): Ferry (|&larr; Denmark at Gedser)<br />
: 8-Apr-2006 (23h20): København<br />
: 9-Apr-2006 (06h30): |&rarr; København<br />
: 9-Apr-2006 (09h00): Ferry (|&rarr; Denmark from Gedser)<br />
: 9-Apr-2006 (11h00): Ferry (|&larr; Germany at Rostock Harb.)<br />
: 9-Apr-2006 (13h30): |&larr; Berlin<br />
: 9-Apr-2006 (14h00): |&rarr; Berlin<br />
: 9-Apr-2006 (15h50): Dresden<br />
:10-Apr-2006 (00h45): |&larr; Slovenia<br />
:10-Apr-2006 (01h40): Ljubljana<br />
:10-Apr-2006 (02h40): Postojna<br />
:10-Apr-2006 (13h15): |&larr; Italy<br />
:10-Apr-2006 (15h00): Padova<br />
:10-Apr-2006 (15h40): Verona<br />
:10-Apr-2006 (18h50): Genova<br />
:10-Apr-2006 (20h35): |&larr; France<br />
:10-Apr-2006 (20h45): |&larr; Monaco<br />
<br />
===Canada trip (2001)===<br />
''Note: The total trip covered 11,893 km (7,390 miles).''<br />
*Corvallis, OR &rarr; Boston, MA &rarr; Quebec &rarr; Ontario &rarr; Manitoba &rarr; Saskatchewan &rarr; Alberta &rarr; British Columbia &rarr; Corvallis, OR<br />
** 01-Sep-2001 (??h??): |&rarr; Corvallis, OR<br />
** 06-Sep-2001 (15h45): |&larr; Massachusetts<br />
** 13-Sep-2001 (13h15): |&rarr; Westborough, MA<br />
** 13-Sep-2001 (17h46): Augusta, ME<br />
** 13-Sep-2001 (18h15): |&larr; CANADA (into Quebec)<br />
** 14-Sep-2001 (02h06): Grande Allee Est., Quebec<br />
** 14-Sep-2001 (15h01): Cap-Madeleine, PQ<br />
** 14-Sep-2001 (17h45): |&larr; Ontario<br />
** 14-Sep-2001 (20h03): Cobden, ON<br />
** 15-Sep-2001 (12h02): Sudbury, ON<br />
** 15-Sep-2001 (10h25): Wawa, ON<br />
** 15-Sep-2001 (17h44): Thunder Bay, ON<br />
** 15-Sep-2001 (22h01): Kenora, ON<br />
** 15-Sep-2001 (10h37): |&larr; Manitoba<br />
** 16-Sep-2001 (10h53): Brandon, MB<br />
** 16-Sep-2001 (12h50): |&larr; Saskatchewan<br />
** 16-Sep-2001 (16h09): Herbert, SK<br />
** 16-Sep-2001 (18h06): |&larr; Alberta<br />
** 16-Sep-2001 (23h00): |&larr; British Columbia<br />
** 17-Sep-2001 (00h30): |&larr; USA (into Idaho)<br />
** 17-Sep-2001 (03h36): Coeur d'Alene, ID<br />
** 17-Sep-2001 (05h30): |&larr; Oregon<br />
<br />
===Ireland trip (1999-2000)===<br />
* 26-Dec-1999 (??h??): Dublin, Ireland<br />
* 26-Dec-1999 (16h13): Lord Edward St., Dublin<br />
* 27-Dec-1999 (??h??): Kinlay House, Christchurch, 2-12 Lord Edward St., Dublin, Ireland<br />
* 2?-Dec-1999 (??h??): Kilkenny<br />
* 28-Dec-1999 (12h27): Patrick St., Cork<br />
* 28-Dec-1999 (17h12): Mallow, Co. Cork<br />
* 29-Dec-1999 (??h??): Co. Kerry<br />
* ??-Dec-1999 (??h??): Saratoga House (Bed & Breakfast), Muckross Road, Killarney, Ireland<br />
* 29-Dec-1999 (15h09): Chapel St., Limerick<br />
* 29-Dec-1999 (15h18): Eimear<br />
* 30-Dec-1999 (??h??): Ballybofey<br />
* 30-Dec-1999 (15h51): Greysteel<br />
* 30-Dec-1999 (??h??): O'Connell St., Sligo<br />
* 30-Dec-1999 (??h??): Petra, Galway<br />
* 30-Dec-1999 (??h??): Sligo<br />
* 30-Dec-1999 (??h??): The Linen House Backpackers Hostel, 18-20 Kent Street, Belfast, Ireland<br />
* 01-Jan-2000 (14h46): Arthur Sq., Belfast<br />
* 02-Jan-2000 (06h34): Dublin Airport<br />
<br />
===Miscellaneous (Europe)===<br />
* Budapest, Hungary &rarr; Dubrovnik, Croatia: June/July 2018 (round-trip)<br />
* ''The Cliffs of Møn'', DK: Oct-2005<br />
* Budapest, Hungary &rarr; Vienna, Austria &rarr; Salzburg, Austria &rarr; Chiemsee, Germany: Oct-1996 (round-trip)<br />
* Zagreb, Croatia &rarr; Ljubljana, Slovenia &rarr; Graz, Austria &rarr; Budapest, Hungary: Sep-1996<br />
* Zagreb, Croatia &rarr; Ljubljana, Slovenia: Sep-1996 (round-trip)<br />
* Budapest, Hungary &rarr; Zagreb, Croatia: Sep-1996<br />
* Budapest, Hungary &rarr; Vienna, Austria &rarr; Salzburg, Austria &rarr; Berchtesgaden, Germany &rarr; Innsbruck, Austria &rarr; Liechtenstein &rarr; Switzerland: Aug-1996 (round-trip)<br />
* Warsaw, Poland &rarr; Budapest, Hungary: September 1994<br />
* Budapest, Hungary &rarr; Slovakia (11-Nov-1993) &rarr; Warsaw, Poland: November 1993<br />
* Vienna, Austria &rarr; Budapest, Hungary: 28-Sep-1993<br />
<br />
===Miscellaneous (South America)===<br />
* Cuenca, Ecuador &rarr; Riobamba, Ecuador &rarr; Ambato, Ecuador &rarr; Quito, Ecuador: 1993 (round-trip)<br />
* Quito, Ecuador &#187; Ipiales, Colombia: 1993 (round-trip)<br />
* Guayaquil, Ecuador &rarr; Santo Domingo de Los Colorados, Ecuador &rarr; Quito, Ecuador: 1993<br />
* Guayaquil, Ecuador &rarr; Salinas, Ecuador: 1993 (round-trip)<br />
* Tumbes, Peru &rarr; Guayaquil, Ecuador: 21-Dec-1992<br />
<br />
===Miscellaneous (North America)===<br />
* Seattle, WA &#187; Chelan, WA &#187; Seattle, WA: July 2023 (576 km/358 mi)<br />
* Seattle, WA &#187; Cle Elum, WA &#187; Chelan, WA &#187; Republic, WA &#187; Leavenworth, WA &#187; Monroe, WA &#187; Seattle, WA: April 2023 (933 km/580 mi)<br />
* Seattle, WA &#187; Winthrop, WA &#187; Leavenworth, WA &#187; Issaquah, WA &#187; Seattle, WA: June 2022<br />
* Seattle, WA &#187; Winthrop, WA &#187; Tiger, WA &#187; Spokane, WA &#187; Seattle, WA: May 2022 (1,200 km/744 mi)<br />
* Seattle, WA &#187; Portland, OR &#187; Grants Pass, OR &#187; Crescent City, CA &#187; Redwood National Forest &#187; Newport, OR &#187; Astoria, OR &#187; Elma, WA &#187; Seattle, WA: November 2021 (1,881 km/1,169 mi)<br />
* Seattle, WA &#187; Mt Saint Helens &#187; Mt Adams &#187; Stonehenge Memorial &#187; Multnomah Falls &#187; Seattle, WA: September 2021 (914 km/568 mi)<br />
* Seattle, WA &#187; Walla Walla, WA &#187; Joseph, OR &#187; Lewiston, ID &#187; Grand Coulee, WA &#187; Seattle, WA: June 2021 (1,421 km/883 mi)<br />
* Seattle, WA &#187; Pendleton, OR &#187; Craters of the Moon National Monument & Preserve &#187; Idaho Falls, ID &#187; Jackson, WY &#187; Grand Teton National Park &#187; Yellowstone National Park &#187; Missoula, MT &#187; Coeur d'Alene, ID &#187; Seattle, WA: September 2020 (2,746 km/1,706 mi)<br />
* Seattle, WA &#187; Coeur d'Alene, ID &#187; Missoula, MT &#187; Glacier National Park, MT &#187; Seattle, WA: July 2019 (1,984 km/1,233 mi)<br />
* Seattle, WA &#187; Corvallis, OR: November 2018 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2017 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2016 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2015 (round-trip)<br />
* Texas &#187; Oklahoma &#187; Kansas &#187; Nebraska &#187; South Dakota &#187; Wyoming &#187; Montana &#187; Idaho &#187; Seattle, WA: September 2015 (4,000 km/2,485 mi)<br />
* Seattle, WA &#187; Oregon &#187; Idaho &#187; Utah &#187; Wyoming &#187; Colorado &#187; Kansas &#187; Oklahoma &#187; Texas: 11-16 May 2013<br />
* Seattle, WA &#187; Port Angeles, WA &#187; Hurricane Ridge, WA: 28-Dec-2012 (round-trip)<br />
* Seattle, WA &#187; Portland, OR: 4-Dec-2012 (round-trip)<br />
* Chicago, IL &#187; Milwaukee, WI &#187; Minneapolis, MN &#187; Fargo, ND &#187; Billings, MT &#187; Coeur d'Alene, ID &#187; Seattle, WA: 25-26 June 2012 (3,357 km/2,086 mi)<br />
* St. Louis, MO &#187; Chicago, IL: 31-Dec-2011<br />
* Chicago, IL &#187; St. Louis, MO: 5-Jul-2011<br />
* Milwaukee, WI &#187; Chicago, IL: 30-Jun-2011<br />
* Pittsburgh, PA &#187; New York City, NY: April 2005 (round-trip)<br />
* Pittsburgh, PA &#187; Bethlehem, PA &#187; Westborough, MA &#187; New York City, NY: December 2004 (round-trip)<br />
* Pittsburgh, PA &#187; Boston, MA: November 2004 (round-trip)<br />
* Corvallis, OR &#187; Salt Lake City, UT &#187; Houston, TX &#187; Atlanta, GA &#187; Pittsburgh, PA: September 2004<br />
* Corvallis, OR &#187; Boston, MA: 2001, 2002 (round-trip)<br />
* Corvallis, OR &#187; Vancouver, BC, Canada (round-trip)<br />
* Corvallis, OR &#187; Tijuana, Mexico: 7-Sep-1999 (round-trip)<br />
* Los Angeles, CA &#187; Corvallis, OR: January 1998<br />
* Houston, TX &#187; Milwaukee, WI &#187; Menominee, MI: May 1995 (round-trip)<br />
<br />
== Bus / Train / Ferry ==<br />
===Spain trip (2006)===<br />
* Monaco &#187; Cannes &#187; Marseille &#187; Montpellier St-Ro &#187; Barcelona: April 2006 (round-trip)<br />
** 24-Apr-06 18h35: |&rarr; Nice, France [SNCF train]<br />
** 24-Apr-06 19h00: Antibes, FR<br />
** 24-Apr-06 19h07: Cannes, FR<br />
** 24-Apr-06 19h30: B. sur-Mer, FR<br />
** 24-Apr-06 19h39: San Raphael-Valescure, FR<br />
** 24-Apr-06 20h14: Les Arcs-Drag., FR<br />
** 24-Apr-06 20h56: Toulon, FR<br />
** 24-Apr-06 21h35: Marseille, FR<br />
** 25-Apr-06 15h05: |&rarr; Marseille, FR<br />
** 25-Apr-06 16h16: Nîmes, FR<br />
** 25-Apr-06 17h21: Montpellier St-Ro, FR<br />
** 25-Apr-06 18h42: Béziers, FR<br />
** 25-Apr-06 19h35: Perpignan, FR<br />
** 25-Apr-06 20h15: Portbou, Spain (ES) [''border'']<br />
** 25-Apr-06 22h30: Barcelona, ES<br />
** 27-Apr-06 19h24: |&rarr; Barcelona, ES [Renfe train]<br />
** 27-Apr-06 22h05: Cerbere, FR [''border'']<br />
** 28-Apr-06 08h37: Nice, FR<br />
** 28-Apr-06 10h00: Monaco<br />
<br />
===Miscellaneous (Europe)===<br />
* Tallinn, Estonia &rarr; Helsinki, Finland: January 2020 (round-trip)<br />
* Lisbon, Portugal &rarr; Porto, Portugal: Nov-2016 (round-trip)<br />
* København, DK &#187; Berlin, D: 09-Apr-2006 [+Ferry]<br />
* Berlin, D &#187; København, DK: 08-Apr-2006 (15h15) [+Ferry]<br />
* Ljubljana, Slovenia &#187; Villach HBF, Austria: 18-Aug-1997<br />
* Stockholm C &#187; Oslo S: 15-Aug-1997 (SJ train)<br />
* Salzburg, Austria &#187; Ljubljana, Slovenia: 25-Aug-1997 (&#214;sterreichische Bundesbahnen train (&#214;BB))<br />
* Haslev, DK &#187; Næstved, DK: 24-Aug-1997 (DSB train)<br />
* København &#187; Stockholm C: 14-Aug-1997 (DSB train)<br />
* Oslo S &#187; Bergen: 16-Aug-1997<br />
* Næstved, DK &#187; Rødby Færge, DK: 24-Aug-1997<br />
* Salzburg HBF &#187; Villach HBF (via Schwarzach-St. Veit/Bad Gastein): 25-Aug-1997 (&#214;BB train)<br />
* Oslo S &#187; Trondheim: 18-Aug-1997<br />
* Grensen (Scandinavia): 16-Aug-1997<br />
* Abisko Turiststation - STF: 20-Aug-1997<br />
* Abisko Turiststation - STF: 21-Aug-1997<br />
* Germany: 24-Aug-1997 (DB train)<br />
* Stockholm S:T Eriksgatan: 15-Aug-1997<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: Jun-1997 (round-trip)<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: Mar-1997 (round-trip)<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: (28-Nov-1997/30-Nov-1997) (round-trip)<br />
* Budapest, Hungary &rarr; Ljubljana, Slovenia: 8-Nov-1996<br />
* Budapest, Hungary &rarr; Slovakia: 18-Aug-1995 (round-trip)<br />
* Budapest, Hungary &rarr; Vienna, Austria: 9-Feb-1995 (round-trip)<br />
* Moscow, Russia &rarr; Warsaw, Poland: Sep-1994<br />
* Moscow, Russia &rarr; Brest, Belarus: Aug-1994 (round-trip)<br />
* Moscow, Russia &rarr; Minsk, Belarus: Jul-1994 (round-trip)<br />
* Warsaw, Poland &#187; Moscow, Russia: Jun-1994<br />
* Warsaw, Poland &rarr; Vilnius, Lithuania &rarr; Riga, Latvia: (12-Jan-1994/??-Jan-1994) (round-trip)<br />
<br />
===Miscellaneous (South America)===<br />
* Arequipa, Peru &rarr; Lima, Peru: 1992<br />
* Arequipa, Peru &rarr; Iquique, Chile: (17-Jul-1992/20-Jul-1992) (round-trip)<br />
* Lima, Peru &rarr; Arequipa, Peru: 1992<br />
* Lima, Peru &rarr; La Paz, Bolivia: (19-May-1991/6-Jun-1991) (round-trip)<br />
* Lima, Peru &rarr; Quito, Ecuador: (29-Nov-1990/11-Dec-1990) (round-trip)<br />
* Lima, Peru &rarr; Quito, Ecuador: (6-Jul-1990/20-Jul-1990) (round-trip)<br />
<br />
==Flights==<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): February 2024 [RT]<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2023 [RT]<br />
* Seattle, WA (SEA) ✈ New York City, NY (JFK): October 2023 [RT] {~5-6 hours x 2}<br />
* Seattle, WA (SEA) ✈ Phoenix, AZ (PHX): March 2023 [RT]<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): February 2023 [RT]<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2022 [RT]<br />
* Seattle, WA (SEA) ✈ Milwaukee, WI (MKE): August 2022 [RT]<br />
* Kyiv, Ukraine (KBP) ✈ Frankfurt, Germany (FRA) ✈ Seattle, WA (SEA): December 2021<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD) ✈ Frankfurt, Germany (FRA) ✈ Kyiv, Ukraine (KBP): December 2021<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2021 [RT]<br />
* Memphis, TN (MEM) ✈ Atlanta, GA (ATL) ✈ Seattle, WA (SEA): June 2021<br />
* Seattle, WA (SEA) ✈ Salt Lake City, UT (SLC) ✈ Memphis, TN (MEM): June 2021<br />
* Seattle, WA (SEA) ✈ Milwaukee, WI (MKE): May 2021 [RT]<br />
* Tallinn, Estonia (TLL) ✈ Stockholm, Sweden (ARN) ✈ Newark, New Jersey (EWR) ✈ Seattle, WA (SEA): January 2020<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD) ✈ København, DK (CPH) ✈ Helsinki, Finland (HEL) ✈ Tallinn, Estonia (TLL): December 2019<br />
* Seattle, WA (SEA) ✈ San Francisco, CA (SFO): October 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Miami, FL (MIA): September 2019 [RT]<br />
* Seattle, WA (SEA) ✈ San Francisco, CA (SFO): September 2019 [RT]<br />
* Seattle, WA (SEA) ✈ San Jose, CA (SJC): August 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Denver, CO (DEN): May 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Charlotte, NC (CLT): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Santa Ana, CA (SNA): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ San Jose, CA (SJC): September 2018 [RT]<br />
* Budapest, Hungary (BUD) ✈ Brussels, Belgium (BRU) ✈ Newark, New Jersey (EWR) ✈ Seattle, WA (SEA): July 2018<br />
* Seattle, WA (SEA) ✈ Toronto, Canada (YYZ) ✈ Budapest, Hungary (BUD): June 2018<br />
* Seattle, WA (SEA) ✈ Reno, NV (RNO): May 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Reykjavík, Iceland (KEF): December 2017 [RT]<br />
* Seattle, WA (SEA) ✈ Kona, Hawaii (KOA): September 2017 [RT]<br />
* Seattle, WA (SEA) ✈ Salt Lake City, UT (SLC): August 2017 [RT]<br />
* Amsterdam, NL (AMS) ✈ Seattle, WA (SEA): November 2016<br />
* Lisbon, Portugal ✈ Amsterdam, NL (AMS): November 2016<br />
* Paris, FR (CDG) ✈ Lisbon, Portugal: November 2016<br />
* Seattle, WA (SEA) ✈ Paris, FR (CDG): November 2016<br />
* Seattle, WA (SEA) ✈ Portland, OR (PDX): November 2016 [RT]<br />
* Seattle, WA (SEA) ✈ Las Vegas, NV (LAS): June 2016 [RT]<br />
* Houston, TX (IAH) ✈ Seattle, WA (SEA): September 2015 [RT]<br />
* Houston, TX (IAH) ✈ San Francisco, CA (SFO): August 2015 [RT]<br />
* Houston, TX (IAH) ✈ Madison, WI (MSN): March 2015 [RT]<br />
* Houston, TX (IAH) ✈ Amsterdam, NL (AMS): March 2015 [RT]<br />
* Seattle, WA (SEA) ✈ Milwaukee (MKE): June 2011<br />
* Seattle, WA (SEA) ✈ Phoenix, AZ (PHX) ✈ Chicago, IL (ORD): October 2010 [RT]<br />
* Seattle, WA (SEA) ✈ Los Angeles, CA (LAX): December 2007 [RT]<br />
* København, DK (CPH) ✈ Seattle, WA (SEA): June 2006<br />
* Heathrow, UK ✈ København, DK (CPH): June 2006<br />
* Nice, FR ✈ Heathrow, UK: June 2006<br />
* København, DK (CPH) ✈ Nice, FR (NCE): February 2006<br />
* Washington Dulles ✈ København, DK: August 2005<br />
* Pittsburgh, PA (PIT) ✈ Washington Dulles: August 2005<br />
* Portland, OR (PDX) ✈ Pittsburgh, PA (PIT): Summer 2004 [RT]<br />
* Portland, OR (PDX) ✈ Boston, MA: December 2002 [RT]<br />
* Eugene, OR ✈ Houston, TX (IAH): February 2002 [RT]<br />
* Seattle, WA (SEA) ✈ Portland, OR (PDX): January 2000<br />
* Amsterdam, NL (AMS) ✈ Seattle, WA (SEA): January 2000<br />
* Dublin, Ireland ✈ Amsterdam, NL (AMS): January 2000<br />
* Amsterdam (AMS) ✈ Dublin, Ireland: December 1999<br />
* Seattle, WA (SEA) ✈ Amsterdam, NL (AMS): December 1999<br />
* Portland, OR (PDX) ✈ Seattle, WA (SEA): December 1999<br />
* Chicago (ORD) ✈ Los Angeles (LAX): December 1997<br />
* Green Bay, WI (GRB) ✈ Chicago (ORD): December 1997<br />
* Chicago (ORD) ✈ Green Bay, WI (GRB): December 1997<br />
* Rome, Italy (FCO) ✈ Chicago, IL (ORD): December 1997<br />
* Trieste, Italy (TRS) ✈ Rome, Italy (FCO): December 1997<br />
* Houston, TX (IAH) ✈ Budapest, Hungary (BUD): July 1996<br />
* Narita, Japan ✈ Los Angeles (LAX) ✈ Houston, TX: June 1996<br />
* Narita, Japan ✈ Los Angeles (LAX) ✈ Houston, TX: March 1996 [RT]<br />
* Narita, Japan ✈ Taipei, Taiwan: December 1995 [RT]<br />
* Los Angeles, CA (LAX) ✈ Narita, Japan: October 1995<br />
* Houston, TX (IAH) ✈ Los Angeles (LAX): October 1995<br />
* Budapest, Hungary (BUD) ✈ Houston, TX (IAH): September 1995<br />
* Budapest, Hungary (BUD) ✈ Houston, TX (IAH): May 1995 [RT]<br />
* Paris, FR (CDG) ✈ Vienna, Austria: September 1993<br />
* Quito, Ecuador ✈ Caracas, Venezuela (CCS) ✈ Paris, France: 1993<br />
* Lima, Peru ✈ Tumbes, Peru: December 1992<br />
* Boston, MA ✈ Miami, FL ✈ Lima, Peru: <br />
* Amsterdam, NL (AMS) ✈ Chicago, IL (ORD): <br />
* Boston, MA ✈ Amsterdam, NL (AMS):<br />
<br />
== Individual Places ==<br />
=== Ireland ===<br />
* Dublin<br />
** '''Dublin''' (Baile &Aacute;tha Cliath)<br />
* Kildare<br />
** Naas<br />
* Laois<br />
* Carlow<br />
** Carlow (Ceatharlach)<br />
** Royal Oak<br />
* Kilkenny<br />
** '''Kilkenny''' (Cill Chainnigh)<br />
** Callan<br />
* Tipperary<br />
** Glenbower<br />
** Clonmel (Cluain Meala)<br />
** Cahir<br />
** Burncourt<br />
* Cork<br />
** Fermoy<br />
** '''Cork''' (Corcaigh)<br />
** Fota<br />
** Cobh (An C&oacute;bh)<br />
** '''Blarney'''<br />
** Macroom<br />
** Ballyvourney<br />
* Kerry<br />
** ''Derrynasaggart Mts''<br />
** Poulgorm Br<br />
** '''Killarney''' (Cill Airne)<br />
** Farranfore<br />
* Limerick<br />
** Abbeyfeale<br />
** ''Mullaghareirk Mts''<br />
** Newcastle West<br />
** Croagh<br />
** '''Limerick''' (Luimneach)<br />
* Clare<br />
** Bunratty<br />
** Ennis (Inis)<br />
** Ennistymon<br />
** Liscannor<br />
** ''Cliffs of Moher''<br />
** Doolin<br />
** Lisdoonvarna<br />
** Ballyvaughan<br />
** Bealaclugga<br />
** Burren<br />
* Galway<br />
** Kinvarra<br />
** Ballinderreen<br />
** Oranmore<br />
** '''Galway''' (Gaillimh)<br />
** Claregalway<br />
** Tuam<br />
* Mayo<br />
** Claremorris<br />
** Cloonfallagh<br />
** Charlestown<br />
* Sligo<br />
** Curry<br />
** Tubbercurry<br />
** Collooney<br />
** '''Sligo''' (Sligeach)<br />
** ''Dartry Mts''<br />
* Leitrim<br />
* Donegal<br />
** Bundoran<br />
** Ballyshannon<br />
** Donegal (D&uacute;n na nGall)<br />
** Ballybofey<br />
** Clady<br />
* Tyrone<br />
** '''Strabane''' (Northern Ireland)<br />
* Londonderry<br />
** Derry (Londonderry)<br />
** Eglinton<br />
** Ballykelly<br />
** Limavady<br />
** Coleraine<br />
* Antrim<br />
** Derrykelghan<br />
** Moss-side<br />
** Ballycastle<br />
** ''Antrim Hills''<br />
** Ballintoy<br />
** ''Carrick-a-Rede Rope Bridge''<br />
** ''Giant's Causeway''<br />
** Craignamaddy<br />
** Ballymoney<br />
** Ballymena<br />
** Antrim<br />
** ''Lough Neagh'' (lake)<br />
** Dunadry<br />
** Newtownabbey<br />
** '''Belfast'''<br />
* Down<br />
** Lisburn<br />
** Banbridge<br />
* Armagh<br />
** Newry<br />
* Louth<br />
** Dundalk (D&uacute;n Dealgan)<br />
** Dunleen<br />
** Drogheda (Droichead &Aacute;tha)<br />
* Meath<br />
** Julianstown<br />
* Dublin<br />
** Balbriggan<br />
** Swords<br />
<br />
[[Category:World Travels]]

Secure Shell
<hr />
'''Secure Shell''' (or '''SSH''') is a set of standards and an associated network protocol for establishing a secure channel between a local and a remote computer. It uses public-key cryptography to authenticate the remote computer and (optionally) to allow the remote computer to authenticate the user.<br />
<br />
''Note: This article will only consider OpenSSH.''<br />
<br />
== SSH without passwords ==<br />
* Step 1: Generate keys (public and private) and ''leave passphrase blank'' if you want password-less logins:<br />
ssh-keygen<br />
# ~OR~<br />
ssh-keygen -t dsa<br />
# ~OR~<br />
ssh-keygen -t rsa -b 4096 -f /home/bob/my-key<br />
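If your OpenSSH is recent enough (6.5 or newer), an Ed25519 key is also a good option:<br />
 ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519<br />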
<br />
* Step 2: Copy '''''public''''' key to remote server (Important: Only the ''public key''!):<br />
scp ~/.ssh/id_dsa.pub username@remote-host:.ssh/authorized_keys<br />
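 # (caution: the scp form above overwrites any existing authorized_keys on the remote host; ssh-copy-id, below, appends instead)<br />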
# ~OR~<br />
ssh-copy-id -i ~/.ssh/id_rsa.pub username@remote-host<br />
<br />
* Step 3: Set directory/file permissions (if not already set):<br />
chmod 0700 ~/.ssh<br />
chmod 0600 ~/.ssh/authorized_keys<br />
<br />
* Step 4: Now, SSH into your remote server (password will be required the first time):<br />
ssh username@remote-host<br />
<br />
That's it! You are now free to log into your remote server without entering a password. This is useful for automating file transfers. However, it ''must'' be used with care. If not executed properly, it is a potential security risk.<br />
<br />
==Using SSH private keys==<br />
<br />
For illustration purposes, I will generate a pseudo-key from 512 random bytes (base64-encoded) to give you an idea of what an RSA private key should look like (note: you should never create a real private key smaller than 2048 bits):<br />
$ echo "-----BEGIN RSA PRIVATE KEY-----" && openssl rand -base64 512 && echo -e "-----END RSA PRIVATE KEY-----\n"<br />
-----BEGIN RSA PRIVATE KEY-----<br />
AgaLRL9vUvHb736UVEavYIgpDJywdAvy+Y8/PGnS2aXbr1JzRXsvmoufcYpdJev+<br />
9E2XigSgoEuP3eDH4lRCtYRVuSqN7jUVJT26KBQbC34qw72mrfcVoW5H442l2oGF<br />
oOcWTcRz0F4R0LKbCecx7tGgzAW/XOVocmcC4CsEIrA+hmUkk9sXO/VD7eV6dP5D<br />
d3k3bqoDI4VEkhpavKSTRnoDBrl33tiz43vyiQUegPjZVkg+jOI7fyZL2hElQea2<br />
o+KjEFfr4a1ZJs/58XitoCcHb7vaFX4PGNDuveBchFKmeWROuMxHalBVbV/sZVr4<br />
bJYfNHTHHr4rNjQdf5cO9wnzIhC1hsutxZWPEj9JF3X+BVtAgKVS9Zbkh9BxSJJG<br />
cLWrmyqM7gRhE96ibHF6hGJ7jj0cf/pK8e8NVIVzD1jwvXAT7FeJHkKltoAKQ7LQ<br />
bC4d0b27jOccLpR6C4SU6zhSyWBnsoawiMfYR7HsEmLlOZW6fycrukFzi5wm/zpK<br />
r4YVIrzWHJzJbP+CIVvLUp8hv13OO3ozQo3tCNofpESV2/vYOGStDQtF9GVq53rS<br />
DWn2NAzT6X1IFtJlxQxG0CNsnNBAAZoOA3lgEPQqPzdoqKA/deS64oBH8j8CUSUp<br />
DQgaIxzVF1/2bKO3JoHKLaeui4vFIH7KT8ITS/FKoD8=<br />
-----END RSA PRIVATE KEY-----<br />
<br />
* Save your private key to a file (let's call it "<code>my_private_key.txt</code>") and:<br />
$ chmod 600 my_private_key.txt<br />
<br />
* Now use that private key to log into your remote server (assuming, of course, that server has the matching key):<br />
$ ssh -i /path/to/my_private_key.txt -l root <SERVER_IP><br />
$ #~OR~<br />
$ ssh root@<SERVER_IP> -i /path/to/my_private_key.txt<br />
<br />
* Get the private key's "fingerprint":<br />
$ ssh-keygen -lf /path/to/my_private_key.txt<br />
2048 f6:a0:8c:99:ba:c2:31:36:1c:f2:5d:c5:da:37:27:b7 bob@hostname (RSA)<br />
<br />
* Create a "signature":<br />
$ echo -n 'this is my signature' |openssl sha1 -binary |\<br />
openssl pkeyutl -sign -inkey my_private_key.txt -pkeyopt digest:sha1 > signature<br />
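To later verify that signature (a sketch re-using the same key file; when you only hold the public half, add <code>-pubin</code> and point <code>-inkey</code> at the PEM-format public key):<br />
 $ echo -n 'this is my signature' |openssl sha1 -binary > digest<br />
 $ openssl pkeyutl -verify -in digest -sigfile signature -inkey my_private_key.txt -pkeyopt digest:sha1<br />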
<br />
==Converting and verifying OpenSSH public keys==<br />
<br />
* First, generate a public/private key:<br />
$ ssh-keygen -t rsa -b 2048 -f /home/bob/my-key<br />
<br />
* Extract public key from private key:<br />
$ openssl rsa -in my-key -pubout<br />
<br />
* Note the difference between the above and the default public key <code>ssh-keygen</code> provides (i.e., the "<code>my-key.pub</code>" file):<br />
$ cat /home/bob/my-key.pub<br />
<br />
* Or, get your public key in PEM format (only works with OpenSSH v5.6+):<br />
$ ssh-keygen -f my-key.pub -e -m pem<br />
<br />
* Check the integrity of your public key:<br />
$ sed -e 's/ssh-rsa //' ~/.ssh/id_rsa.pub|awk '{print substr($1,1,76)}'|openssl base64 -d|hexdump<br />
<br />
00000000 00 00 00 07 73 73 68 2d 72 73 61 00 00 00 03 01 |....ssh-rsa.....|<br />
00000010 00 01 00 00 01 01 00 a3 f3 03 a0 8b 08 df 93 ac |................|<br />
00000020 34 19 6c 19 1b 1a b5 b7 bf 43 0e 41 2f be 33 9a |4.l......C.A/.3.|<br />
00000030 3f 15 c0 91 8c 27 09 ba c5 |?....'...|<br />
00000039<br />
<br />
The above reads as such:<br />
<br />
00 00 00 07 The length in bytes of the next field<br />
73 73 68 2d 72 73 61 The key type (ASCII encoding of "ssh-rsa")<br />
00 00 00 03 The length in bytes of the public exponent<br />
01 00 01 The public exponent (usually 65537, as here)<br />
00 00 01 01 The length in bytes of the modulus (here, 257)<br />
00 a3 f3... The modulus<br />
<br />
So the key has type RSA, and its modulus has length 257 bytes, except that the first byte has value "00", so the real length is 256 bytes (that first byte was added so that the value is considered positive, because the internal encoding rules call for signed integers, the first bit defining the sign). 256 bytes is 2048 bits.<br />
<br />
==SSH config file==<br />
''Note: See the [http://linux.die.net/man/5/ssh_config ssh_config (5) man page] for details.''<br />
<br />
*Edit your SSH config file (<code>~/.ssh/config</code>) and add the following (example) lines:<br />
# contents of $HOME/.ssh/config<br />
Host dev<br />
HostName dev.example.com<br />
Port 22321<br />
User bob<br />
<br />
Host github<br />
IdentityFile ~/.ssh/github.key<br />
<br />
Now you can simply type:<br />
ssh dev<br />
to SSH into that <code>dev.example.com</code> remote host.<br />
<br />
See [http://nerderati.com/2011/03/simplify-your-life-with-an-ssh-config-file/ Simplify your life with an SSH config file] for more examples.<br />
<br />
==Making SSH even more secure==<br />
Note: All of the following settings will be implemented in your <code>/etc/ssh/sshd_config</code> file.<br />
*Disable SSH protocol 1. Make sure no line reads <code>Protocol 1</code>. If one does, change it to:<br />
Protocol 2<br />
*Enable key-based logins (see above for how to do this):<br />
PubkeyAuthentication yes<br />
AuthorizedKeysFile .ssh/authorized_keys<br />
*Disable password-based logins (Only do this if you ''first'' enable key-based logins!):<br />
PasswordAuthentication no<br />
*Run SSH on a port other than 22 (any free port above 1024 will do):<br />
 Port 1717<br />
You will then need to specify this port when SSHing into your remote machine:<br />
ssh -p 1717 remote.machine<br />
*Disable root logins (Very important!):<br />
PermitRootLogin no<br />
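After making any of the above changes, validate the configuration before restarting the daemon, or you risk locking yourself out (run as root; on Debian/Ubuntu the service is named <code>ssh</code> rather than <code>sshd</code>):<br />
 sshd -t<br />
 systemctl restart sshd<br />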
<br />
===Disable / deny brute force attacks===<br />
The following [[iptables]] rules should deny almost all brute force attacks on your firewall's port 22 (SSH port):<br />
iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH<br />
iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 8 --rttl --name SSH -j DROP<br />
<br />
===Miscellaneous===<br />
<br />
User rights should follow the principle of least privilege. Restrictions can be applied to several parameters, such as which commands a key may run or which originating IP addresses will be accepted. For example, in the authorized_keys file, you can authorize only connections from a specific network (for a given key):<br />
from="192.168.1.*" ssh-ed25519 AAAA...<br />
So, for this key, only connections from the <code>192.168.1.0/24</code> network will be accepted.<br />
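You can also restrict ''what'' a given key may do, not just where it may connect from. For example, the following (a hypothetical sketch; <code>/usr/local/bin/backup.sh</code> is a placeholder) forces a single command and disables PTY allocation and forwarding for that key:<br />
 command="/usr/local/bin/backup.sh",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA...<br />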
<br />
==Supported escape sequences==<br />
''Note: The following escapes are only recognized immediately after a newline.''<br />
~. - terminate connection (and any multiplexed sessions)<br />
~B - send a BREAK to the remote system<br />
~C - open a command line<br />
~R - Request rekey (SSH protocol 2 only)<br />
~^Z - suspend ssh<br />
~# - list forwarded connections<br />
~& - background ssh (when waiting for connections to terminate)<br />
~? - this message<br />
~~ - send the escape character by typing it twice<br />
<br />
==Miscellaneous==<br />
<br />
* Create a custom sudo password prompt (useful on hosts you SSH into):<br />
$ echo 'Defaults passprompt="LAUNCH CODE: "' | sudo tee -a /etc/sudoers.d/launch_code<br />
<br />
; Proxy Jump<br />
<br />
* SSH into a backend host (using its private IP) via a bastion host:<br />
$ ssh -i ~/.ssh/backend.pem ubuntu@<backend-private-ip> \<br />
-o ProxyCommand="ssh -i ~/.ssh/bastion.pem -o ForwardAgent=yes -W %h:%p ubuntu@<bastion-public-ip>"<br />
#~OR~<br />
$ ssh -i ~/.ssh/backend.pem ubuntu@<backend-private-ip> \<br />
-o ProxyCommand="ssh -i ~/.ssh/bastion.pem -o ForwardAgent=yes ubuntu@<bastion-public-ip> 'nc %h %p'"<br />
<br />
Another way to do the above is to use the proxy jump (<code>-J</code>) option:<br />
$ ssh-add ~/.ssh/backend.pem<br />
$ ssh-add ~/.ssh/bastion.pem<br />
$ ssh-add -L<br />
$ ssh -v -J ubuntu@<bastion-public-ip> ubuntu@<backend-private-ip><br />
<br />
Here, we tell SSH to connect to the target host by first making an SSH connection to the jump host (aka "bastion") and then establishing a TCP forwarding connection to the ultimate destination from there. Multiple jump hops may be specified, separated by commas. The <code>-J</code> flag is a shortcut for the <code>ProxyJump</code> configuration directive.<br />
<br />
Finally, you can add something like the following to your <code>~/.ssh/config</code> file:<br />
<pre><br />
Host bastion<br />
HostName <bastion-public-ip><br />
User ubuntu<br />
IdentityFile ~/.ssh/bastion.pem<br />
IdentitiesOnly yes<br />
ProxyCommand none<br />
TCPKeepAlive yes<br />
ServerAliveInterval 5<br />
<br />
Host backend<br />
User ubuntu<br />
HostName <backend-private-ip><br />
IdentityFile ~/.ssh/backend.pem<br />
ProxyCommand ssh bastion nc %h %p<br />
</pre><br />
<br />
With the above, you can now SSH "directly" to the backend (really proxying via the bastion host) with:<br />
$ ssh backend<br />
<br />
* Run a local [[Bash]] script on a list of remote hosts:<br />
<pre><br />
$ for i in $(seq 102 104); do<br />
ssh root@128.14.163.${i} "bash -s" < ./install_docker.sh;<br />
done<br />
</pre><br />
<br />
==Todo==<br />
*Access your local subversion repository from the road<br />
ssh -NfL 3690:127.0.0.1:3690 USER@64.3.10.24 -p6111<br />
Then you can access the repository via<br />
<nowiki>svn://127.0.0.1/YOUR-SVN-PATH</nowiki><br />
<br />
*Secure web traffic when traveling<br />
ssh -D 9999 -p6111 USER@64.3.10.24<br />
then go to Firefox's Preferences->Advanced->Network->Settings->Manual proxy settings with:<br />
SOCKS Host: 127.0.0.1 Port: 9999<br />
No proxy for: localhost, 127.0.0.1<br />
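To confirm that traffic is actually flowing through the tunnel, compare your apparent IP address with and without the proxy (a quick check; <code>ifconfig.me</code> is just one of several such services):<br />
 curl --socks5-hostname 127.0.0.1:9999 <nowiki>https://ifconfig.me</nowiki><br />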
<br />
==See also==<br />
*[[SSH Filesystem]] (sshfs)<br />
*[[Fish protocol]]<br />
*[[Rsync (command)|rsync]]<br />
<br />
==External links==<br />
*[http://corneliusroot.blogspot.com/2006/12/copying-mass-amounts-of-data-over.html Copying mass amounts of data over a network with bash, rsync, and ssh]<br />
*[http://kimmo.suominen.com/docs/ssh/ Getting started with SSH]<br />
*[http://protempore.net/~calvins/howto/ssh-connection-sharing/ Improving SSH (OpenSSH) connection speed with shared connections]<br />
*[[wikipedia:Secure_Shell]]<br />
<br />
[[Category:Linux Command Line Tools]]

Kubernetes
<hr />
'''Kubernetes''' (also known by its numeronym '''k8s''') is an open-source container cluster manager. Kubernetes' primary goal is to provide a platform for automating deployment, scaling, and operations of application containers across a cluster of hosts. Kubernetes was released by Google in July 2015.<br />
<br />
* Get the latest stable release of k8s with:<br />
$ curl -sSL <nowiki>https://dl.k8s.io/release/stable.txt</nowiki><br />
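The command above returns a bare version string (e.g., "v1.29.1"), which can be substituted into a download URL. For example, to fetch the matching <code>kubectl</code> binary (assuming Linux on amd64):<br />
 $ curl -LO "<nowiki>https://dl.k8s.io/release/$(curl -sSL https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl</nowiki>"<br />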
<br />
==Release history==<br />
<br />
'''NOTE:''' I have been using Kubernetes since release 1.0 back in September 2015.<br />
<br />
NOTE: There is no such thing as Kubernetes Long-Term-Support (LTS). There is a new "minor" release ''roughly'' every 3 months (note: changed to ''roughly'' every 4 months in 2020).<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="3" bgcolor="#EFEFEF" | '''Kubernetes release history'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Release<br />
!Date<br />
!Cadence (days)<br />
|- align="left"<br />
|1.0 || 2015-07-10 ||align="right"|<br />
|--bgcolor="#eeeeee"<br />
|1.1 || 2015-11-09 ||align="right"| 122<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.2.md 1.2] || 2016-03-16 ||align="right"| 128<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.3.md 1.3] || 2016-07-01 ||align="right"| 107<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.4.md 1.4] || 2016-09-26 ||align="right"| 87<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.5.md 1.5] || 2016-12-12 ||align="right"| 77<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.6.md 1.6] || 2017-03-28 ||align="right"| 106<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.7.md 1.7] || 2017-06-30 ||align="right"| 94<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.8.md 1.8] || 2017-09-28 ||align="right"| 90<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.9.md 1.9] || 2017-12-15 ||align="right"| 78<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.10.md 1.10] || 2018-03-26 ||align="right"| 101<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.11.md 1.11] || 2018-06-27 ||align="right"| 93<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.12.md 1.12] || 2018-09-27 ||align="right"| 92<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.13.md 1.13] || 2018-12-03 ||align="right"| 67<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.14.md 1.14] || 2019-03-25 ||align="right"| 112<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md 1.15] || 2019-06-17 ||align="right"| 84<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md 1.16] || 2019-09-18 ||align="right"| 93<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md 1.17] || 2019-12-09 ||align="right"| 82<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md 1.18] || 2020-03-25 ||align="right"| 107<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md 1.19] || 2020-08-26 ||align="right"| 154<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md 1.20] || 2020-12-08 ||align="right"| 104<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md 1.21] || 2021-04-08 ||align="right"| 121<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md 1.22] || 2021-08-04 ||align="right"| 118<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md 1.23] || 2021-12-07 ||align="right"| 125<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md 1.24] || 2022-05-03 ||align="right"| 147<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md 1.25] || 2022-08-23 ||align="right"| 112<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md 1.26] || 2023-01-18 ||align="right"| 148<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md 1.27] || 2023-04-11 ||align="right"| 83<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md 1.28] || 2023-08-15 ||align="right"| 126<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md 1.29] || 2023-12-13 ||align="right"| 120<br />
|}<br />
</div><br />
<br clear="all"/><br />
See: [https://gravitational.com/blog/kubernetes-release-cycle The full-time job of keeping up with Kubernetes]<br />
<br />
==Providers and installers==<br />
<br />
* Vanilla Kubernetes<br />
* AWS:<br />
** Managed: EKS<br />
** Kops<br />
** Kube-AWS<br />
** Kismatic<br />
** Kubicorn<br />
** Stack Point Cloud<br />
* Google:<br />
** Managed: GKE<br />
** [[Kubernetes/the-hard-way|Kubernetes the Hard Way]]<br />
** Stack Point Cloud<br />
** Typhoon<br />
* Azure AKS<br />
* Ubuntu UKS<br />
* VMware PKS<br />
* [[Rancher|Rancher RKE]]<br />
* CoreOS Tectonic<br />
<br />
==Design overview==<br />
Kubernetes is built through the definition of a set of components (building blocks or "primitives") which, when used collectively, provide a method for the deployment, maintenance, and scalability of container-based application clusters.<br />
<br />
These "primitives" are designed to be ''loosely coupled'' (i.e., where little to no knowledge of the other component definitions is needed to use) as well as easily extensible through an API. Both the internal components of Kubernetes as well as the extensions and containers make use of this API.<br />
<br />
==Components==<br />
The building blocks of Kubernetes are the following (note that these are also referred to as Kubernetes "Objects" or "API Primitives"):<br />
<br />
;Cluster : A cluster is a set of machines (physical or virtual) on which your applications are managed and run. All machines are managed as a cluster (or set of clusters, depending on the topology used).<br />
;Nodes (minions) : You can think of these as "container clients". These are the individual hosts (physical or virtual) on which Docker is installed and which host the various containers within your managed cluster.<br />
: Each node will run etcd (a key-value store, used by Kubernetes for exchanging messages and reporting on cluster status) as well as the Kubernetes Proxy.<br />
;Pods : A pod consists of one or more containers. Those containers are guaranteed (by the cluster controller) to be located on the same host machine (aka "co-located") in order to facilitate sharing of resources. For example, it makes sense to have database processes and data containers as close as possible; in fact, they really should be in the same pod.<br />
: Pods "work together", as in a multi-tiered application configuration. Each set of pods that define and implement a service (e.g., MySQL or Apache) are defined by the label selector (see below).<br />
: Pods are assigned unique IPs within each cluster. These allow an application to use ports without having to worry about conflicting port utilization.<br />
: Pods can contain definitions of disk volumes or shares, and then provide access from those to all the members (containers) within the pod.<br />
: Finally, pod management is done through the API or delegated to a controller.<br />
;Labels : Clients can attach key-value pairs to any object in the system (e.g., Pods or Nodes). These become the labels that identify them in the configuration and management of them. The key-value pairs can be used to filter, organize, and perform mass operations on a set of resources.<br />
;Selectors : Label Selectors represent queries that are made against those labels. They resolve to the corresponding matching objects. A Selector expression matches labels to filter certain resources. For example, you may want to search for all pods that belong to a certain service, or find all containers that have a specific tier Label value as "database". Labels and Selectors are inherently two sides of the same coin. You can use Labels to classify resources and use Selectors to find them and use them for certain actions.<br />
: These two items are the primary way that grouping is done in Kubernetes and determine which components that a given operation applies to when indicated.<br />
;Controllers : These are used in the management of your cluster. Controllers are the mechanism by which your desired configuration state is enforced.<br />
: Controllers manage a set of pods and, depending on the desired configuration state, may engage other controllers to handle replication and scaling (Replication Controller) of X number of containers and pods across the cluster. It is also responsible for replacing any container in a pod that fails (based on the desired state of the cluster).<br />
: Replication Controllers (RC) are a subset of Controllers and are an abstraction used to manage pod lifecycles. One of the key uses of RCs is to maintain a certain number of running Pods (e.g., for scaling or ensuring that at least one Pod is running at all times, etc.). It is considered a "best practice" to use RCs to define Pod lifecycles, rather than creating Pods directly.<br />
: Other controllers that can be engaged include a ''DaemonSet Controller'' (enforces a 1-to-1 ratio of pods to Worker Nodes) and a ''Job Controller'' (that runs pods to "completion", such as in batch jobs).<br />
: Each set of pods any controller manages is determined by the label selectors that are part of its definition.<br />
;Replica Sets: These define how many replicas of each Pod will be running. They also monitor and ensure the required number of Pods are running, replacing Pods that die. Replica Sets can act as replacements for Replication Controllers.<br />
;Services : A Service is an abstraction on top of Pods, which provides a single IP address and DNS name by which the Pods can be accessed (see the example manifest after this list). This load balancing configuration is much easier to manage and helps scale Pods seamlessly.<br />
: Kubernetes can then provide service discovery and handle routing with the static IP for each pod as well as load balancing (round-robin based) connections to that service among the pods that match the label selector indicated.<br />
: Although a service is, by default, only exposed inside a cluster, it can also be exposed outside the cluster, as needed.<br />
;Volumes : A Volume is a directory with data, which is accessible to a container. The volume co-terminates with the Pod that encloses it.<br />
;Name : A name by which a resource is identified.<br />
;Namespace : A Namespace provides additional qualification to a resource name. This is especially helpful when multiple teams/projects are using the same cluster and there is a potential for name collision. You can think of a Namespace as a virtual wall between multiple clusters.<br />
;Annotations : An Annotation is a Label, but with much larger data capacity. Typically, this data is not readable by humans and is not easy to filter through. Annotation is useful only for storing data that may not be searched, but is required by the resource (e.g., storing strong keys, etc.).<br />
;Control Plane<br />
;API<br />
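As an example of the Service abstraction described above, a minimal Service manifest might look like the following (a sketch; the <code>nginx-svc</code> name and <code>app: nginx</code> label are arbitrary, and the Pods carrying that label are assumed to exist):<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: nginx-svc<br />
spec:<br />
  selector:<br />
    app: nginx<br />
  ports:<br />
  - port: 80<br />
    targetPort: 80<br />
</pre><br />
This gives the matching Pods a single stable IP and DNS name (<code>nginx-svc</code>), with round-robin load balancing across them.<br />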
<br />
===Pods===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/ Pod]'' is the smallest and simplest Kubernetes object. It is the unit of deployment in Kubernetes, which represents a single instance of the application. A Pod is a logical collection of one or more containers, which:<br />
<br />
* are scheduled together on the same host;<br />
* share the same network namespace; and<br />
* mount the same external storage (Volumes).<br />
<br />
Pods are ephemeral in nature, and they do not have the capability to self-heal by themselves. That is why we use them with controllers, which can handle a Pod's replication, fault tolerance, self-heal, etc. Examples of controllers are ''Deployments'', ''ReplicaSets'', ''ReplicationControllers'', etc. We attach the Pod's specification to other objects using Pod Templates (see below).<br />
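A minimal Pod manifest matching the description above might look like this (a sketch; the name, label, and image are arbitrary examples):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx-pod<br />
  labels:<br />
    env: dev<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx:1.7.9<br />
    ports:<br />
    - containerPort: 80<br />
</pre><br />
It can be created with <code>kubectl create -f nginx-pod.yaml</code>, although, as noted above, you would normally let a controller manage it instead.<br />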
<br />
===Labels===<br />
Labels are key-value pairs that can be attached to any Kubernetes object (e.g. ''Pods''). Labels are used to organize and select a subset of objects, based on the requirements in place. Many objects can have the same label(s). Labels do not provide uniqueness to objects. <br />
<br />
===Label Selectors===<br />
With Label Selectors, we can select a subset of objects. Kubernetes supports two types of Selectors:<br />
<br />
;Equality-Based Selectors : Equality-Based Selectors allow filtering of objects based on label keys and values. With this type of Selector, we can use the <code>=</code>, <code>==</code>, or <code>!=</code> operators. For example, with <code>env==dev</code>, we are selecting the objects where the "<code>env</code>" label is set to "<code>dev</code>".<br />
;Set-Based Selectors : Set-Based Selectors allow filtering of objects based on a set of values. With this type of Selector, we can use the <code>in</code>, <code>notin</code>, and <code>exists</code> operators. For example, with <code>env in (dev,qa)</code>, we are selecting objects where the "<code>env</code>" label is set to "<code>dev</code>" or "<code>qa</code>".<br />
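Both Selector types can be exercised directly with <code>kubectl</code> (assuming Pods labeled as in the examples above):<br />
 $ kubectl get pods -l env=dev # equality-based<br />
 $ kubectl get pods -l 'env in (dev,qa)' # set-based<br />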
<br />
===Replication Controllers===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/ ReplicationController]'' (rc) is a controller that is part of the Master Node's Controller Manager. It makes sure the specified number of replicas for a Pod is running at any given point in time. If there are more Pods than the desired count, the ReplicationController would kill the extra Pods, and, if there are fewer Pods, then the ReplicationController would create more Pods to match the desired count. Generally, we do not deploy a Pod independently, as it would not be able to restart itself if something goes wrong. We always use controllers like ReplicationController to create and manage Pods.<br />
<br />
===Replica Sets===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/ ReplicaSet]'' (rs) is the next-generation ReplicationController. ReplicaSets support both equality- and set-based Selectors, whereas ReplicationControllers only support equality-based Selectors. As of January 2018, this is the only difference.<br />
<br />
As an example, say you create a ReplicaSet with "desired replicas = 3" (so that "<code>current==desired</code>"). Any time "<code>current!=desired</code>" (e.g., one of the Pods dies), the ReplicaSet will detect that the current state no longer matches the desired state. So, in our given scenario, the ReplicaSet will create one more Pod, thus ensuring that the current state matches the desired state.<br />
<br />
ReplicaSets can be used independently, but they are mostly used by Deployments to orchestrate the Pod creation, deletion, and updates. A Deployment automatically creates the ReplicaSets, and we do not have to worry about managing them.<br />
<br />
===Deployments===<br />
''[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Deployment]'' objects provide declarative updates to Pods and ReplicaSets. The DeploymentController is part of the Master Node's Controller Manager, and it makes sure that the current state always matches the desired state.<br />
<br />
As an example, let's say we have a Deployment which creates a "ReplicaSet A". ReplicaSet A then creates 3 Pods. In each Pod, one of the containers uses the <code>nginx:1.7.9</code> image.<br />
<br />
Now, in the Deployment, we change the Pod's template and we update the image for the Nginx container from <code>nginx:1.7.9</code> to <code>nginx:1.9.1</code>. As we have modified the Pod's template, a new "ReplicaSet B" gets created. This process is referred to as a "Deployment rollout". (A rollout is only triggered when we update the Pod's template for a deployment. Operations like scaling the deployment do not trigger a rollout.) Once ReplicaSet B is ready, the Deployment starts pointing to it.<br />
<br />
On top of ReplicaSets, Deployments provide features like Deployment recording, with which, if something goes wrong, we can roll back to a previously known state.<br />
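The Nginx scenario above might be expressed as follows (a sketch; <code>apps/v1</code> requires Kubernetes 1.9+, and the names are arbitrary):<br />
<pre><br />
apiVersion: apps/v1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
</pre><br />
Updating the image then triggers a rollout, which can be watched and, if needed, rolled back:<br />
 $ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
 $ kubectl rollout status deployment/nginx-deployment<br />
 $ kubectl rollout undo deployment/nginx-deployment<br />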
<br />
===Namespaces===<br />
If we have numerous users whom we would like to organize into teams/projects, we can partition the Kubernetes cluster into sub-clusters using ''[https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ Namespaces]''. The names of the resources/objects created inside a Namespace are unique, but not across Namespaces.<br />
<br />
To list all the Namespaces, we can run the following command:<br />
$ kubectl get namespaces<br />
NAME STATUS AGE<br />
default Active 2h<br />
kube-public Active 2h<br />
kube-system Active 2h<br />
<br />
Generally, Kubernetes creates two default namespaces: <code>kube-system</code> and <code>default</code>. The <code>kube-system</code> namespace contains the objects created by the Kubernetes system. The <code>default</code> namespace contains the objects which do not belong to any other Namespace. By default, we connect to the <code>default</code> Namespace. <code>kube-public</code> is a special namespace, which is readable by all users and used for special purposes, like bootstrapping a cluster. <br />
<br />
Using ''[https://kubernetes.io/docs/concepts/policy/resource-quotas/ Resource Quotas]'', we can divide the cluster resources within Namespaces.<br />
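Creating a Namespace, attaching a quota, and scoping commands to it looks like this (a sketch; the "<code>dev</code>" and "<code>dev-quota</code>" names are arbitrary):<br />
 $ kubectl create namespace dev<br />
 $ kubectl create quota dev-quota --hard=pods=10 --namespace=dev<br />
 $ kubectl get pods --namespace=dev<br />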
<br />
===Component services===<br />
The component services running on a standard master/worker node(s) Kubernetes setup are as follows:<br />
* Kubernetes Master node(s)<br />
*; kube-apiserver : Exposes Kubernetes APIs<br />
*; kube-controller-manager : Runs controllers to handle nodes, endpoints, etc.<br />
*; kube-scheduler : Watches for new pods and assigns them nodes<br />
*; etcd : Distributed key-value store<br />
*; DNS : [optional] DNS for Kubernetes services<br />
* Worker node(s)<br />
*; kubelet : Manages pods on a node, volumes, secrets, creating new containers, health checks, etc.<br />
*; kube-proxy : Maintains network rules, port forwarding, etc.<br />
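On a running cluster, a quick health check of several of these master components is available via the following (deprecated in much newer releases, but fine for the versions discussed here):<br />
 $ kubectl get componentstatuses<br />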
<br />
==Setup a Kubernetes cluster==<br />
<br />
<div style="margin: 10px; padding: 5px; border: 2px solid red;">'''IMPORTANT''': The following is how to setup Kubernetes 1.2 that is, as of January 2018, a very old version. I will update this article with how to setup k8s using a much newer version (v1.9) when I have time.<br />
</div><br />
<br />
In this section, I will show you how to setup a Kubernetes cluster with etcd and Docker. The cluster will consist of 1 master node and 3 worker nodes.<br />
<br />
===Setup VMs===<br />
<br />
For this demo, I will be creating 4 VMs via [[Vagrant]] (with VirtualBox).<br />
<br />
* Create Vagrant demo environment:<br />
$ mkdir $HOME/dev/kubernetes && cd $_<br />
<br />
* Create Vagrantfile with the following contents:<br />
<pre><br />
# -*- mode: ruby -*-<br />
# vi: set ft=ruby :<br />
<br />
require 'yaml'<br />
VAGRANTFILE_API_VERSION = "2"<br />
<br />
$common_script = <<COMMON_SCRIPT<br />
# Set verbose<br />
set -v<br />
# Set exit on error<br />
set -e<br />
echo -e "$(date) [INFO] Starting modified Vagrant..."<br />
sudo yum update -y<br />
# Timestamp provision<br />
date > /etc/vagrant_provisioned_at<br />
COMMON_SCRIPT<br />
<br />
unless defined? CONFIG<br />
configuration_file = File.join(File.dirname(__FILE__), 'vagrant_config.yml')<br />
CONFIG = YAML.load(File.open(configuration_file, File::RDONLY).read)<br />
end<br />
<br />
CONFIG['box'] = {} unless CONFIG.key?('box')<br />
<br />
def modifyvm_network(node)<br />
node.vm.provider "virtualbox" do |vbox|<br />
vbox.customize ["modifyvm", :id, "--nicpromisc1", "allow-all"]<br />
#vbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]<br />
vbox.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]<br />
end<br />
end<br />
<br />
def modifyvm_resources(node, memory, cpus)<br />
node.vm.provider "virtualbox" do |vbox|<br />
vbox.customize ["modifyvm", :id, "--memory", memory]<br />
vbox.customize ["modifyvm", :id, "--cpus", cpus]<br />
end<br />
end<br />
<br />
## START: Actual Vagrant process<br />
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|<br />
<br />
config.vm.box = CONFIG['box']['name']<br />
<br />
# Uncomment the following line if you wish to be able to pass files from<br />
# your local filesystem directly into the vagrant VM:<br />
#config.vm.synced_folder "data", "/vagrant"<br />
<br />
## VM: k8s master #############################################################<br />
config.vm.define "master" do |node|<br />
node.vm.hostname = "k8s.master.dev"<br />
node.vm.provision "shell", inline: $common_script<br />
#node.vm.network "forwarded_port", guest: 80, host: 8080<br />
node.vm.network "private_network", ip: CONFIG['host_groups']['master']<br />
<br />
# Uncomment the following if you wish to define CPU/memory:<br />
#node.vm.provider "virtualbox" do |vbox|<br />
# vbox.customize ["modifyvm", :id, "--memory", "4096"]<br />
# vbox.customize ["modifyvm", :id, "--cpus", "2"]<br />
#end<br />
#modifyvm_resources(node, "4096", "2")<br />
end<br />
## VM: k8s minion1 ############################################################<br />
config.vm.define "minion1" do |node|<br />
node.vm.hostname = "k8s.minion1.dev"<br />
node.vm.provision "shell", inline: $common_script<br />
node.vm.network "private_network", ip: CONFIG['host_groups']['minion1']<br />
end<br />
## VM: k8s minion2 ############################################################<br />
config.vm.define "minion2" do |node|<br />
node.vm.hostname = "k8s.minion2.dev"<br />
node.vm.provision "shell", inline: $common_script<br />
node.vm.network "private_network", ip: CONFIG['host_groups']['minion2']<br />
end<br />
## VM: k8s minion3 ############################################################<br />
config.vm.define "minion3" do |node|<br />
node.vm.hostname = "k8s.minion3.dev"<br />
node.vm.provision "shell", inline: $common_script<br />
node.vm.network "private_network", ip: CONFIG['host_groups']['minion3']<br />
end<br />
###############################################################################<br />
<br />
end<br />
</pre><br />
<br />
The above Vagrantfile uses the following configuration file:<br />
$ cat vagrant_config.yml<br />
<pre><br />
---<br />
box:<br />
name: centos/7<br />
storage_controller: 'SATA Controller'<br />
debug: false<br />
development: false<br />
network:<br />
dns1: 8.8.8.8<br />
dns2: 8.8.4.4<br />
internal:<br />
network: 192.168.200.0/24<br />
external:<br />
start: 192.168.100.100<br />
end: 192.168.100.200<br />
network: 192.168.100.0/24<br />
bridge: wlan0<br />
netmask: 255.255.255.0<br />
broadcast: 192.168.100.255<br />
host_groups:<br />
master: 192.168.200.100<br />
minion1: 192.168.200.101<br />
minion2: 192.168.200.102<br />
minion3: 192.168.200.103<br />
</pre><br />
<br />
* In the Vagrant Kubernetes directory (i.e., <code>$HOME/dev/kubernetes</code>), run the following command:<br />
$ vagrant up<br />
<br />
===Setup hosts===<br />
''Note: Run the following commands/steps on all hosts (master and minions).''<br />
<br />
* Log into the k8s master host:<br />
$ vagrant ssh master<br />
<br />
* Add the Kubernetes cluster hosts to <code>/etc/hosts</code>:<br />
$ cat << EOF >> /etc/hosts<br />
192.168.200.100 k8s.master.dev<br />
192.168.200.101 k8s.minion1.dev<br />
192.168.200.102 k8s.minion2.dev<br />
192.168.200.103 k8s.minion3.dev<br />
EOF<br />
<br />
* Install, enable, and start NTP:<br />
$ yum install -y ntp<br />
$ systemctl enable ntpd && systemctl start ntpd<br />
$ timedatectl<br />
<br />
* Disable any [[iptables|firewall rules]] (for now; we will add the rules back later):<br />
$ systemctl stop firewalld && systemctl disable firewalld<br />
$ systemctl stop iptables<br />
<br />
* Set [[SELinux]] to permissive mode (for now; we will re-enable enforcing mode later):<br />
$ setenforce 0<br />
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/sysconfig/selinux<br />
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config<br />
$ sestatus<br />
<br />
* Add the Docker repo and update yum:<br />
$ cat << EOF > /etc/yum.repos.d/virt7-docker-common-release.repo<br />
[virt7-docker-common-release]<br />
name=virt7-docker-common-release<br />
baseurl=<nowiki>http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/</nowiki><br />
gpgcheck=0<br />
EOF<br />
$ yum update<br />
<br />
* Install Docker, Kubernetes, and etcd:<br />
$ yum install -y --enablerepo=virt7-docker-common-release kubernetes docker etcd<br />
<br />
===Install and configure master controller===<br />
''Note: Run the following commands on only the master host.''<br />
<br />
* Edit <code>/etc/kubernetes/config</code> and add (or make changes to) the following lines:<br />
KUBE_MASTER="--master=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://k8s.master.dev:2379</nowiki>"<br />
<br />
* Edit <code>/etc/etcd/etcd.conf</code> and add (or make changes to) the following lines:<br />
[member]<br />
ETCD_LISTEN_CLIENT_URLS="<nowiki>http://0.0.0.0:2379</nowiki>"<br />
[cluster]<br />
ETCD_ADVERTISE_CLIENT_URLS="<nowiki>http://0.0.0.0:2379</nowiki>"<br />
<br />
* Edit <code>/etc/kubernetes/apiserver</code> and add (or make changes to) the following lines:<br />
<pre><br />
# The address on the local server to listen to.<br />
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"<br />
KUBE_API_ADDRESS="--address=0.0.0.0"<br />
<br />
# The port on the local server to listen on.<br />
KUBE_API_PORT="--port=8080"<br />
<br />
# Port minions listen on<br />
KUBELET_PORT="--kubelet-port=10250"<br />
<br />
# Comma separated list of nodes in the etcd cluster<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://127.0.0.1:2379</nowiki>"<br />
<br />
# Address range to use for services<br />
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"<br />
<br />
# default admission control policies<br />
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"<br />
<br />
# Add your own!<br />
KUBE_API_ARGS=""<br />
</pre><br />
<br />
* Enable and start the following etcd and Kubernetes services:<br />
<br />
$ for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do<br />
systemctl restart $SERVICE<br />
systemctl enable $SERVICE<br />
systemctl status $SERVICE <br />
done<br />
<br />
* Check on the status of the above services (the following command should report 4 running services):<br />
$ systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler | grep "(running)" | wc -l # => 4<br />
<br />
* Check on the status of the Kubernetes API server:<br />
$ kubectl cluster-info<br />
Kubernetes master is running at <nowiki>http://localhost:8080</nowiki><br />
$ curl <nowiki>http://localhost:8080/version</nowiki><br />
#~OR~<br />
$ curl <nowiki>http://k8s.master.dev:8080/version</nowiki><br />
<pre><br />
{<br />
"major": "1",<br />
"minor": "2",<br />
"gitVersion": "v1.2.0",<br />
"gitCommit": "ec7364b6e3b155e78086018aa644057edbe196e5",<br />
"gitTreeState": "clean"<br />
}<br />
</pre><br />
<br />
* Get a list of Kubernetes API paths:<br />
$ curl <nowiki>http://k8s.master.dev:8080/paths</nowiki><br />
<pre><br />
{<br />
"paths": [<br />
"/api",<br />
"/api/v1",<br />
"/apis",<br />
"/apis/autoscaling",<br />
"/apis/autoscaling/v1",<br />
"/apis/batch",<br />
"/apis/batch/v1",<br />
"/apis/extensions",<br />
"/apis/extensions/v1beta1",<br />
"/healthz",<br />
"/healthz/ping",<br />
"/logs/",<br />
"/metrics",<br />
"/resetMetrics",<br />
"/swagger-ui/",<br />
"/swaggerapi/",<br />
"/ui/",<br />
"/version"<br />
]<br />
}<br />
</pre><br />
<br />
* List all available paths (key-value stores) known to etcd:<br />
$ etcdctl ls / --recursive<br />
<br />
The master controller in a Kubernetes cluster must have the following services running to function as the master host in the cluster:<br />
* ntpd<br />
* etcd<br />
* kube-controller-manager<br />
* kube-apiserver<br />
* kube-scheduler<br />
<br />
''Note: The Docker daemon should not be running on the master host.''<br />
<br />
===Install and configure the minions===<br />
''Note: Run the following commands/steps on all minion hosts.''<br />
<br />
* Log into the k8s minion hosts:<br />
$ vagrant ssh minion1 # do the same for minion2 and minion3<br />
<br />
* Edit <code>/etc/kubernetes/config</code> and add (or make changes to) the following lines:<br />
KUBE_MASTER="--master=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://k8s.master.dev:2379</nowiki>"<br />
<br />
* Edit <code>/etc/kubernetes/kubelet</code> and add (or make changes to) the following lines:<br />
<pre><br />
###<br />
# kubernetes kubelet (minion) config<br />
<br />
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)<br />
KUBELET_ADDRESS="--address=0.0.0.0"<br />
<br />
# The port for the info server to serve on<br />
KUBELET_PORT="--port=10250"<br />
<br />
# You may leave this blank to use the actual hostname<br />
KUBELET_HOSTNAME="--hostname-override=k8s.minion1.dev" # ***CHANGE TO CORRECT MINION HOSTNAME***<br />
<br />
# location of the api-server<br />
KUBELET_API_SERVER="--api-servers=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
<br />
# pod infrastructure container<br />
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"<br />
<br />
# Add your own!<br />
KUBELET_ARGS=""<br />
</pre><br />
<br />
* Enable and start the following services:<br />
$ for SERVICE in kube-proxy kubelet docker; do<br />
systemctl restart $SERVICE<br />
systemctl enable $SERVICE<br />
systemctl status $SERVICE<br />
done<br />
<br />
* Test that Docker is running and can start containers:<br />
$ docker info<br />
$ docker pull hello-world<br />
$ docker run hello-world<br />
<br />
Each minion in a Kubernetes cluster must have the following services running to function as a member of the cluster (i.e., a "Ready" node):<br />
* ntpd<br />
* kubelet<br />
* kube-proxy<br />
* docker<br />
<br />
===Kubectl: Exploring our environment===<br />
''Note: Run all of the following commands on the master host.''<br />
<br />
* Get a list of nodes with <code>kubectl</code>:<br />
$ kubectl get nodes<br />
<pre><br />
NAME STATUS AGE<br />
k8s.minion1.dev Ready 20m<br />
k8s.minion2.dev Ready 12m<br />
k8s.minion3.dev Ready 12m<br />
</pre><br />
<br />
* Describe nodes with <code>kubectl</code>:<br />
<br />
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'<br />
$ kubectl get nodes -o jsonpath='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' | tr ';' "\n"<br />
<pre><br />
k8s.minion1.dev:OutOfDisk=False<br />
Ready=True<br />
k8s.minion2.dev:OutOfDisk=False<br />
Ready=True<br />
k8s.minion3.dev:OutOfDisk=False<br />
Ready=True<br />
</pre><br />
<br />
* Get the man page for <code>kubectl</code>:<br />
$ man kubectl-get<br />
<br />
==Working with our Kubernetes cluster==<br />
<br />
''Note: The following section will be working from within the Kubernetes cluster we created above.''<br />
<br />
===Create and deploy pod definitions===<br />
<br />
* Turn off nodes 2 and 3:<br />
 minion{2,3}$ systemctl stop kubelet kube-proxy<br />
<br />
master$ kubectl get nodes<br />
<pre><br />
NAME STATUS AGE<br />
k8s.minion1.dev Ready 1h<br />
k8s.minion2.dev NotReady 37m<br />
k8s.minion3.dev NotReady 39m<br />
</pre><br />
<br />
* Check for any k8s Pods (there should be none):<br />
master$ kubectl get pods<br />
<br />
* Create a builds directory for our Pods:<br />
master$ mkdir builds && cd $_<br />
<br />
* Create a Pod running Nginx inside a Docker container:<br />
<pre><br />
master$ kubectl create -f - <<EOF<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
<br />
* Check on Pod creation status:<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx 0/1 ContainerCreating 0 2s<br />
</pre><br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx 1/1 Running 0 3m<br />
</pre><br />
<br />
minion1$ docker ps<br />
<pre><br />
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES<br />
a718c6c0355d nginx:1.7.9 "nginx -g 'daemon off" 3 minutes ago Up 3 minutes k8s_nginx.4580025_nginx_default_699e...<br />
</pre><br />
<br />
master$ kubectl describe pod nginx<br />
<br />
master$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1<br />
busybox$ wget -qO- 172.17.0.2<br />
master$ kubectl delete pod busybox<br />
master$ kubectl delete pod nginx<br />
<br />
* Port forwarding:<br />
master$ kubectl create -f nginx.yml # see above for YAML<br />
master$ kubectl port-forward nginx :80 &<br />
I1020 23:12:29.478742 23394 portforward.go:213] Forwarding from [::1]:40065 -> 80<br />
master$ curl -I localhost:40065<br />
<br />
===Tags, labels, and selectors===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-pod-label.yml<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: nginx<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create -f nginx-pod-label.yml<br />
master$ kubectl get pods -l app=nginx<br />
master$ kubectl describe pods -l app=nginx<br />
<br />
* Add labels or overwrite existing ones:<br />
master$ kubectl label pods nginx new-label=mynginx<br />
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'<br />
 new-label=mynginx<br />
master$ kubectl label pods nginx new-label=foo<br />
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'<br />
new-label=foo<br />
<br />
===Deployments===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-dev.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
name: nginx-deployment-dev<br />
spec:<br />
replicas: 1<br />
template:<br />
metadata:<br />
labels:<br />
app: nginx-deployment-dev<br />
spec:<br />
containers:<br />
- name: nginx-deployment-dev<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-prod.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
name: nginx-deployment-prod<br />
spec:<br />
replicas: 1<br />
template:<br />
metadata:<br />
labels:<br />
app: nginx-deployment-prod<br />
spec:<br />
containers:<br />
- name: nginx-deployment-prod<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create --validate -f nginx-deployment-dev.yml<br />
master$ kubectl create --validate -f nginx-deployment-prod.yml<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deployment-dev-104434401-jiiic 1/1 Running 0 5m<br />
nginx-deployment-prod-3051195443-hj9b1 1/1 Running 0 12m<br />
</pre><br />
<br />
master$ kubectl describe deployments -l app=nginx-deployment-dev<br />
<pre><br />
Name: nginx-deployment-dev<br />
Namespace: default<br />
CreationTimestamp: Thu, 20 Oct 2016 23:48:46 +0000<br />
Labels: app=nginx-deployment-dev<br />
Selector: app=nginx-deployment-dev<br />
Replicas: 1 updated | 1 total | 1 available | 0 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 1 max unavailable, 1 max surge<br />
OldReplicaSets: <none><br />
NewReplicaSet: nginx-deployment-dev-2568522567 (1/1 replicas created)<br />
...<br />
</pre><br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment-prod 1 1 1 1 44s<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-dev-update.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
name: nginx-deployment-dev<br />
spec:<br />
replicas: 1<br />
template:<br />
metadata:<br />
labels:<br />
app: nginx-deployment-dev<br />
spec:<br />
containers:<br />
- name: nginx-deployment-dev<br />
image: nginx:1.8 # ***CHANGED***<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
master$ kubectl apply -f nginx-deployment-dev-update.yml<br />
master$ kubectl get pods -l app=nginx-deployment-dev<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deployment-dev-104434401-jiiic 0/1 ContainerCreating 0 27s<br />
</pre><br />
master$ kubectl get pods -l app=nginx-deployment-dev<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deployment-dev-104434401-jiiic 1/1 Running 0 6m<br />
</pre><br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment nginx-deployment-dev<br />
master$ kubectl delete deployment nginx-deployment-prod<br />
<br />
===Multi-Pod (container) replication controller===<br />
<br />
* Start the other two nodes (the ones we previously stopped):<br />
minion2$ systemctl start kubelet kube-proxy<br />
minion3$ systemctl start kubelet kube-proxy<br />
master$ kubectl get nodes<br />
<pre><br />
NAME STATUS AGE<br />
k8s.minion1.dev Ready 2h<br />
k8s.minion2.dev Ready 2h<br />
k8s.minion3.dev Ready 2h<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-multi-node.yml<br />
---<br />
apiVersion: v1<br />
kind: ReplicationController<br />
metadata:<br />
name: nginx-www<br />
spec:<br />
replicas: 3<br />
selector:<br />
app: nginx<br />
template:<br />
metadata:<br />
name: nginx<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create -f nginx-multi-node.yml<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-2evxu 0/1 ContainerCreating 0 10s<br />
nginx-www-416ct 0/1 ContainerCreating 0 10s<br />
nginx-www-ax41w 0/1 ContainerCreating 0 10s<br />
</pre><br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-2evxu 1/1 Running 0 1m<br />
nginx-www-416ct 1/1 Running 0 1m<br />
nginx-www-ax41w 1/1 Running 0 1m<br />
</pre><br />
<br />
master$ kubectl describe pods | awk '/^Node/{print $2}'<br />
<pre><br />
k8s.minion2.dev/192.168.200.102<br />
k8s.minion1.dev/192.168.200.101<br />
k8s.minion3.dev/192.168.200.103<br />
</pre><br />
<br />
minion1$ docker ps # 1 nginx container running<br />
minion2$ docker ps # 1 nginx container running<br />
minion3$ docker ps # 1 nginx container running<br />
minion3$ docker ps --format "<nowiki>{{.Image}}</nowiki>"<br />
<pre><br />
nginx<br />
gcr.io/google_containers/pause:2.0<br />
</pre><br />
<br />
master$ kubectl describe replicationcontroller<br />
<pre><br />
Name: nginx-www<br />
Namespace: default<br />
Image(s): nginx<br />
Selector: app=nginx<br />
Labels: app=nginx<br />
Replicas: 3 current / 3 desired<br />
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed<br />
...<br />
</pre><br />
<br />
* Attempt to delete one of the three pods:<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-2evxu 1/1 Running 0 11m<br />
nginx-www-416ct 1/1 Running 0 11m<br />
nginx-www-ax41w 1/1 Running 0 11m<br />
</pre><br />
master$ kubectl delete pod nginx-www-2evxu<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-3cck4 1/1 Running 0 12s<br />
nginx-www-416ct 1/1 Running 0 11m<br />
nginx-www-ax41w 1/1 Running 0 11m<br />
</pre><br />
<br />
A new pod (<code>nginx-www-3cck4</code>) automatically started up. This is because the expected state, as defined in our YAML file, is for 3 pods to be running at all times. Thus, if one or more of the pods go down, a new pod (or pods) will automatically start up to bring the cluster back to the expected state.<br />
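<br />
To change that expected state on the fly, you can scale the ReplicationController directly (a standard kubectl operation; the replica count of 5 here is just for illustration):<br />
 master$ kubectl scale replicationcontroller nginx-www --replicas=5<br />
 master$ kubectl get pods # should now list 5 nginx-www pods<br />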
<br />
* To force-delete all pods:<br />
master$ kubectl delete replicationcontroller nginx-www<br />
master$ kubectl get pods # nothing<br />
<br />
===Create and deploy service definitions===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-service.yml<br />
---<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
name: nginx-service<br />
spec:<br />
ports:<br />
- port: 8000<br />
targetPort: 80<br />
protocol: TCP<br />
selector:<br />
app: nginx<br />
EOF<br />
</pre><br />
<br />
master$ kubectl get services<br />
<pre><br />
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes 10.254.0.1 <none> 443/TCP 3h<br />
</pre><br />
master$ kubectl create -f nginx-service.yml<br />
<br />
master$ kubectl get services<br />
<pre><br />
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes 10.254.0.1 <none> 443/TCP 3h<br />
nginx-service 10.254.110.127 <none> 8000/TCP 10s<br />
</pre><br />
<br />
master$ kubectl run busybox --generator=run-pod/v1 --image=busybox --restart=Never --tty -i<br />
busybox$ wget -qO- 10.254.110.127:8000 # works<br />
<br />
* Cleanup<br />
master$ kubectl delete pod busybox<br />
master$ kubectl delete service nginx-service<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-jh2e9 1/1 Running 0 13m<br />
nginx-www-jir2g 1/1 Running 0 13m<br />
nginx-www-w91uw 1/1 Running 0 13m<br />
</pre><br />
master$ kubectl delete replicationcontroller nginx-www<br />
master$ kubectl get pods # nothing<br />
<br />
===Creating temporary Pods at the CLI===<br />
<br />
* Make sure we have no Pods running:<br />
master$ kubectl get pods<br />
<br />
* Create temporary deployment pod:<br />
master$ kubectl run mysample --image=foobar/apache<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
mysample-1424711890-fhtxb 0/1 ContainerCreating 0 1s<br />
</pre><br />
master$ kubectl get deployment <br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
mysample 1 1 1 0 7s<br />
</pre><br />
<br />
* Create a temporary deployment pod (where we know it will fail):<br />
master$ kubectl run myexample --image=christophchamp/ubuntu_sysadmin<br />
 master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myexample-3534121234-mpr35 0/1 CrashLoopBackOff 12 39m k8s.minion3.dev<br />
mysample-2812764540-74c5h 1/1 Running 0 41m k8s.minion2.dev<br />
</pre><br />
<br />
* Check on why the "myexample" pod is in status "CrashLoopBackOff":<br />
master$ kubectl describe pods/myexample-3534121234-mpr35<br />
master$ kubectl describe deployments/mysample<br />
master$ kubectl describe pods/mysample-2812764540-74c5h | awk '/^Node/{print $2}'<br />
k8s.minion2.dev/192.168.200.102<br />
<br />
master$ kubectl delete deployment mysample<br />
<br />
* Run multiple replicas of the same pod:<br />
master$ kubectl run myreplicas --image=latest123/apache --replicas=2 --labels=app=myapache,version=1.0.0<br />
master$ kubectl describe deployment myreplicas <br />
<pre><br />
Name: myreplicas<br />
Namespace: default<br />
CreationTimestamp: Fri, 21 Oct 2016 19:10:30 +0000<br />
Labels: app=myapache,version=1.0.0<br />
Selector: app=myapache,version=1.0.0<br />
Replicas: 2 updated | 2 total | 1 available | 1 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 1 max unavailable, 1 max surge<br />
OldReplicaSets: <none><br />
NewReplicaSet: myreplicas-2209834598 (2/2 replicas created)<br />
...<br />
</pre><br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myreplicas-2209834598-5iyer 1/1 Running 0 1m k8s.minion1.dev<br />
myreplicas-2209834598-cslst 1/1 Running 0 1m k8s.minion2.dev<br />
</pre><br />
<br />
master$ kubectl describe pods -l version=1.0.0<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment myreplicas<br />
<br />
===Interacting with Pod containers===<br />
<br />
* Create example Apache pod definition file:<br />
<pre><br />
master$ cat << EOF > apache.yml<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: apache<br />
spec:<br />
containers:<br />
- name: apache<br />
image: latest123/apache<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
master$ kubectl create -f apache.yml<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
apache 1/1 Running 0 12m k8s.minion3.dev<br />
</pre><br />
<br />
* Test pod and make some basic configuration changes:<br />
master$ kubectl exec apache date<br />
 master$ kubectl exec apache -i -t -- cat /var/www/html/index.html # default apache HTML<br />
master$ kubectl exec apache -i -t -- /bin/bash<br />
container$ export TERM=xterm<br />
container$ echo "xtof test" > /var/www/html/index.html<br />
minion3$ curl 172.17.0.2<br />
xtof test<br />
container$ exit<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
apache 1/1 Running 0 12m k8s.minion3.dev<br />
</pre><br />
The Pod/container is still running even after we exited (as expected).<br />
<br />
* Cleanup:<br />
master$ kubectl delete pod apache<br />
<br />
===Logs===<br />
<br />
* Start our example Apache pod to use for checking Kubernetes logging features:<br />
master$ kubectl create -f apache.yml <br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
apache 1/1 Running 0 9s<br />
</pre><br />
master$ kubectl logs apache<br />
<pre><br />
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message<br />
</pre><br />
master$ kubectl logs --tail=10 apache<br />
master$ kubectl logs --since=24h apache # or 10s, 2m, etc.<br />
master$ kubectl logs -f apache # follow the logs<br />
 master$ kubectl logs -f -c apache apache # where -c specifies the container name<br />
<br />
* Cleanup:<br />
master$ kubectl delete pod apache<br />
<br />
===Autoscaling and scaling Pods===<br />
<br />
master$ kubectl run myautoscale --image=latest123/apache --port=80 --labels=app=myautoscale<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 47s k8s.minion3.dev<br />
</pre><br />
<br />
* Create an autoscale definition:<br />
master$ kubectl autoscale deployment myautoscale --min=2 --max=6 --cpu-percent=80<br />
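<br />
* The <code>autoscale</code> command creates a HorizontalPodAutoscaler object behind the scenes, which you can inspect with:<br />
 master$ kubectl get hpa myautoscale<br />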
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myautoscale 2 2 2 2 4m<br />
</pre><br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 3m k8s.minion3.dev<br />
myautoscale-3243017378-r2f3d 1/1 Running 0 4s k8s.minion2.dev<br />
</pre><br />
<br />
* Scale up an already autoscaled deployment:<br />
master$ kubectl scale --current-replicas=2 --replicas=4 deployment/myautoscale<br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myautoscale 4 4 4 4 8m<br />
</pre><br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-2rxhp 1/1 Running 0 8s k8s.minion1.dev<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 7m k8s.minion3.dev<br />
myautoscale-3243017378-ozxs8 1/1 Running 0 8s k8s.minion3.dev<br />
myautoscale-3243017378-r2f3d 1/1 Running 0 4m k8s.minion2.dev<br />
</pre><br />
<br />
* Scale down:<br />
master$ kubectl scale --current-replicas=4 --replicas=2 deployment/myautoscale<br />
<br />
''Note: You cannot scale down below the minimum number of pods/containers specified in the original autoscale definition (i.e., <code>--min=2</code> in our example).''<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment myautoscale<br />
<br />
===Failure and recovery===<br />
<br />
master$ kubectl run myrecovery --image=latest123/apache --port=80 --replicas=2 --labels=app=myrecovery<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myrecovery 2 2 2 2 6s<br />
</pre><br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-5xu8f 1/1 Running 0 12s k8s.minion1.dev<br />
myrecovery-563119102-zw6wp 1/1 Running 0 12s k8s.minion2.dev<br />
</pre><br />
<br />
* Now stop Kubernetes- and Docker-related services on one of the minions/nodes (so we have a total of 2 nodes online):<br />
minion1$ systemctl stop docker kubelet kube-proxy<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-qyi04 1/1 Running 0 7m k8s.minion3.dev<br />
myrecovery-563119102-zw6wp 1/1 Running 0 14m k8s.minion2.dev<br />
</pre><br />
The Pod switched from minion1 to minion3.<br />
<br />
* Now stop Kubernetes- and Docker-related services on one of the remaining online minions/nodes (so we have a total of 1 node online):<br />
minion2$ systemctl stop docker kubelet kube-proxy<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-b5tim 1/1 Running 0 2m k8s.minion3.dev<br />
myrecovery-563119102-qyi04 1/1 Running 0 17m k8s.minion3.dev<br />
</pre><br />
Both Pods are now running on minion3, the only available node.<br />
<br />
* Start up Kubernetes- and Docker-related services again on minion1 and delete one of the Pods:<br />
minion1$ systemctl start docker kubelet kube-proxy<br />
master$ kubectl delete pod myrecovery-563119102-b5tim<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-8unzg 1/1 Running 0 1m k8s.minion1.dev<br />
myrecovery-563119102-qyi04 1/1 Running 0 20m k8s.minion3.dev<br />
</pre><br />
Pods are now running on separate nodes.<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployments/myrecovery<br />
<br />
==Minikube==<br />
[https://github.com/kubernetes/minikube Minikube] is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.<br />
<br />
* Install Minikube:<br />
$ curl -Lo minikube <nowiki>https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64</nowiki> \<br />
&& chmod +x minikube && sudo mv minikube /usr/local/bin/<br />
<br />
* Install kubectl<br />
$ curl -Lo kubectl <nowiki>https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl</nowiki> \<br />
&& chmod +x kubectl && sudo mv kubectl /usr/local/bin/<br />
<br />
* Test install<br />
$ minikube start<br />
#~OR~<br />
$ minikube start --memory 4096 # give it 4GB of RAM<br />
$ minikube status<br />
$ minikube dashboard<br />
$ kubectl config view<br />
$ kubectl cluster-info<br />
<br />
NOTE: If you have an old version of minikube installed, you should probably do the following before upgrading to a much newer version:<br />
$ minikube delete --all --purge<br />
<br />
Get the details on the CLI options for kubectl [https://kubernetes.io/docs/reference/kubectl/overview/ here].<br />
<br />
Using the <code>kubectl proxy</code> command, kubectl authenticates with the API Server on the Master Node and makes the dashboard available at <nowiki>http://localhost:8001/ui</nowiki>:<br />
<br />
$ kubectl proxy<br />
Starting to serve on 127.0.0.1:8001<br />
<br />
After running the above command, we can access the dashboard at <code><nowiki>http://127.0.0.1:8001/ui</nowiki></code>.<br />
<br />
Once the kubectl proxy is configured, we can send requests to localhost on the proxy port:<br />
<br />
$ curl <nowiki>http://localhost:8001/</nowiki><br />
$ curl <nowiki>http://localhost:8001/version</nowiki><br />
<pre><br />
{<br />
"major": "1",<br />
"minor": "8",<br />
"gitVersion": "v1.8.0",<br />
"gitCommit": "0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4",<br />
"gitTreeState": "clean",<br />
"buildDate": "2017-11-29T22:43:34Z",<br />
"goVersion": "go1.9.1",<br />
"compiler": "gc",<br />
"platform": "linux/amd64"<br />
}<br />
</pre><br />
<br />
Without kubectl proxy configured, we can get the Bearer Token using kubectl and then send it with the API request. A Bearer Token is an access token generated by the authentication server (the API server on the Master Node) and handed back to the client. Using that token, the client can connect to the Kubernetes API server without providing further authentication details, and then access resources.<br />
<br />
* Get the k8s token:<br />
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | awk '/^default/{print $1}') | awk '/^token/{print $2}')<br />
<br />
* Get the k8s API server endpoint:<br />
$ APISERVER=$(kubectl config view | awk '/https/{print $2}')<br />
<br />
* Access the API Server:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}<br />
<br />
===Using Minikube as a local Docker registry===<br />
<br />
Sometimes it is useful to have a local Docker registry for Kubernetes to pull images from. As the Minikube [https://github.com/kubernetes/minikube/blob/0c616a6b42b28a1aab8397f5a9061f8ebbd9f3d9/README.md#reusing-the-docker-daemon README] describes, you can reuse the Docker daemon running within Minikube with <code>eval $(minikube docker-env)</code>, so that images you build locally are immediately available to the cluster.<br />
<br />
To use an image without uploading it to some external registry (e.g., Docker Hub), you can follow these steps:<br />
* Set the environment variables with <code>eval $(minikube docker-env)</code><br />
* Build the image with the Docker daemon of Minikube (e.g., <code>docker build -t my-image .</code>)<br />
* Set the image in the Pod spec to the build tag (e.g., <code>my-image</code>)<br />
* Set the <code>imagePullPolicy</code> to <code>Never</code>, otherwise Kubernetes will try to download the image.<br />
<br />
Important note: You have to run <code>eval $(minikube docker-env)</code> on each terminal you want to use since it only sets the environment variables for the current shell session.<br />
<br />
===Working with our Minikube-based Kubernetes cluster===<br />
<br />
;Kubernetes Object Model<br />
<br />
Kubernetes has a very rich object model, with which it represents different persistent entities in the Kubernetes cluster. Those entities describe:<br />
<br />
* What containerized applications we are running and on which node<br />
* Application resource consumption<br />
* Different policies attached to applications, like restart/upgrade policies, fault tolerance, etc.<br />
<br />
With each object, we declare our intent or desired state using the '''spec''' field. The Kubernetes system manages the '''status''' field for objects, in which it records the actual state of the object. At any given point in time, the Kubernetes Control Plane tries to match the object's actual state to the object's desired state.<br />
<br />
Examples of Kubernetes objects are Pods, Deployments, ReplicaSets, etc.<br />
<br />
To create an object, we need to provide the '''spec''' field to the Kubernetes API Server. The '''spec''' field describes the desired state, along with some basic information, like the name. The API request to create the object must have the '''spec''' field, as well as other details, in a JSON format. Most often, we provide an object's definition in a YAML file, which kubectl converts into a JSON payload and sends to the API Server.<br />
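<br />
To see the JSON kubectl would send for a given YAML file, you can do a client-side dry run (a quick illustration; <code>nginx-deployment.yml</code> is assumed to hold the Deployment shown below):<br />
 $ kubectl create -f nginx-deployment.yml --dry-run -o json<br />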
<br />
Below is an example of a ''Deployment'' object:<br />
<pre><br />
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
name: nginx-deployment<br />
labels:<br />
app: nginx<br />
spec:<br />
replicas: 3<br />
selector:<br />
matchLabels:<br />
app: nginx<br />
template:<br />
metadata:<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
</pre><br />
<br />
With the '''apiVersion''' field in the example above, we mention the API endpoint on the API Server which we want to connect to. Note that you can see what API version to use with the following call to the API server:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}/apis/apps<br />
Use the '''preferredVersion''' for most cases.<br />
<br />
With the '''kind''' field, we mention the object type &mdash; in our case, we have '''Deployment'''. With the '''metadata''' field, we attach the basic information to objects, like the name. Notice that in the above we have two '''spec''' fields ('''spec''' and '''spec.template.spec'''). With '''spec''', we define the desired state of the deployment. In our example, we want to make sure that, at any point in time, at least 3 ''Pods'' are running, which are created using the Pod template defined in '''spec.template'''. In '''spec.template.spec''', we define the desired state of the Pod (here, our Pod would be created using nginx:1.7.9).<br />
<br />
Once the object is created, the Kubernetes system attaches the '''status''' field to the object.<br />
<br />
;Connecting users to Pods<br />
<br />
To access the application, a user/client needs to connect to the Pods. As Pods are ephemeral in nature, resources like IP addresses allocated to it cannot be static. Pods could die abruptly or be rescheduled based on existing requirements.<br />
<br />
As an example, consider a scenario in which a user/client is connecting to a Pod using its IP address. Unexpectedly, the Pod to which the user/client is connected dies and a new Pod is created by the controller. The new Pod will have a new IP address, which will not be known automatically to the user/client of the earlier Pod. To overcome this situation, Kubernetes provides a higher-level abstraction called ''[https://kubernetes.io/docs/concepts/services-networking/service/ Service]'', which logically groups Pods and a policy to access them. This grouping is achieved via Labels and Selectors (see above).<br />
<br />
So, for our example, we would use Selectors (e.g., "<code>app==frontend</code>" and "<code>app==db</code>") to group our Pods into two logical groups. We can assign a name to the logical grouping, referred to as a "service name". In our example, we have created two Services, <code>frontend-svc</code> and <code>db-svc</code>, and they have the "<code>app==frontend</code>" and the "<code>app==db</code>" Selectors, respectively.<br />
<br />
The following is an example of a Service object:<br />
<pre><br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
name: frontend-svc<br />
spec:<br />
selector:<br />
app: frontend<br />
ports:<br />
- protocol: TCP<br />
port: 80<br />
targetPort: 5000<br />
</pre><br />
<br />
In this example, we create a <code>frontend-svc</code> Service by selecting all the Pods that have the Label "<code>app</code>" equal to "<code>frontend</code>". By default, each Service also gets an IP address, which is routable only inside the cluster. In our case, we have 172.17.0.4 and 172.17.0.5 as the IP addresses for our <code>frontend-svc</code> and <code>db-svc</code> Services, respectively. The IP address attached to each Service is also known as the ClusterIP for that Service.<br />
<br />
+------------------------------------+<br />
| select: app==frontend | container (app:frontend; 10.0.1.3)<br />
| service=frontend-svc (172.17.0.4) |------> container (app:frontend; 10.0.1.4)<br />
+------------------------------------+ container (app:frontend; 10.0.1.5)<br />
^<br />
/<br />
/<br />
user/client<br />
\<br />
\<br />
v<br />
+------------------------------------+<br />
| select: app==db |------> container (app:db; 10.0.1.10)<br />
| service=db-svc (172.17.0.5) |<br />
+------------------------------------+<br />
<br />
The user/client now connects to a Service via ''its'' IP address, which forwards the traffic to one of the Pods attached to it. A Service does the load balancing while selecting the Pods for forwarding the data/traffic.<br />
<br />
While forwarding the traffic from the Service, we can select the target port on the Pod. In our example, for <code>frontend-svc</code>, we will receive requests from the user/client on port 80. We will then forward these requests to one of the attached Pods on port 5000. If the target port is not defined explicitly, then traffic will be forwarded to Pods on the port on which the Service receives traffic.<br />
<br />
A tuple of a Pod's IP address and the <code>targetPort</code> is referred to as a ''Service Endpoint''. In our case, <code>frontend-svc</code> has 3 Endpoints: <code>10.0.1.3:5000</code>, <code>10.0.1.4:5000</code>, and <code>10.0.1.5:5000</code>.<br />
<br />
===kube-proxy===<br />
All of the Worker Nodes run a daemon called kube-proxy, which watches the API Server on the Master Node for the addition and removal of Services and endpoints. For each new Service, on each node, kube-proxy configures the IPtables rules to capture the traffic for its ClusterIP and forwards it to one of the endpoints. When the Service is removed, kube-proxy removes the IPtables rules on all nodes as well.<br />
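<br />
You can inspect these rules on any Worker Node; in iptables mode, kube-proxy programs a <code>KUBE-SERVICES</code> chain in the nat table (a quick look, assuming root access on the node):<br />
 node$ iptables -t nat -L KUBE-SERVICES -n | head<br />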
<br />
===Service discovery===<br />
As Services are the primary mode of communication in Kubernetes, we need a way to discover them at runtime. Kubernetes supports two methods of discovering a Service:<br />
<br />
;Environment Variables : As soon as the Pod starts on any Worker Node, the kubelet daemon running on that node adds a set of environment variables in the Pod for all active Services. For example, if we have an active Service called <code>redis-master</code>, which exposes port 6379, and its ClusterIP is 172.17.0.6, then, on a newly created Pod, we can see the following environment variables:<br />
<br />
REDIS_MASTER_SERVICE_HOST=172.17.0.6<br />
REDIS_MASTER_SERVICE_PORT=6379<br />
REDIS_MASTER_PORT=tcp://172.17.0.6:6379<br />
REDIS_MASTER_PORT_6379_TCP=tcp://172.17.0.6:6379<br />
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp<br />
REDIS_MASTER_PORT_6379_TCP_PORT=6379<br />
REDIS_MASTER_PORT_6379_TCP_ADDR=172.17.0.6<br />
<br />
With this solution, we need to be careful while ordering our Services, as the Pods will not have the environment variables set for Services which are created after the Pods are created.<br />
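<br />
A quick way to confirm which Service variables a Pod received is to dump its environment (<code>redis-master</code> being the hypothetical Service from the example above):<br />
 $ kubectl exec <pod-name> -- env | grep ^REDIS_MASTER<br />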
<br />
;DNS : Kubernetes has an add-on for DNS, which creates a DNS record for each Service in the format <code>my-svc.my-namespace.svc.cluster.local</code>. Services within the same Namespace can reach each other with just their name. For example, if we add a Service <code>redis-master</code> in the <code>my-ns</code> Namespace, then all the Pods in the same Namespace can reach the redis Service just by using its name, <code>redis-master</code>. Pods from other Namespaces can reach the Service by adding the respective Namespace as a suffix, like <code>redis-master.my-ns</code>.<br />
: This is the most common and highly recommended solution. For example, in the diagram in the previous section, an internal DNS is configured, which maps our Services <code>frontend-svc</code> and <code>db-svc</code> to 172.17.0.4 and 172.17.0.5, respectively.<br />
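: A quick test of DNS-based discovery is to resolve the Service name from a throwaway Pod (using busybox's <code>nslookup</code>; the Service and Namespace names are the hypothetical ones from above):<br />
  $ kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup redis-master.my-ns<br />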
<br />
===Service Type===<br />
While defining a Service, we can also choose its access scope. We can decide whether the Service:<br />
<br />
* is only accessible within the cluster;<br />
* is accessible from within the cluster and the external world; or<br />
* maps to an external entity which resides outside the cluster.<br />
<br />
Access scope is decided by ''ServiceType'', which can be mentioned when creating the Service.<br />
<br />
;ClusterIP : The default ''ServiceType''. A Service gets its Virtual IP address using the ClusterIP. That IP address is used for communicating with the Service and is accessible only within the cluster.<br />
<br />
;NodePort : With this ''ServiceType'', in addition to creating a ClusterIP, a port from the range '''30000-32767''' is mapped to the respective service from all the Worker Nodes. For example, if the mapped NodePort is 32233 for the service <code>frontend-svc</code>, then, if we connect to any Worker Node on port 32233, the node would redirect all the traffic to the assigned ClusterIP (172.17.0.4).<br />
: By default, while exposing a NodePort, a random port is automatically selected by the Kubernetes Master from the port range '''30000-32767'''. If we do not want a dynamically assigned NodePort, we can specify a port number from that range when creating the Service.<br />
: The NodePort ServiceType is useful when we want to make our services accessible from the external world. The end-user connects to the Worker Nodes on the specified port, which forwards the traffic to the applications running inside the cluster. To access the application from the external world, administrators can configure a reverse proxy outside the Kubernetes cluster and map the specific endpoint to the respective port on the Worker Nodes.<br />
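: As a sketch, a NodePort Service pinning the port from the example above could look like the following (the explicit <code>nodePort</code> is optional; omit it and the Master picks one from the '''30000-32767''' range):<br />
<pre><br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: frontend-svc<br />
spec:<br />
  type: NodePort<br />
  selector:<br />
    app: frontend<br />
  ports:<br />
  - port: 80<br />
    targetPort: 5000<br />
    nodePort: 32233<br />
</pre><br />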
<br />
;LoadBalancer: With this ''ServiceType'', we have the following:<br />
:* NodePort and ClusterIP Services are automatically created, and the external load balancer will route to them;<br />
:* The Services are exposed at a static port on each Worker Node; and<br />
:* The Service is exposed externally using the underlying Cloud provider's load balancer feature.<br />
: The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of load balancers and has the respective support in Kubernetes, as is the case with the Google Cloud Platform and AWS.<br />
<br />
;ExternalIP : A Service can be mapped to an ExternalIP address if it can route to one or more of the Worker Nodes. Traffic that ingresses into the cluster with the ExternalIP (as the destination IP) on the Service port gets routed to one of the Service endpoints. (Note that ExternalIPs are not managed by Kubernetes; the cluster administrator(s) must have configured the routing to map the ExternalIP address to one of the nodes.)<br />
<br />
;ExternalName : a special ''ServiceType'', which has no Selectors and does not define any endpoints. When accessed within the cluster, it returns a CNAME record of an externally configured service.<br />
: The primary use case of this ServiceType is to make externally configured services like <code>my-database.example.com</code> available inside the cluster, using just the name, like <code>my-database</code>, to other services inside the same Namespace.<br />
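: A minimal sketch of such a Service, using the example names above:<br />
<pre><br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: my-database<br />
spec:<br />
  type: ExternalName<br />
  externalName: my-database.example.com<br />
</pre><br />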
<br />
===Deploying an application===<br />
<br />
<pre><br />
$ kubectl create -f - <<EOF<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
name: webserver<br />
spec:<br />
replicas: 3<br />
template:<br />
metadata:<br />
labels:<br />
app: webserver<br />
spec:<br />
containers:<br />
- name: webserver<br />
image: nginx:alpine<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
<br />
<pre><br />
$ kubectl create -f - <<EOF<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
name: web-service<br />
labels:<br />
run: web-service<br />
spec:<br />
type: NodePort<br />
ports:<br />
- port: 80<br />
protocol: TCP<br />
selector:<br />
app: webserver<br />
EOF<br />
</pre><br />
<br />
$ kubectl get service<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h<br />
web-service NodePort 10.104.107.132 <none> 80:32610/TCP 7m<br />
<br />
Note that "<code>32610</code>" port.<br />
<br />
* Get the IP address of your Minikube k8s cluster<br />
$ minikube ip<br />
192.168.99.100<br />
#~OR~<br />
$ minikube service web-service --url<br />
<nowiki>http://192.168.99.100:32610</nowiki><br />
<br />
* Now, check that your web service is serving up a default Nginx website:<br />
$ curl -I <nowiki>http://192.168.99.100:32610</nowiki><br />
HTTP/1.1 200 OK<br />
Server: nginx/1.13.8<br />
Date: Thu, 11 Jan 2018 00:27:51 GMT<br />
Content-Type: text/html<br />
Content-Length: 612<br />
Last-Modified: Wed, 10 Jan 2018 04:10:03 GMT<br />
Connection: keep-alive<br />
ETag: "5a55921b-264"<br />
Accept-Ranges: bytes<br />
<br />
Looks good!<br />
<br />
Finally, destroy the webserver deployment:<br />
$ kubectl delete deployments webserver<br />
<br />
===Using Ingress with Minikube===<br />
<br />
* First check that the Ingress add-on is enabled:<br />
$ minikube addons list | grep ingress<br />
- ingress: disabled<br />
<br />
If it is not, enable it with:<br />
$ minikube addons enable ingress<br />
$ minikube addons list | grep ingress<br />
- ingress: enabled<br />
<br />
* Create an Echo Server Deployment:<br />
<pre><br />
$ cat << EOF >deploy-echoserver.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
labels:<br />
run: echoserver<br />
name: echoserver<br />
namespace: default<br />
spec:<br />
replicas: 1<br />
selector:<br />
matchLabels:<br />
run: echoserver<br />
template:<br />
metadata:<br />
labels:<br />
run: echoserver<br />
spec:<br />
containers:<br />
- image: gcr.io/google_containers/echoserver:1.4<br />
imagePullPolicy: IfNotPresent<br />
name: echoserver<br />
ports:<br />
- containerPort: 8080<br />
protocol: TCP<br />
dnsPolicy: ClusterFirst<br />
restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-echoserver.yml<br />
<br />
* Create the Cheddar cheese Deployment:<br />
<pre><br />
$ cat << EOF >deploy-cheddar-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
labels:<br />
run: cheddar-cheese<br />
name: cheddar-cheese<br />
namespace: default<br />
spec:<br />
replicas: 1<br />
selector:<br />
matchLabels:<br />
run: cheddar-cheese<br />
template:<br />
metadata:<br />
labels:<br />
run: cheddar-cheese<br />
spec:<br />
containers:<br />
- image: errm/cheese:cheddar<br />
imagePullPolicy: IfNotPresent<br />
name: cheddar-cheese<br />
ports:<br />
- containerPort: 80<br />
protocol: TCP<br />
dnsPolicy: ClusterFirst<br />
restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-cheddar-cheese.yml<br />
<br />
* Create the Stilton cheese Deployment:<br />
<pre><br />
$ cat << EOF >deploy-stilton-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
labels:<br />
run: stilton-cheese<br />
name: stilton-cheese<br />
namespace: default<br />
spec:<br />
replicas: 1<br />
selector:<br />
matchLabels:<br />
run: stilton-cheese<br />
template:<br />
metadata:<br />
labels:<br />
run: stilton-cheese<br />
spec:<br />
containers:<br />
- image: errm/cheese:stilton<br />
imagePullPolicy: IfNotPresent<br />
name: stilton-cheese<br />
ports:<br />
- containerPort: 80<br />
protocol: TCP<br />
dnsPolicy: ClusterFirst<br />
restartPolicy: Always<br />
EOF<br />
</pre><br />
 $ kubectl create --validate -f deploy-stilton-cheese.yml<br />
<br />
* Create the Echo Server Service:<br />
<pre><br />
$ cat << EOF >svc-echoserver.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
labels:<br />
run: echoserver<br />
name: echoserver<br />
namespace: default<br />
spec:<br />
externalTrafficPolicy: Cluster<br />
ports:<br />
- nodePort: 31116<br />
port: 8080<br />
protocol: TCP<br />
targetPort: 8080<br />
selector:<br />
run: echoserver<br />
sessionAffinity: None<br />
type: NodePort<br />
status:<br />
loadBalancer: {}<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-echoserver.yml<br />
<br />
* Create the Cheddar cheese Service:<br />
<pre><br />
$ cat << EOF >svc-cheddar-cheese.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
labels:<br />
run: cheddar-cheese<br />
name: cheddar-cheese<br />
namespace: default<br />
spec:<br />
externalTrafficPolicy: Cluster<br />
ports:<br />
- nodePort: 32467<br />
port: 80<br />
protocol: TCP<br />
targetPort: 80<br />
selector:<br />
run: cheddar-cheese<br />
sessionAffinity: None<br />
type: NodePort<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-cheddar-cheese.yml<br />
<br />
* Create the Stilton cheese Service:<br />
<pre><br />
$ cat << EOF >svc-stilton-cheese.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
labels:<br />
run: stilton-cheese<br />
name: stilton-cheese<br />
namespace: default<br />
spec:<br />
externalTrafficPolicy: Cluster<br />
ports:<br />
- nodePort: 30197<br />
port: 80<br />
protocol: TCP<br />
targetPort: 80<br />
selector:<br />
run: stilton-cheese<br />
sessionAffinity: None<br />
type: NodePort<br />
status:<br />
loadBalancer: {}<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-stilton-cheese.yml<br />
<br />
* Create the Ingress for the above Services:<br />
<pre><br />
$ cat << EOF >ingress-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
name: ingress-cheese<br />
annotations:<br />
nginx.ingress.kubernetes.io/rewrite-target: /<br />
spec:<br />
backend:<br />
serviceName: default-http-backend<br />
servicePort: 80<br />
rules:<br />
- host: myminikube.info<br />
http:<br />
paths:<br />
- path: /<br />
backend:<br />
serviceName: echoserver<br />
servicePort: 8080<br />
- host: cheeses.all<br />
http:<br />
paths:<br />
- path: /stilton<br />
backend:<br />
serviceName: stilton-cheese<br />
servicePort: 80<br />
- path: /cheddar<br />
backend:<br />
serviceName: cheddar-cheese<br />
servicePort: 80<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f ingress-cheese.yml<br />
<br />
* Check that everything is up:<br />
<pre><br />
$ kubectl get all<br />
NAME READY STATUS RESTARTS AGE<br />
pod/cheddar-cheese-d6d6587c7-4bgcz 1/1 Running 0 12m<br />
pod/echoserver-55f97d5bff-pdv65 1/1 Running 0 12m<br />
pod/stilton-cheese-6d64cbc79-g7h4w 1/1 Running 0 12m<br />
<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
service/cheddar-cheese NodePort 10.109.238.92 <none> 80:32467/TCP 12m<br />
service/echoserver NodePort 10.98.60.194 <none> 8080:31116/TCP 12m<br />
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h<br />
service/stilton-cheese NodePort 10.108.175.207 <none> 80:30197/TCP 12m<br />
<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
deployment.apps/cheddar-cheese 1 1 1 1 12m<br />
deployment.apps/echoserver 1 1 1 1 12m<br />
deployment.apps/stilton-cheese 1 1 1 1 12m<br />
<br />
NAME DESIRED CURRENT READY AGE<br />
replicaset.apps/cheddar-cheese-d6d6587c7 1 1 1 12m<br />
replicaset.apps/echoserver-55f97d5bff 1 1 1 12m<br />
replicaset.apps/stilton-cheese-6d64cbc79 1 1 1 12m<br />
<br />
$ kubectl get ing<br />
NAME HOSTS ADDRESS PORTS AGE<br />
ingress-cheese myminikube.info,cheeses.all 10.0.2.15 80 12m<br />
</pre><br />
<br />
* Add your host aliases:<br />
$ echo "$(minikube ip) myminikube.info cheeses.all" | sudo tee -a /etc/hosts<br />
<br />
* Now, either using your browser or [[curl]], check that you can reach all of the endpoints defined in the Ingress:<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null cheeses.all/cheddar/ # Should return '200'<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null cheeses.all/stilton/ # Should return '200'<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null myminikube.info # Should return '200'<br />
<br />
* You can also see the Nginx logs for the above requests with:<br />
$ kubectl --namespace kube-system logs \<br />
--selector app.kubernetes.io/name=nginx-ingress-controller<br />
<br />
* You can also view the Nginx configuration file (and the settings created by the above Ingress) with:<br />
$ NGINX_POD=$(kubectl --namespace kube-system get pods \<br />
--selector app.kubernetes.io/name=nginx-ingress-controller \<br />
--output jsonpath='{.items[0].metadata.name}')<br />
$ kubectl --namespace kube-system exec -it ${NGINX_POD} -- cat /etc/nginx/nginx.conf<br />
<br />
* Get the version of the Nginx Ingress controller installed:<br />
<pre><br />
$ kubectl --namespace kube-system exec -it ${NGINX_POD} -- /nginx-ingress-controller --version<br />
-------------------------------------------------------------------------------<br />
NGINX Ingress controller<br />
Release: 0.19.0<br />
Build: git-05025d6<br />
Repository: https://github.com/kubernetes/ingress-nginx.git<br />
-------------------------------------------------------------------------------<br />
</pre><br />
<br />
==Kubectl==<br />
<br />
<code>kubectl</code> controls the Kubernetes cluster manager.<br />
<br />
* View your current configuration:<br />
$ kubectl config view<br />
<br />
* Switch between clusters:<br />
$ kubectl config use-context <context_name><br />
<br />
* Remove a cluster:<br />
$ kubectl config unset contexts.<context_name><br />
$ kubectl config unset users.<user_name><br />
$ kubectl config unset clusters.<cluster_name><br />
<br />
* Sort Pods by age:<br />
 $ kubectl get po --sort-by='{.firstTimestamp}'<br />
$ kubectl get pods --all-namespaces --sort-by=.metadata.creationTimestamp<br />
<br />
* Backup all primitives deployed in a given k8s cluster:<br />
<pre><br />
$ kubectl api-resources --verbs=list --namespaced -o name \<br />
| xargs -n1 -I{} bash -c "kubectl get {} --all-namespaces -oyaml && echo ---" \<br />
> k8s_backup.yaml<br />
</pre><br />
<br />
===kubectl explain===<br />
<br />
;List the fields for supported resources.<br />
<br />
* Get the documentation of a resource (aka "kind") and its fields:<br />
<pre><br />
$ kubectl explain deployment<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
DESCRIPTION:<br />
Deployment enables declarative updates for Pods and ReplicaSets.<br />
<br />
FIELDS:<br />
apiVersion <string><br />
APIVersion defines the versioned schema of this representation of an<br />
object. Servers should convert recognized schemas to the latest internal<br />
value, and may reject unrecognized values. More info:<br />
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources<br />
<br />
kind <string><br />
Kind is a string value representing the REST resource this object<br />
represents. Servers may infer this from the endpoint the client submits<br />
requests to. Cannot be updated. In CamelCase. More info:<br />
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds<br />
<br />
metadata <Object><br />
Standard object metadata.<br />
<br />
spec <Object><br />
Specification of the desired behavior of the Deployment.<br />
<br />
status <Object><br />
Most recently observed status of the Deployment<br />
</pre><br />
<br />
* Get a list of all the resource types and their latest supported version:<br />
<pre><br />
$ for kind in $(kubectl api-resources | tail -n +2 | awk '{print $1}'); do<br />
kubectl explain ${kind};<br />
done | grep -E "^KIND:|^VERSION:"<br />
<br />
KIND: Binding<br />
VERSION: v1<br />
KIND: ComponentStatus<br />
VERSION: v1<br />
KIND: ConfigMap<br />
VERSION: v1<br />
...<br />
</pre><br />
<br />
* Get a list of ''all'' allowable fields for a given primitive:<br />
<pre><br />
$ kubectl explain deployment --recursive | head<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
DESCRIPTION:<br />
Deployment enables declarative updates for Pods and ReplicaSets.<br />
<br />
FIELDS:<br />
apiVersion <string><br />
kind <string><br />
metadata <Object><br />
</pre><br />
<br />
* Get documentation ("man page"-style) for a given field in a given primitive:<br />
<pre><br />
$ kubectl explain deployment.status.availableReplicas<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
FIELD: availableReplicas <integer><br />
<br />
DESCRIPTION:<br />
Total number of available pods (ready for at least minReadySeconds)<br />
targeted by this deployment.<br />
</pre><br />
<br />
===Merge kubeconfig files===<br />
<br />
* Reference which kubeconfig files you wish to merge:<br />
$ export KUBECONFIG=$HOME/.kube/dev.yaml:$HOME/.kube/prod.yaml<br />
<br />
* Flatten them:<br />
$ kubectl config view --flatten >> $HOME/.kube/config<br />
<br />
* Unset:<br />
$ unset KUBECONFIG<br />
<br />
Merge complete.<br />
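<br />
* Verify the merged result (the context names shown will depend on your kubeconfig files):<br />
 $ kubectl config get-contexts<br />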
<br />
==Namespaces==<br />
<br />
See: [https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ Namespaces] in the official documentation.<br />
<br />
; Create a Namespace<br />
<br />
<pre><br />
apiVersion: v1<br />
kind: Namespace<br />
metadata:<br />
name: dev<br />
</pre><br />
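<br />
Assuming the above manifest is saved as, say, <code>namespace-dev.yml</code> (the filename is illustrative), the Namespace can be created and set as the default for the current context:<br />
 $ kubectl create -f namespace-dev.yml<br />
 $ kubectl config set-context $(kubectl config current-context) --namespace=dev<br />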
<br />
==Pods==<br />
<br />
; Create a Pod that has an Init Container<br />
<br />
In this example, I will create a Pod that has one application Container and one Init Container. The init container runs to completion before the application container starts.<br />
<br />
<pre><br />
$ cat << EOF >init-demo.yml<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: init-demo<br />
labels:<br />
app: demo<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx<br />
ports:<br />
- containerPort: 80<br />
volumeMounts:<br />
- name: workdir<br />
mountPath: /usr/share/nginx/html<br />
# These containers are run during pod initialization<br />
initContainers:<br />
- name: install<br />
image: busybox<br />
command:<br />
- wget<br />
- "-O"<br />
- "/work-dir/index.html"<br />
- https://example.com<br />
volumeMounts:<br />
- name: workdir<br />
mountPath: "/work-dir"<br />
dnsPolicy: Default<br />
volumes:<br />
- name: workdir<br />
emptyDir: {}<br />
EOF<br />
</pre><br />
<br />
The above Pod YAML will first create the init container from the busybox image, which downloads the HTML of the example.com website and saves it to a file (<code>index.html</code>) on the Pod volume called "workdir". After the init container completes, the Nginx container starts and serves that <code>index.html</code> on port 80 (the file ends up at <code>/usr/share/nginx/html/index.html</code> inside the Nginx container via the volume mount).<br />
<br />
* Now, create this Pod:<br />
$ kubectl create --validate -f init-demo.yml<br />
<br />
* Create a Service:<br />
<pre><br />
$ cat << EOF >example.yml<br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
name: example<br />
spec:<br />
ports:<br />
- port: 8000<br />
targetPort: 80<br />
protocol: TCP<br />
selector:<br />
app: demo<br />
EOF<br />
</pre><br />
<br />
* Create the Service, then check that we can get the header of the copied <nowiki>https://example.com</nowiki> page via the Service's ClusterIP:<br />
 $ kubectl create --validate -f example.yml<br />
 $ curl -sI $(kubectl get svc/example -o jsonpath='{.spec.clusterIP}'):8000 | grep ^HTTP<br />
HTTP/1.1 200 OK<br />
<br />
==Deployments==<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Deployment]'' controller provides declarative updates for Pods and ReplicaSets.<br />
<br />
You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.<br />
<br />
; Creating a Deployment<br />
<br />
The following is an example of a Deployment. It creates a ReplicaSet to bring up three [https://hub.docker.com/_/nginx/ Nginx] Pods:<br />
<pre><br />
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
name: nginx-deployment<br />
labels:<br />
app: nginx<br />
spec:<br />
replicas: 3<br />
selector:<br />
matchLabels:<br />
app: nginx<br />
template:<br />
metadata:<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
</pre><br />
<br />
* Check the syntax of the Deployment (YAML):<br />
$ kubectl create -f nginx-deployment.yml --dry-run<br />
deployment.apps/nginx-deployment created (dry run)<br />
<br />
* Create the Deployment:<br />
$ kubectl create --record -f nginx-deployment.yml <br />
deployment "nginx-deployment" created<br />
Note: By appending <code>--record</code> to the above command, we are telling the API to record the current command in the annotations of the created or updated resource. This is useful for future review, such as investigating which commands were executed in each Deployment revision.<br />
<br />
* Get information about our Deployment:<br />
$ kubectl get deployments<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment 3 3 3 3 24s<br />
<br />
$ kubectl describe deployment/nginx-deployment<br />
<pre><br />
Name: nginx-deployment<br />
Namespace: default<br />
CreationTimestamp: Tue, 30 Jan 2018 23:28:43 +0000<br />
Labels: app=nginx<br />
Annotations: deployment.kubernetes.io/revision=1<br />
kubernetes.io/change-cause=kubectl create --record=true --filename=nginx-deployment.yml<br />
Selector: app=nginx<br />
Replicas: 3 desired | 3 updated | 3 total | 0 available | 3 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 25% max unavailable, 25% max surge<br />
Pod Template:<br />
Labels: app=nginx<br />
Containers:<br />
nginx:<br />
Image: nginx:1.7.9<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Conditions:<br />
Type Status Reason<br />
---- ------ ------<br />
Available False MinimumReplicasUnavailable<br />
Progressing True ReplicaSetUpdated<br />
OldReplicaSets: <none><br />
NewReplicaSet: nginx-deployment-6c54bd5869 (3/3 replicas created)<br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal ScalingReplicaSet 28s deployment-controller Scaled up replica set nginx-deployment-6c54bd5869 to 3<br />
</pre><br />
<br />
* Get information about the ReplicaSet created by the above Deployment:<br />
$ kubectl get rs<br />
NAME DESIRED CURRENT READY AGE<br />
nginx-deployment-6c54bd5869 3 3 3 3m<br />
<br />
$ kubectl describe rs/nginx-deployment-6c54bd5869<br />
<pre><br />
Name: nginx-deployment-6c54bd5869<br />
Namespace: default<br />
Selector: app=nginx,pod-template-hash=2710681425<br />
Labels: app=nginx<br />
pod-template-hash=2710681425<br />
Annotations: deployment.kubernetes.io/desired-replicas=3<br />
deployment.kubernetes.io/max-replicas=4<br />
deployment.kubernetes.io/revision=1<br />
kubernetes.io/change-cause=kubectl create --record=true --filename=nginx-deployment.yml<br />
Controlled By: Deployment/nginx-deployment<br />
Replicas: 3 current / 3 desired<br />
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed<br />
Pod Template:<br />
Labels: app=nginx<br />
pod-template-hash=2710681425<br />
Containers:<br />
nginx:<br />
Image: nginx:1.7.9<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-k9mh4<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-pphjt<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-n4fj5<br />
</pre><br />
<br />
* Get information about the Pods created by this Deployment:<br />
$ kubectl get pods --show-labels -l app=nginx -o wide<br />
NAME READY STATUS RESTARTS AGE IP NODE LABELS<br />
nginx-deployment-6c54bd5869-k9mh4 1/1 Running 0 5m 10.244.1.5 k8s.worker1.local app=nginx,pod-template-hash=2710681425<br />
nginx-deployment-6c54bd5869-n4fj5 1/1 Running 0 5m 10.244.1.6 k8s.worker2.local app=nginx,pod-template-hash=2710681425<br />
nginx-deployment-6c54bd5869-pphjt 1/1 Running 0 5m 10.244.1.7 k8s.worker3.local app=nginx,pod-template-hash=2710681425<br />
<br />
;Updating a Deployment<br />
<br />
Note: A Deployment's rollout is triggered if, and only if, the Deployment's pod template (that is, <code>.spec.template</code>) is changed (for example, if the labels or container images of the template are updated). Other updates, such as scaling the Deployment, do not trigger a rollout.<br />
<br />
Suppose that we want to update the Nginx Pods in the above Deployment to use the <code>nginx:1.9.1</code> image instead of the <code>nginx:1.7.9</code> image.<br />
<br />
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
deployment "nginx-deployment" image updated<br />
<br />
Alternatively, we can edit the Deployment and change <code>.spec.template.spec.containers[0].image</code> from <code>nginx:1.7.9</code> to <code>nginx:1.9.1</code>:<br />
<br />
$ kubectl edit deployment/nginx-deployment<br />
deployment "nginx-deployment" edited<br />
<br />
* Check on the rollout status:<br />
<pre><br />
$ kubectl rollout status deployment/nginx-deployment<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 old replicas are pending termination...<br />
Waiting for rollout to finish: 1 old replicas are pending termination...<br />
deployment "nginx-deployment" successfully rolled out<br />
</pre><br />
<br />
* Get information about the updated Deployment:<br />
$ kubectl get deploy<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment 3 3 3 3 18m<br />
<br />
$ kubectl get rs<br />
NAME DESIRED CURRENT READY AGE<br />
nginx-deployment-5964dfd755 3 3 3 1m # <- new ReplicaSet using nginx:1.9.1<br />
nginx-deployment-6c54bd5869 0 0 0 17m # <- old ReplicaSet using nginx:1.7.9<br />
<br />
$ kubectl rollout history deployment/nginx-deployment<br />
deployments "nginx-deployment"<br />
REVISION CHANGE-CAUSE<br />
1 kubectl create --record=true --filename=nginx-deployment.yml<br />
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
<br />
$ kubectl rollout history deployment/nginx-deployment --revision=2<br />
<br />
deployments "nginx-deployment" with revision #2<br />
Pod Template:<br />
Labels: app=nginx<br />
pod-template-hash=1520898311<br />
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
Containers:<br />
nginx:<br />
Image: nginx:1.9.1<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
<br />
; Rolling back to a previous revision<br />
<br />
Undo the current rollout and roll back to the previous revision:<br />
$ kubectl rollout undo deployment/nginx-deployment<br />
deployment "nginx-deployment" rolled back<br />
<br />
Alternatively, you can roll back to a specific revision by specifying it with <code>--to-revision</code>:<br />
$ kubectl rollout undo deployment/nginx-deployment --to-revision=1<br />
deployment "nginx-deployment" rolled back<br />
<br />
==Volume management==<br />
On-disk files in a container are ephemeral, which presents some problems for non-trivial applications when running in containers. First, when a container crashes, kubelet will restart it, but the files will be lost (i.e., the container starts with a clean state). Second, when running containers together in a Pod it is often necessary to share files between those containers. The Kubernetes ''[https://kubernetes.io/docs/concepts/storage/volumes/ Volumes]'' abstraction solves both of these problems. A Volume is essentially a directory backed by a storage medium. The storage medium and its content are determined by the Volume Type.<br />
<br />
In Kubernetes, a Volume is attached to a Pod and shared among the containers of that Pod. The Volume has the same life span as the Pod, and it outlives the containers of the Pod &mdash; this allows data to be preserved across container restarts.<br />
<br />
Kubernetes resolves the problem of persistent storage with the Persistent Volume subsystem, which provides APIs for users and administrators to manage and consume storage. To manage the Volume, it uses the PersistentVolume (PV) API resource type, and to consume it, it uses the PersistentVolumeClaim (PVC) API resource type.<br />
<br />
; [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes PersistentVolume] (PV) : a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.<br />
<br />
; [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims PersistentVolumeClaim] (PVC) : a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Persistent Volume Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).<br />
<br />
A Persistent Volume is network-attached storage in the cluster, which is provisioned by the administrator.<br />
<br />
Persistent Volumes can be provisioned statically by the administrator, or dynamically, based on the StorageClass resource. A StorageClass contains pre-defined provisioners and parameters to create a Persistent Volume.<br />
<br />
A PersistentVolumeClaim (PVC) is a request for storage by a user. Users request Persistent Volume resources based on size, access modes, etc. Once a suitable Persistent Volume is found, it is bound to a Persistent Volume Claim. After a successful bind, the Persistent Volume Claim resource can be used in a Pod. Once a user finishes its work, the attached Persistent Volumes can be released. The underlying Persistent Volumes can then be reclaimed and recycled for future usage. See [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims Persistent Volumes] for details.<br />
<br />
;Access Modes<br />
* Each of the following access modes ''must'' be supported by the storage resource provider (e.g., NFS, AWS EBS, etc.) if they are to be used.<br />
* ReadWriteOnce (RWO) &mdash; volume can be mounted as read/write by one node only.<br />
* ReadOnlyMany (ROX) &mdash; volume can be mounted read-only by many nodes.<br />
* ReadWriteMany (RWX) &mdash; volume can be mounted read/write by many nodes.<br />
A volume can only be mounted using one access mode at a time, regardless of the modes that are supported.<br />
<br />
; Example #1 - Using Host Volumes<br />
As an example of how to use volumes, we can modify our previous "webserver" Deployment (see above) to look like the following:<br />
<br />
$ cat webserver.yml<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
name: webserver<br />
spec:<br />
replicas: 3<br />
template:<br />
metadata:<br />
labels:<br />
app: webserver<br />
spec:<br />
containers:<br />
- name: webserver<br />
image: nginx:alpine<br />
ports:<br />
- containerPort: 80<br />
volumeMounts:<br />
- name: hostvol<br />
mountPath: /usr/share/nginx/html<br />
volumes:<br />
- name: hostvol<br />
hostPath:<br />
path: /home/docker/vol<br />
</pre><br />
<br />
And use the same Service:<br />
$ cat webserver-svc.yml<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
name: web-service<br />
labels:<br />
run: web-service<br />
spec:<br />
type: NodePort<br />
ports:<br />
- port: 80<br />
protocol: TCP<br />
selector:<br />
app: webserver<br />
</pre><br />
<br />
Then create the deployment and service:<br />
$ kubectl create -f webserver.yml<br />
$ kubectl create -f webserver-svc.yml<br />
<br />
Then, SSH into the minikube VM and run the following commands:<br />
$ minikube ssh<br />
minikube> mkdir -p /home/docker/vol<br />
minikube> echo "Christoph testing" > /home/docker/vol/index.html<br />
minikube> exit<br />
<br />
Get the webserver IP and port:<br />
$ minikube ip<br />
192.168.99.100<br />
$ kubectl get svc/web-service -o json | jq '.spec.ports[].nodePort'<br />
32610<br />
# OR<br />
$ minikube service web-service --url<br />
<nowiki>http://192.168.99.100:32610</nowiki><br />
<br />
$ curl <nowiki>http://192.168.99.100:32610</nowiki><br />
Christoph testing<br />
<br />
; Example #2 - Using NFS<br />
<br />
* First, set up a server to act as your NFS server (e.g., <code>sudo apt-get install -y nfs-kernel-server</code>).<br />
* On your NFS server, do the following:<br />
$ mkdir -p /var/nfs/general<br />
$ cat << EOF >>/etc/exports<br />
/var/nfs/general 10.100.1.2(rw,sync,no_subtree_check) 10.100.1.3(rw,sync,no_subtree_check) 10.100.1.4(rw,sync,no_subtree_check)<br />
EOF<br />
where the <code>10.x</code> IPs are the private IPs of your k8s nodes (both Master and Worker nodes).<br />
* Make sure to install <code>nfs-common</code> on each of the k8s nodes that will be connecting to the NFS server.<br />
<br />
Now, on the k8s Master node, create a Persistent Volume (PV) and Persistent Volume Claim (PVC):<br />
<br />
* Create a Persistent Volume (PV):<br />
$ cat << EOF >pv.yml<br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
name: mypv<br />
spec:<br />
capacity:<br />
storage: 1Gi<br />
volumeMode: Filesystem<br />
accessModes:<br />
- ReadWriteMany<br />
persistentVolumeReclaimPolicy: Recycle<br />
nfs:<br />
path: /var/nfs/general<br />
server: 10.100.1.10 # NFS Server's private IP<br />
readOnly: false<br />
EOF<br />
$ kubectl create --validate -f pv.yml<br />
$ kubectl get pv<br />
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE<br />
mypv 1Gi RWX Recycle Available<br />
* Create a Persistent Volume Claim (PVC):<br />
$ cat << EOF >pvc.yml<br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
name: nfs-pvc<br />
spec:<br />
accessModes:<br />
- ReadWriteMany<br />
resources:<br />
requests:<br />
storage: 1Gi<br />
EOF<br />
$ kubectl create --validate -f pvc.yml<br />
$ kubectl get pvc<br />
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE<br />
nfs-pvc Bound mypv 1Gi RWX<br />
$ kubectl get pv<br />
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE<br />
mypv 1Gi RWX Recycle Bound default/nfs-pvc 11m<br />
<br />
* Create a Pod:<br />
$ cat << EOF >nfs-pod.yml <br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: nfs-pod<br />
labels:<br />
name: nfs-pod<br />
spec:<br />
containers:<br />
- name: nfs-ctn<br />
image: busybox<br />
command:<br />
- sleep<br />
- "3600"<br />
volumeMounts:<br />
- name: nfsvol<br />
mountPath: /tmp<br />
restartPolicy: Always<br />
securityContext:<br />
fsGroup: 65534<br />
runAsUser: 65534<br />
volumes:<br />
- name: nfsvol<br />
persistentVolumeClaim:<br />
claimName: nfs-pvc<br />
EOF<br />
$ kubectl create --validate -f nfs-pod.yml<br />
$ kubectl get pods -o wide<br />
 NAME      READY     STATUS    RESTARTS   AGE       IP            NODE<br />
 nfs-pod   1/1       Running   0          1m        10.244.2.22   k8s.worker01.local<br />
<br />
* Get a shell from the <code>nfs-pod</code> Pod:<br />
$ kubectl exec -it nfs-pod -- sh<br />
/ $ df -h<br />
Filesystem Size Used Available Use% Mounted on<br />
172.31.119.58:/var/nfs/general<br />
19.3G 1.8G 17.5G 9% /tmp<br />
...<br />
/ $ touch /tmp/this-is-from-the-pod<br />
<br />
* On the NFS server:<br />
$ ls -l /var/nfs/general/<br />
total 0<br />
-rw-r--r-- 1 nobody nogroup 0 Jan 18 23:32 this-is-from-the-pod<br />
<br />
It works!<br />
<br />
==ConfigMaps and Secrets==<br />
While deploying an application, we may need to pass runtime parameters such as configuration details, passwords, etc. For example, let's assume we need to deploy ten different applications for our customers, and, for each customer, we just need to change the name of the company in the UI. Instead of creating ten different Docker images, one per customer, we can use a template image and pass each customer's name as a runtime parameter. In such cases, we can use the ConfigMap API resource. Similarly, when we want to pass sensitive information, we can use the Secret API resource. Think ''Secrets'' (for confidential data) and ''ConfigMaps'' (for non-confidential data).<br />
<br />
[https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/ ConfigMaps] allow you to decouple configuration artifacts from image content to keep containerized applications portable. Using ConfigMaps, we can pass configuration details as key-value pairs, which can be later consumed by Pods or any other system components, such as controllers. We can create ConfigMaps in two ways:<br />
<br />
* From literal values; and<br />
* From files.<br />
<br />
<br />
;ConfigMaps<br />
<br />
* Create a ConfigMap:<br />
$ kubectl create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2<br />
configmap "my-config" created<br />
$ kubectl get configmaps my-config -o yaml<br />
<pre><br />
apiVersion: v1<br />
data:<br />
key1: value1<br />
key2: value2<br />
kind: ConfigMap<br />
metadata:<br />
creationTimestamp: 2018-01-11T23:57:44Z<br />
name: my-config<br />
namespace: default<br />
resourceVersion: "117110"<br />
selfLink: /api/v1/namespaces/default/configmaps/my-config<br />
uid: 37a43e39-f72b-11e7-8370-08002721601f<br />
</pre><br />
$ kubectl describe configmap/my-config<br />
<pre><br />
Name: my-config<br />
Namespace: default<br />
Labels: <none><br />
Annotations: <none><br />
<br />
Data<br />
====<br />
key2:<br />
----<br />
value2<br />
key1:<br />
----<br />
value1<br />
Events: <none><br />
</pre><br />
<br />
; Create a ConfigMap from a configuration file<br />
<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: ConfigMap<br />
metadata:<br />
name: customer1<br />
data:<br />
TEXT1: Customer1_Company<br />
TEXT2: Welcomes You<br />
COMPANY: Customer1 Company Technology, LLC.<br />
EOF<br />
</pre><br />
<br />
We can get the values of the given keys as environment variables inside a Pod. In the following example, while creating the Deployment, we assign values to environment variables from the customer1 ConfigMap:<br />
<pre><br />
....<br />
containers:<br />
- name: my-app<br />
image: foobar<br />
env:<br />
- name: MONGODB_HOST<br />
value: mongodb<br />
- name: TEXT1<br />
valueFrom:<br />
configMapKeyRef:<br />
name: customer1<br />
key: TEXT1<br />
- name: TEXT2<br />
valueFrom:<br />
configMapKeyRef:<br />
name: customer1<br />
key: TEXT2<br />
- name: COMPANY<br />
valueFrom:<br />
configMapKeyRef:<br />
name: customer1<br />
key: COMPANY<br />
....<br />
</pre><br />
With the above, we will get the <code>TEXT1</code> environment variable set to <code>Customer1_Company</code>, <code>TEXT2</code> environment variable set to <code>Welcomes You</code>, and so on.<br />
<br />
We can also mount a ConfigMap as a Volume inside a Pod, as sketched below. For each key, we will see a file in the mount path, and the content of that file becomes the respective key's value. For details, see [https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#adding-configmap-data-to-a-volume here].<br />
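<br />
A minimal sketch of the relevant Pod spec fields (the volume name and mount path here are illustrative):<br />
<pre><br />
....<br />
  containers:<br />
  - name: my-app<br />
    image: foobar<br />
    volumeMounts:<br />
    - name: config-vol<br />
      mountPath: /etc/config<br />
  volumes:<br />
  - name: config-vol<br />
    configMap:<br />
      name: customer1<br />
....<br />
</pre><br />
Each key (<code>TEXT1</code>, <code>TEXT2</code>, <code>COMPANY</code>) then appears as a file under <code>/etc/config</code>.<br />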
<br />
You can also use ConfigMaps to configure your cluster to use, for example, 8.8.8.8 and 8.8.4.4 as its upstream DNS servers:<br />
<pre><br />
kind: ConfigMap<br />
apiVersion: v1<br />
metadata:<br />
name: kube-dns<br />
namespace: kube-system<br />
data:<br />
upstreamNameservers: |<br />
["8.8.8.8", "8.8.4.4"]<br />
</pre><br />
<br />
; Secrets<br />
<br />
Objects of type [https://kubernetes.io/docs/concepts/configuration/secret/ Secret] are intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a Secret is safer and more flexible than putting it verbatim in a pod definition or in a docker image.<br />
<br />
As an example, assume that we have a Wordpress blog application, in which our <code>wordpress</code> frontend connects to the [[MySQL]] database backend using a password. While creating the Deployment for <code>wordpress</code>, we can put the MySQL password in the Deployment's YAML file, but the password would not be protected. The password would be available to anyone who has access to the configuration file.<br />
<br />
In situations such as the one we just mentioned, the Secret object can help. With Secrets, we can share sensitive information like passwords, tokens, or keys in the form of key-value pairs, similar to ConfigMaps; thus, we can control how the information in a Secret is used, reducing the risk for accidental exposures. In Deployments or other system components, the Secret object is ''referenced'', without exposing its content.<br />
<br />
It is important to keep in mind that the Secret data is stored as plain text inside etcd. Administrators must limit the access to the API Server and etcd.<br />
<br />
To create a Secret using the <code>kubectl create secret</code> command, we need to first create a file with a password, and then pass it as an argument.<br />
<br />
* Create a file with your MySQL password:<br />
$ echo mysqlpasswd | tr -d '\n' > password.txt<br />
<br />
* Create the ''Secret'':<br />
$ kubectl create secret generic mysql-passwd --from-file=password.txt<br />
$ kubectl describe secret/mysql-passwd<br />
<pre><br />
Name: mysql-passwd<br />
Namespace: default<br />
Labels: <none><br />
Annotations: <none><br />
<br />
Type: Opaque<br />
<br />
Data<br />
====<br />
password.txt: 11 bytes<br />
</pre><br />
<br />
We can also create a Secret manually, using a YAML configuration file. With Secrets, each data value must be encoded using base64. If we want to have a configuration file for our Secret, we must first get the base64 encoding of our password:<br />
<br />
$ cat password.txt | base64<br />
bXlzcWxwYXNzd2Q=<br />
<br />
and then use it in the configuration file:<br />
<pre><br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
name: mysql-passwd<br />
type: Opaque<br />
data:<br />
password: bXlzcWxwYXNzd2Q=<br />
</pre><br />
Note that base64 encoding does not do any encryption and anyone can easily decode it:<br />
<br />
$ echo "bXlzcWxwYXNzd2Q=" | base64 -d # => mysqlpasswd<br />
<br />
Therefore, make sure you do not commit a Secret's configuration file in the source code.<br />
<br />
We can get Secrets to be used by containers in a Pod by mounting them as data volumes, or by exposing them as environment variables.<br />
<br />
We can reference a Secret and assign the value of its key as an environment variable (<code>WORDPRESS_DB_PASSWORD</code>):<br />
<pre><br />
.....<br />
spec:<br />
containers:<br />
- image: wordpress:4.7.3-apache<br />
name: wordpress<br />
env:<br />
- name: WORDPRESS_DB_HOST<br />
value: wordpress-mysql<br />
- name: WORDPRESS_DB_PASSWORD<br />
valueFrom:<br />
secretKeyRef:<br />
              name: mysql-passwd<br />
key: password.txt<br />
.....<br />
</pre><br />
<br />
We can also mount a Secret as a Volume inside a Pod, as sketched below. A file is created for each key mentioned in the Secret, and its content is the respective value. See [https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod here] for details.<br />
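<br />
A minimal sketch of mounting the <code>mysql-passwd</code> Secret (the volume name and mount path are illustrative):<br />
<pre><br />
....<br />
  containers:<br />
  - image: wordpress:4.7.3-apache<br />
    name: wordpress<br />
    volumeMounts:<br />
    - name: secret-vol<br />
      mountPath: /etc/secrets<br />
      readOnly: true<br />
  volumes:<br />
  - name: secret-vol<br />
    secret:<br />
      secretName: mysql-passwd<br />
....<br />
</pre><br />
The password would then be readable inside the container at <code>/etc/secrets/password.txt</code>.<br />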
<br />
==Ingress==<br />
Among the ServiceTypes mentioned earlier, NodePort and LoadBalancer are the most often used. For the LoadBalancer ServiceType, we need to have the support from the underlying infrastructure. Even after having the support, we may not want to use it for every Service, as LoadBalancer resources are limited and they can increase costs significantly. Managing the NodePort ServiceType can also be tricky at times, as we need to keep updating our proxy settings and keep track of the assigned ports. In this section, we will explore the Ingress API object, which is another method we can use to access our applications from the external world.<br />
<br />
An ''[https://kubernetes.io/docs/concepts/services-networking/ingress/ Ingress]'' is a collection of rules that allow inbound connections to reach the cluster Services. With Services, routing rules are attached to a given Service. They exist for as long as the Service exists. If we can somehow decouple the routing rules from the application, we can then update our application without worrying about its external access. This can be done using the Ingress resource. Ingress can provide load balancing, SSL/TLS termination, and name-based virtual hosting and/or routing.<br />
<br />
To allow the inbound connection to reach the cluster Services, Ingress configures a Layer 7 HTTP load balancer for Services and provides the following:<br />
<br />
* TLS (Transport Layer Security)<br />
* Name-based virtual hosting <br />
* Path-based routing<br />
* Custom rules.<br />
<br />
With Ingress, users do not connect directly to a Service. Users reach the Ingress endpoint, and, from there, the request is forwarded to the respective Service. You can see an example Ingress definition below:<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
name: web-ingress<br />
spec:<br />
rules:<br />
- host: blue.example.com<br />
http:<br />
paths:<br />
- backend: <br />
serviceName: blue-service<br />
servicePort: 80<br />
- host: green.example.com<br />
http:<br />
paths:<br />
- backend:<br />
serviceName: green-service<br />
servicePort: 80<br />
</pre><br />
<br />
According to the example just provided, user requests to both <code>blue.example.com</code> and <code>green.example.com</code> would go to the same Ingress endpoint, and, from there, they would be forwarded to <code>blue-service</code> and <code>green-service</code>, respectively. Here, we have seen an example of a Name-Based Virtual Hosting Ingress rule. <br />
<br />
We can also have Fan-Out Ingress rules, in which requests to <code>example.com/blue</code> and <code>example.com/green</code> are forwarded to <code>blue-service</code> and <code>green-service</code>, respectively, as sketched below.<br />
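<br />
A sketch of such a Fan-Out rule, using the same API version as the example above:<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
  name: fanout-ingress<br />
spec:<br />
  rules:<br />
  - host: example.com<br />
    http:<br />
      paths:<br />
      - path: /blue<br />
        backend:<br />
          serviceName: blue-service<br />
          servicePort: 80<br />
      - path: /green<br />
        backend:<br />
          serviceName: green-service<br />
          servicePort: 80<br />
</pre><br />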
<br />
To secure an Ingress, you must create a ''Secret''. The TLS secret must contain keys named <code>tls.crt</code> and <code>tls.key</code>, which contain the certificate and private key to use for TLS.<br />
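<br />
For example, assuming an existing certificate/key pair, such a Secret can be created and then referenced from the Ingress spec as follows (the Secret name is illustrative):<br />
 $ kubectl create secret tls web-tls --cert=tls.crt --key=tls.key<br />
<pre><br />
spec:<br />
  tls:<br />
  - hosts:<br />
    - blue.example.com<br />
    secretName: web-tls<br />
</pre><br />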
<br />
The Ingress resource does not do any request forwarding by itself. All of the magic is done using the ''Ingress Controller''.<br />
<br />
; Ingress Controller<br />
<br />
An Ingress Controller is an application which watches the Master Node's API Server for changes in the Ingress resources and updates the Layer 7 load balancer accordingly. Kubernetes has different Ingress Controllers, and, if needed, we can also build our own. GCE L7 Load Balancer and Nginx Ingress Controller are examples of Ingress Controllers.<br />
<br />
Minikube v0.14.0 and above ships with the Nginx Ingress Controller as an add-on. It can be easily enabled by running the following command:<br />
<br />
$ minikube addons enable ingress<br />
<br />
Once the Ingress Controller is deployed, we can create an Ingress resource using the <code>kubectl create</code> command. For example, if we create an <code>example-ingress.yml</code> file with the content above, then, we can use the following command to create an Ingress resource:<br />
<br />
$ kubectl create -f example-ingress.yml<br />
<br />
With the Ingress resource we just created, we should now be able to access the blue-service and green-service services using the blue.example.com and green.example.com URLs. As our current setup is on minikube, we will need to map those URLs to the minikube IP in the hosts file on our workstation:<br />
<br />
$ cat /etc/hosts<br />
127.0.0.1 localhost<br />
::1 localhost<br />
192.168.99.100 blue.example.com green.example.com <br />
<br />
Once this is done, we can now open blue.example.com and green.example.com in a browser and access the application.<br />
<br />
==Labels and Selectors==<br />
''[https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ Labels]'' are key-value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key-value labels defined. Each key must be unique for a given object.<br />
<pre><br />
"labels": {<br />
"key1" : "value1",<br />
"key2" : "value2"<br />
}<br />
</pre><br />
<br />
;Syntax and character set<br />
<br />
Labels are key-value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (<code>/</code>). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (<code>[a-z0-9A-Z]</code>) with dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (<code>.</code>), not longer than 253 characters in total, followed by a slash (<code>/</code>). If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (e.g. kube-scheduler, kube-controller-manager, kube-apiserver, kubectl, or other third-party automation) which add labels to end-user objects must specify a prefix. The <code>kubernetes.io/</code> prefix is reserved for Kubernetes core components.<br />
<br />
Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (<code>[a-z0-9A-Z]</code>) with dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between.<br />
<br />
;Label selectors<br />
<br />
Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).<br />
<br />
Via a label selector, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.<br />
<br />
The API currently supports two types of selectors: equality-based and set-based. A label selector can be made of multiple requirements which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical AND (<code>&&</code>) operator.<br />
<br />
An empty label selector (that is, one with zero requirements) selects every object in the collection.<br />
<br />
A null label selector (which is only possible for optional selector fields) selects no objects.<br />
<br />
Note: the label selectors of two controllers must not overlap within a namespace, otherwise they will fight with each other.<br />
Note that labels are not restricted to pods. You can apply them to all sorts of objects, such as nodes or services.<br />
<br />
;Examples<br />
<br />
* Label a given node:<br />
$ kubectl label node k8s.worker1.local network=gigabit<br />
<br />
* Using ''equality-based'' requirements, one may write:<br />
$ kubectl get pods -l environment=production,tier=frontend<br />
<br />
* Using ''set-based'' requirements:<br />
$ kubectl get pods -l 'environment in (production),tier in (frontend)'<br />
<br />
* Implement the OR operator on values:<br />
$ kubectl get pods -l 'environment in (production, qa)'<br />
<br />
* Combine the ''exists'' operator with negative matching (pods that have an <code>environment</code> label whose value is not <code>frontend</code>):<br />
$ kubectl get pods -l 'environment,environment notin (frontend)'<br />
<br />
* Show the current labels on your pods:<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d <none><br />
nfs-pod 1/1 Running 16 6d name=nfs-pod<br />
<br />
* Add a label to an already running/existing pod:<br />
$ kubectl label pods busybox owner=christoph<br />
pod "busybox" labeled<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d owner=christoph<br />
nfs-pod 1/1 Running 16 6d name=nfs-pod<br />
<br />
* Select a pod by its label:<br />
$ kubectl get pods --selector owner=christoph<br />
#~OR~<br />
$ kubectl get pods -l owner=christoph<br />
NAME READY STATUS RESTARTS AGE<br />
busybox 1/1 Running 25 9d<br />
<br />
* Delete/remove a given label from a given pod:<br />
$ kubectl label pod busybox owner-<br />
pod "busybox" labeled<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d <none><br />
<br />
* Get all pods that belong to either the <code>production</code> ''or'' the <code>development</code> environment:<br />
$ kubectl get pods -l 'env in (production, development)'<br />
<br />
; Using Labels to select a Node on which to schedule a Pod:<br />
<br />
* Label a Node that uses SSDs as its primary HDD:<br />
$ kubectl label node k8s.worker1.local hdd=ssd<br />
<br />
<pre><br />
$ cat << EOF >busybox.yml<br />
kind: Pod<br />
apiVersion: v1<br />
metadata:<br />
name: busybox<br />
namespace: default<br />
spec:<br />
containers:<br />
- name: busybox<br />
image: busybox<br />
command:<br />
- sleep<br />
- "300"<br />
imagePullPolicy: IfNotPresent<br />
restartPolicy: Always<br />
nodeSelector: <br />
hdd: ssd<br />
EOF<br />
</pre><br />
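<br />
* Create the Pod and confirm that it was scheduled onto the labeled node:<br />
 $ kubectl create --validate -f busybox.yml<br />
 $ kubectl get pod busybox -o wide<br />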
<br />
==Annotations==<br />
With ''[https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ Annotations]'', we can attach arbitrary, non-identifying metadata to objects, in a key-value format:<br />
<br />
<pre><br />
"annotations": {<br />
"key1" : "value1",<br />
"key2" : "value2"<br />
}<br />
</pre><br />
The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels.<br />
<br />
In contrast to Labels, annotations are not used to identify and select objects. Annotations can be used to:<br />
<br />
* Store build/release IDs, the git branch, etc.<br />
* Phone numbers of persons responsible or directory entries specifying where such information can be found<br />
* Pointers to logging, monitoring, analytics, audit repositories, debugging tools, etc.<br />
* Etc.<br />
<br />
For example, while creating a Deployment, we can add a description like the one below:<br />
<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
name: webserver<br />
annotations:<br />
description: Deployment based PoC dates 12 January 2018<br />
....<br />
....<br />
</pre><br />
<br />
We can look at annotations while describing an object:<br />
<br />
<pre><br />
$ kubectl describe deployment webserver<br />
Name: webserver<br />
Namespace: default<br />
CreationTimestamp: Fri, 12 Jan 2018 13:18:23 -0800<br />
Labels: app=webserver<br />
Annotations: deployment.kubernetes.io/revision=1<br />
description=Deployment based PoC dates 12 January 2018<br />
...<br />
...<br />
</pre><br />
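<br />
Annotations can also be added to (or removed from) a live object with <code>kubectl annotate</code>; for example:<br />
 $ kubectl annotate pod busybox description='my test pod'<br />
 $ kubectl annotate pod busybox description-<br />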
<br />
==Jobs and CronJobs==<br />
<br />
===Jobs===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#what-is-a-job Job]'' creates one or more pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the Job itself is complete. Deleting a Job will cleanup the pods it created.<br />
<br />
A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).<br />
<br />
A Job can also be used to run multiple Pods in parallel.<br />
<br />
; Example<br />
<br />
* Below is an example ''Job'' config. It computes π to 2000 places and prints it out. It takes around 10 seconds to complete.<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
name: pi<br />
spec:<br />
template:<br />
spec:<br />
containers:<br />
- name: pi<br />
image: perl<br />
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]<br />
restartPolicy: Never<br />
backoffLimit: 4<br />
</pre><br />
 $ kubectl create -f ./job-pi.yml<br />
job "pi" created<br />
$ kubectl describe jobs/pi<br />
<pre><br />
Name: pi<br />
Namespace: default<br />
Selector: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
Labels: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
job-name=pi<br />
Annotations: <none><br />
Parallelism: 1<br />
Completions: 1<br />
Start Time: Fri, 12 Jan 2018 13:25:23 -0800<br />
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed<br />
Pod Template:<br />
Labels: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
job-name=pi<br />
Containers:<br />
pi:<br />
Image: perl<br />
Port: <none><br />
Command:<br />
perl<br />
-Mbignum=bpi<br />
-wle<br />
print bpi(2000)<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal SuccessfulCreate 8s job-controller Created pod: pi-rfvvw<br />
</pre><br />
<br />
* Get the result of the Job run (i.e., the value of π):<br />
$ pods=$(kubectl get pods --show-all --selector=job-name=pi --output=jsonpath={.items..metadata.name})<br />
$ echo $pods<br />
pi-rfvvw<br />
$ kubectl logs ${pods}<br />
3.1415926535897932384626433832795028841971693...<br />
<br />
===CronJobs===<br />
<br />
Support for creating ''Jobs'' at specified times/dates (i.e. cron) is available in Kubernetes 1.4. See [https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/ here] for details.<br />
<br />
Below is an example ''CronJob''. Every minute, it runs a simple job that prints the current time and then echoes a "hello" string:<br />
$ cat << EOF >cronjob.yml<br />
apiVersion: batch/v1beta1<br />
kind: CronJob<br />
metadata:<br />
name: hello<br />
spec:<br />
schedule: "*/1 * * * *"<br />
jobTemplate:<br />
spec:<br />
template:<br />
spec:<br />
containers:<br />
- name: hello<br />
image: busybox<br />
args:<br />
- /bin/sh<br />
- -c<br />
- date; echo Hello from the Kubernetes cluster<br />
restartPolicy: OnFailure<br />
EOF<br />
<br />
$ kubectl create -f cronjob.yml<br />
cronjob "hello" created<br />
<br />
$ kubectl get cronjob hello<br />
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE<br />
hello */1 * * * * False 0 <none> 11s<br />
<br />
$ kubectl get jobs --watch<br />
NAME DESIRED SUCCESSFUL AGE<br />
hello-1515793140 1 1 7s<br />
<br />
$ kubectl get cronjob hello<br />
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE<br />
hello */1 * * * * False 0 22s 48s<br />
<br />
$ pods=$(kubectl get pods -a --selector=job-name=hello-1515793140 --output=jsonpath={.items..metadata.name})<br />
$ echo $pods<br />
hello-1515793140-plp8g<br />
<br />
$ kubectl logs $pods<br />
Fri Jan 12 21:39:07 UTC 2018<br />
Hello from the Kubernetes cluster<br />
<br />
* Cleanup<br />
$ kubectl delete cronjob hello<br />
<br />
==Quota Management==<br />
When there are many users sharing a given Kubernetes cluster, there is always a concern for fair usage. To address this concern, administrators can use the ''[https://kubernetes.io/docs/concepts/policy/resource-quotas/ ResourceQuota]'' object, which provides constraints that limit aggregate resource consumption per Namespace.<br />
<br />
We can have the following types of quotas per Namespace (a minimal example manifest follows the list):<br />
<br />
* Compute Resource Quota: We can limit the total sum of compute resources (CPU, memory, etc.) that can be requested in a given Namespace.<br />
* Storage Resource Quota: We can limit the total sum of storage resources (PersistentVolumeClaims, requests.storage, etc.) that can be requested.<br />
* Object Count Quota: We can restrict the number of objects of a given type (pods, ConfigMaps, PersistentVolumeClaims, ReplicationControllers, Services, Secrets, etc.).<br />
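<br />
A minimal ResourceQuota sketch (the namespace and limits shown are illustrative):<br />
<pre><br />
apiVersion: v1<br />
kind: ResourceQuota<br />
metadata:<br />
  name: compute-quota<br />
  namespace: dev<br />
spec:<br />
  hard:<br />
    pods: "10"<br />
    requests.cpu: "4"<br />
    requests.memory: 8Gi<br />
</pre><br />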
<br />
==Daemon Sets==<br />
In some cases, like collecting monitoring data from all nodes, or running a storage daemon on all nodes, etc., we need a specific type of Pod running on all nodes at all times. A ''[https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ DaemonSet]'' is the object that allows us to do just that. <br />
<br />
Whenever a node is added to the cluster, a Pod from a given DaemonSet is created on it. When the node dies, the respective Pods are garbage collected. If a DaemonSet is deleted, all Pods it created are deleted as well.<br />
<br />
Example DaemonSet:<br />
<pre><br />
kind: DaemonSet<br />
apiVersion: apps/v1<br />
metadata:<br />
name: pause-ds<br />
spec:<br />
selector:<br />
matchLabels:<br />
quiet: "pod"<br />
template:<br />
metadata:<br />
labels:<br />
quiet: pod<br />
spec:<br />
tolerations:<br />
- key: node-role.kubernetes.io/master<br />
effect: NoSchedule<br />
containers:<br />
- name: pause-container<br />
image: k8s.gcr.io/pause:2.0<br />
</pre><br />
<br />
==Stateful Sets==<br />
The ''[https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/ StatefulSet]'' controller is used for applications which require a unique identity, such as a stable name, network identity, strict ordering, etc. (for example, a MySQL cluster or an etcd cluster).<br />
<br />
The StatefulSet controller provides identity and guaranteed ordering of deployment and scaling to Pods.<br />
<br />
Note: Before Kubernetes 1.5, the StatefulSet controller was referred to as ''PetSet''.<br />
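<br />
A minimal StatefulSet sketch (it assumes a headless Service named <code>web</code> exists to provide the network identity):<br />
<pre><br />
apiVersion: apps/v1<br />
kind: StatefulSet<br />
metadata:<br />
  name: web<br />
spec:<br />
  serviceName: web<br />
  replicas: 2<br />
  selector:<br />
    matchLabels:<br />
      app: web<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: web<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:alpine<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />
The resulting Pods get stable, ordered names (<code>web-0</code>, <code>web-1</code>) and are created, scaled, and deleted in order.<br />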
<br />
==Role Based Access Control (RBAC)==<br />
''[https://kubernetes.io/docs/admin/authorization/rbac/ Role-based access control]'' (RBAC) is an authorization mechanism for managing permissions around Kubernetes resources.<br />
<br />
Using the RBAC API, we define a role which contains a set of additive permissions. Within a Namespace, a role is defined using the Role object. For a cluster-wide role, we need to use the ClusterRole object.<br />
<br />
Once the roles are defined, we can bind them to a user or a set of users using ''RoleBinding'' and ''ClusterRoleBinding''.<br />
<br />
===Using RBAC with minikube===<br />
<br />
* Start up minikube with RBAC support:<br />
$ minikube start --kubernetes-version=v1.9.0 --extra-config=apiserver.Authorization.Mode=RBAC<br />
<br />
* Setup RBAC:<br />
<pre><br />
$ cat rbac-cluster-role-binding.yml<br />
# kubectl create clusterrolebinding add-on-cluster-admin \<br />
# --clusterrole=cluster-admin --serviceaccount=kube-system:default<br />
#<br />
kind: ClusterRoleBinding<br />
apiVersion: rbac.authorization.k8s.io/v1alpha1<br />
metadata:<br />
name: kube-system-sa<br />
subjects:<br />
- kind: Group<br />
  name: system:serviceaccounts:kube-system<br />
roleRef:<br />
kind: ClusterRole<br />
name: cluster-admin<br />
apiGroup: rbac.authorization.k8s.io<br />
</pre><br />
<br />
<pre><br />
$ cat rbac-setup.yml <br />
apiVersion: v1<br />
kind: Namespace<br />
metadata:<br />
name: rbac<br />
<br />
---<br />
apiVersion: v1<br />
kind: ServiceAccount<br />
metadata:<br />
name: viewer<br />
namespace: rbac<br />
<br />
---<br />
apiVersion: v1<br />
kind: ServiceAccount<br />
metadata:<br />
name: admin<br />
namespace: rbac<br />
</pre><br />
<br />
* Create a Role Binding:<br />
<pre><br />
# kubectl create rolebinding reader-binding \<br />
# --role=reader \<br />
# --serviceaccount=rbac:viewer \<br />
# --namespace=rbac<br />
#<br />
kind: RoleBinding<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
namespace: rbac<br />
name: reader-binding<br />
roleRef:<br />
apiGroup: rbac.authorization.k8s.io<br />
kind: Role<br />
name: reader<br />
subjects:<br />
- kind: ServiceAccount<br />
  name: viewer<br />
  namespace: rbac<br />
</pre><br />
<br />
* Create a Role:<br />
<pre><br />
$ cat rbac-role.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  namespace: rbac<br />
name: reader<br />
rules:<br />
- apiGroups: [""]<br />
resources: ["*"]<br />
verbs: ["get", "watch", "list"]<br />
</pre><br />
<br />
* Create an RBAC "core reader" Role with specific resources and "verbs" (i.e., the "core reader" role can "get"/"list"/etc. on specific resources (e.g., Pods, Jobs, Deployments, etc.):<br />
<pre><br />
$ cat rbac-role-core-reader.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: core-reader<br />
rules:<br />
- apiGroups:<br />
- ""<br />
resources:<br />
- pods<br />
- configmaps<br />
- secrets<br />
verbs:<br />
- get<br />
- watch<br />
- list<br />
- apiGroups:<br />
- batch<br />
- extensions<br />
resources:<br />
- jobs<br />
- deployments<br />
verbs:<br />
- get<br />
- watch<br />
- list<br />
</pre><br />
<br />
* "Gotchas":<br />
<pre><br />
$ cat rbac-gotcha-1.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: gotcha-1<br />
rules:<br />
- nonResourceURLs:<br />
- /healthz<br />
verbs:<br />
- get<br />
- post<br />
- apiGroups:<br />
- batch<br />
- extensions<br />
resources:<br />
- deployments<br />
verbs:<br />
- "*"<br />
</pre><br />
<pre><br />
$ cat rbac-gotcha-2.yml <br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: gotcha-2<br />
rules:<br />
- apiGroups:<br />
- ""<br />
resources:<br />
- secrets<br />
verbs:<br />
- "*"<br />
resourceNames:<br />
- "my_secret"<br />
- apiGroups:<br />
- ""<br />
resources:<br />
  - pods/log<br />
verbs:<br />
- "get"<br />
</pre><br />
<br />
; Privilege escalation<br />
* You cannot create a Role or ClusterRole that grants permissions you do not have.<br />
* You cannot create a RoleBinding or ClusterRoleBinding that binds to a Role with permissions you do not have (unless you have been explicitly given "bind" permission on the role).<br />
<br />
* Grant explicit bind access:<br />
<pre><br />
kind: ClusterRole<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: role-grantor<br />
rules:<br />
- apiGroups: ["rbac.authorization.k8s.io"]<br />
resources: ["rolebindings"]<br />
verbs: ["create"]<br />
- apiGroups: ["rbac.authorization.k8s.io"]<br />
resources: ["clusterroles"]<br />
verbs: ["bind"]<br />
resourceNames: ["admin", "edit", "view"]<br />
</pre><br />
<br />
===Testing RBAC permissions===<br />
<br />
* Example of RBAC not allowing a verb-noun:<br />
<pre><br />
$ kubectl auth can-i create pods<br />
no - Required "container.pods.create" permission.<br />
</pre><br />
<br />
* Example of RBAC allowing a verb-noun:<br />
<pre><br />
$ kubectl auth can-i create pods<br />
yes<br />
</pre><br />
<br />
* A more complex example:<br />
<pre><br />
$ kubectl auth can-i update deployments.apps \<br />
--subresource="scale" --as-group="$group" --as="$user" -n $ns<br />
</pre><br />
<br />
==Federation==<br />
With ''[https://kubernetes.io/docs/concepts/cluster-administration/federation/ Kubernetes Cluster Federation]'', we can manage multiple Kubernetes clusters from a single control plane. We can sync resources across the clusters and have cross-cluster discovery. This allows us to do Deployments across regions and access them using a global DNS record.<br />
<br />
Federation is very useful when we want to build a hybrid solution, in which we can have one cluster running inside our private datacenter and another one on the public cloud. We can also assign weights for each cluster in the Federation, to distribute the load as per our choice.<br />
<br />
==Helm==<br />
To deploy an application, we use different Kubernetes manifests, such as Deployments, Services, Volume Claims, Ingress, etc. Sometimes, it can be tiresome to deploy them one by one. We can bundle all those manifests, after templatizing them, into a well-defined format, along with other metadata. Such a bundle is referred to as a ''Chart''. These Charts can then be served via repositories, such as those that we have for rpm and deb packages. <br />
<br />
''[https://github.com/kubernetes/helm Helm]'' is a package manager (analogous to yum and apt) for Kubernetes, which can install/update/delete those Charts in the Kubernetes cluster.<br />
<br />
Helm has two components:<br />
<br />
* A client called helm, which runs on your workstation; and<br />
* A server called tiller, which runs inside your Kubernetes cluster.<br />
<br />
The client helm connects to the server tiller to manage Charts. Charts submitted for Kubernetes are available [https://github.com/kubernetes/charts here].<br />
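<br />
Basic usage with Helm v2 looks like the following (the chart name is just an example from the stable repository):<br />
 $ helm init          # install tiller into the current cluster<br />
 $ helm repo update<br />
 $ helm install stable/mysql<br />
 $ helm list<br />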
<br />
==Monitoring and logging==<br />
In Kubernetes, we have to collect resource usage data for Pods, Services, nodes, etc., to understand the overall resource consumption and to make decisions about scaling a given application. Two popular Kubernetes monitoring solutions are Heapster and Prometheus.<br />
<br />
[https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/ Heapster] is a cluster-wide aggregator of monitoring and event data, which is natively supported on Kubernetes. <br />
<br />
[https://prometheus.io/ Prometheus], now part of [https://www.cncf.io/ CNCF] (Cloud Native Computing Foundation), can also be used to scrape the resource usage from different Kubernetes components and objects. Using its client libraries, we can also instrument the code of our application.<br />
<br />
Another important aspect of troubleshooting and debugging is logging, in which we collect the logs from different components of a given system. In Kubernetes, we can collect logs from different cluster components, objects, nodes, etc. The most common way to collect logs is with [https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/ Elasticsearch], using [https://www.fluentd.org/ fluentd] with custom configuration as an agent on the nodes. fluentd is an open source data collector, which is also part of CNCF.<br />
<br />
[https://github.com/google/cadvisor cAdvisor] is an open source container resource usage and performance analysis agent. It auto-discovers all containers on a node and collects CPU, memory, file system, and network usage statistics. It provides overall machine usage by analyzing the "root" container on the machine. It exposes a simple UI for local containers on port 4194.<br />
<br />
==Security==<br />
===Configure network policies===<br />
A ''[https://kubernetes.io/docs/concepts/services-networking/network-policies/ Network Policy]'' is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.<br />
<br />
''NetworkPolicy'' resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.<br />
<br />
* Specification of how groups of pods may communicate<br />
* Use labels to select pods and define rules<br />
* Implemented by the network plugin<br />
* Pods are non-isolated by default<br />
* Pods are isolated when a Network Policy selects them<br />
<br />
;Example NetworkPolicy<br />
Create a "default" isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods:<br />
<pre><br />
apiVersion: networking.k8s.io/v1<br />
kind: NetworkPolicy<br />
metadata:<br />
name: default-deny<br />
spec:<br />
podSelector: {}<br />
policyTypes:<br />
- Ingress<br />
</pre><br />
<br />
===TLS certificates for cluster components===<br />
Get [https://github.com/OpenVPN/easy-rsa easy-rsa].<br />
<br />
$ ./easyrsa init-pki<br />
$ MASTER_IP=10.100.1.2<br />
$ ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass<br />
<br />
$ cat rsa-request.sh<br />
<pre><br />
#!/bin/bash<br />
./easyrsa --subject-alt-name="IP:${MASTER_IP},"\<br />
"DNS:kubernetes,"\<br />
"DNS:kubernetes.default,"\<br />
"DNS:kubernetes.default.svc,"\<br />
"DNS:kubernetes.default.svc.cluster,"\<br />
"DNS:kubernetes.default.svc.cluster.local" \<br />
--days=10000 \<br />
build-server-full server nopass<br />
</pre><br />
<br />
<pre><br />
pki/<br />
├── ca.crt<br />
├── certs_by_serial<br />
│ └── F3A6F7D34BC84330E7375FA20C8441DF.pem<br />
├── index.txt<br />
├── index.txt.attr<br />
├── index.txt.old<br />
├── issued<br />
│ └── server.crt<br />
├── private<br />
│ ├── ca.key<br />
│ └── server.key<br />
├── reqs<br />
│ └── server.req<br />
├── serial<br />
└── serial.old<br />
</pre><br />
<br />
* Figure out the paths of the existing TLS certs/keys with the following command:<br />
<pre><br />
$ ps aux | grep [a]piserver | sed -n -e 's/^.*\(kube-apiserver \)/\1/p' | tr ' ' '\n'<br />
kube-apiserver<br />
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota<br />
--requestheader-extra-headers-prefix=X-Remote-Extra-<br />
--advertise-address=172.31.118.138<br />
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt<br />
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt<br />
--requestheader-username-headers=X-Remote-User<br />
--service-cluster-ip-range=10.96.0.0/12<br />
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key<br />
--secure-port=6443<br />
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key<br />
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname<br />
--requestheader-group-headers=X-Remote-Group<br />
--requestheader-allowed-names=front-proxy-client<br />
--service-account-key-file=/etc/kubernetes/pki/sa.pub<br />
--insecure-port=0<br />
--enable-bootstrap-token-auth=true<br />
--allow-privileged=true<br />
--client-ca-file=/etc/kubernetes/pki/ca.crt<br />
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt<br />
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key<br />
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt<br />
--authorization-mode=Node,RBAC<br />
--etcd-servers=http://127.0.0.1:2379<br />
</pre><br />
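<br />
Before replacing any of these, it can be useful to inspect the validity window and SANs of the certificate currently in use (paths taken from the output above):<br />
<pre><br />
$ sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | \<br />
    grep -E 'Not Before|Not After|DNS:|IP Address'<br />
</pre><br />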
<br />
===Security Contexts===<br />
A ''[https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Security Context]'' defines privilege and access control settings for a Pod or Container. Security context settings include:<br />
<br />
* Discretionary Access Control: Permission to access an object, like a file, is based on user ID (UID) and group ID (GID).<br />
* Security Enhanced Linux (SELinux): Objects are assigned security labels.<br />
* Running as privileged or unprivileged.<br />
* Linux Capabilities: Give a process some privileges, but not all the privileges of the root user.<br />
* AppArmor: Use program profiles to restrict the capabilities of individual programs.<br />
* Seccomp: Filter a process's system calls.<br />
* AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This boolean directly controls whether the <code>no_new_privs</code> flag gets set on the container process. <code>AllowPrivilegeEscalation</code> is always true when the container: 1) is run as privileged; or 2) has <code>CAP_SYS_ADMIN</code>.<br />
<br />
; Example #1<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: security-context-demo<br />
spec:<br />
  securityContext:<br />
    runAsUser: 1000<br />
    fsGroup: 2000<br />
  volumes:<br />
  - name: sec-ctx-vol<br />
    emptyDir: {}<br />
  containers:<br />
  - name: sec-ctx-demo<br />
    image: gcr.io/google-samples/node-hello:1.0<br />
    volumeMounts:<br />
    - name: sec-ctx-vol<br />
      mountPath: /data/demo<br />
    securityContext:<br />
      allowPrivilegeEscalation: false<br />
</pre><br />
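<br />
As a quick sanity check, you can exec into the running container and confirm that processes run as UID 1000 and that files created under the <code>fsGroup</code>-owned volume get GID 2000:<br />
<pre><br />
$ kubectl exec -it security-context-demo -- sh<br />
# Inside the container:<br />
/ $ id -u<br />
1000<br />
/ $ touch /data/demo/testfile && stat -c '%u:%g' /data/demo/testfile<br />
1000:2000<br />
</pre><br />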
<br />
==Taints and tolerations==<br />
[https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature Node affinity] is a property of pods that ''attracts'' them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite – they allow a node to ''repel'' a set of pods.<br />
<br />
[https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ Taints and tolerations] work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks the node such that the node should not accept any pods that do not tolerate the taints. Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.<br />
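<br />
For example (the node name, key, and value below are placeholders), taint a node and then give a pod a matching toleration in its spec:<br />
<pre><br />
# Repel pods from node1 unless they tolerate key1=value1:NoSchedule<br />
$ kubectl taint nodes node1 key1=value1:NoSchedule<br />
<br />
# Remove the taint again (note the trailing minus)<br />
$ kubectl taint nodes node1 key1=value1:NoSchedule-<br />
</pre><br />
The matching toleration goes into the pod spec:<br />
<pre><br />
spec:<br />
  tolerations:<br />
  - key: "key1"<br />
    operator: "Equal"<br />
    value: "value1"<br />
    effect: "NoSchedule"<br />
</pre><br />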
<br />
==Remove a node from a cluster==<br />
<br />
* On the k8s Master Node:<br />
k8s-master> $ kubectl drain k8s-worker-02 --ignore-daemonsets<br />
<br />
* On the k8s Worker Node (the one you wish to remove from the cluster):<br />
k8s-worker-02> $ kubeadm reset<br />
[preflight] Running pre-flight checks.<br />
[reset] Stopping the kubelet service.<br />
[reset] Unmounting mounted directories in "/var/lib/kubelet"<br />
[reset] Removing kubernetes-managed containers.<br />
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.<br />
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]<br />
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]<br />
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]<br />
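<br />
* Finally, back on the Master Node, you may also want to delete the node object itself, so it no longer shows up in <code>kubectl get nodes</code>:<br />
 k8s-master> $ kubectl delete node k8s-worker-02<br />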
<br />
==Networking==<br />
<br />
; Useful network ranges<br />
* Choose ranges for the Pods and Service CIDR blocks<br />
* Generally, any of the RFC-1918 ranges work well<br />
** 10.0.0.0/8<br />
** 172.16.0.0/12<br />
** 192.168.0.0/16<br />
<br />
Every Pod can communicate directly with every other Pod<br />
<br />
;K8s Node<br />
* A general-purpose compute instance (physical or virtual) with at least one network interface<br />
** The host OS will have a real-world IP for accessing the machine<br />
** K8s Pods are given ''virtual'' interfaces connected to an internal (overlay) network<br />
** Each node has a running network stack<br />
* Kube-proxy runs in the OS to control IPtables for:<br />
** Services<br />
** NodePorts<br />
<br />
;Networking substrate<br />
* Most k8s network stacks allocate subnets for each node<br />
** The network stack is responsible for arbitration of subnets and IPs<br />
** The network stack is also responsible for moving packets around the network<br />
* Pods have a unique, routable IP on the Pod CIDR block<br />
** The CIDR block is ''not'' accessible from outside the k8s cluster<br />
** The magic of IPtables allows the Pods to make outgoing connections<br />
* Ensure that k8s has the correct Pods and Service CIDR blocks<br />
<br />
The Pod network is not seen on the physical network (i.e., it is encapsulated; you will not be able to use <code>tcpdump</code> on it from the physical network)<br />
<br />
;Making the setup easier &mdash; CNI<br />
* Use the Container Network Interface (CNI)<br />
* Relieves k8s from having to have a specific network configuration<br />
* It is activated by supplying <code>--network-plugin=cni, --cni-conf-dir, --cni-bin-dir</code> to kubelet<br />
** Typical configuration directory: <code>/etc/cni/net.d</code><br />
** Typical bin directory: <code>/opt/cni/bin</code><br />
* Allows for multiple backends to be used: linux-bridge, macvlan, ipvlan, Open vSwitch, network stacks<br />
<br />
;Kubernetes services<br />
<br />
* Services are crucial for service discovery and distributing traffic to Pods<br />
* Services act as simple internal load balancers with VIPs<br />
** No access controls<br />
** No traffic controls<br />
* IPtables magically route to virtual IPs<br />
* Internally, Services are used as inter-Pod service discovery<br />
** Kube-DNS publishes DNS record (i.e., <code>nginx.default.svc.cluster.local</code>)<br />
* Services can be exposed in three different ways (example below):<br />
*# ClusterIP<br />
*# LoadBalancer<br />
*# NodePort<br />
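<br />
For example, an existing Deployment can be exposed on a NodePort (the <code>nginx</code> Deployment name is assumed to exist):<br />
 $ kubectl expose deployment nginx --port=80 --type=NodePort<br />
 $ kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'<br />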
<br />
; kube-proxy<br />
* Each k8s node in the cluster runs a kube-proxy<br />
* Two modes: userspace and iptables<br />
** iptables is much more performant (the userspace mode should no longer be used)<br />
* kube-proxy has the task of configuring iptables to expose each k8s service (see below)<br />
** iptables rules distribute traffic randomly across the endpoints<br />
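<br />
You can inspect the NAT rules kube-proxy programs directly on any node; the <code>KUBE-SERVICES</code> chain is the entry point:<br />
 $ sudo iptables -t nat -L KUBE-SERVICES -n | head<br />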
<br />
===Network providers===<br />
<br />
In order for a CNI plugin to be considered a "[https://kubernetes.io/docs/concepts/cluster-administration/networking/ Network Provider]", it must provide (at the very least) the following:<br />
# All containers can communicate with all other containers without NAT<br />
# All nodes can communicate with all containers (and ''vice versa'') without NAT<br />
# The IP that a container sees itself as is the same IP that others see it as<br />
<br />
==Linux namespaces==<br />
<br />
Containers are built from several Linux kernel and storage primitives:<br />
* Namespaces (e.g., pid, net, mnt, uts, ipc, user)<br />
* Control groups (cgroups)<br />
* Union File Systems<br />
<br />
==Kubernetes inbound node port requirements==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-align="center" bgcolor="#1188ee"<br />
!Protocol<br />
!Direction<br />
!Port range<br />
!Purpose<br />
!Used by<br />
!Notes<br />
|-<br />
|colspan="6" align="center" bgcolor="#eee" | '''Master node(s)'''<br />
|-<br />
| TCP || Inbound || 4149 || Default cAdvisor port used to query container metrics || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 6443<sup>*</sup> || Kubernetes API server || All<br />
|-<br />
| TCP || Inbound || 2379-2380 || etcd server client API || kube-apiserver, etcd<br />
|-<br />
| TCP || Inbound || 10250 || Kubelet API || Self, Control plane<br />
|-<br />
| TCP || Inbound || 10251 || kube-scheduler || Self<br />
|-<br />
| TCP || Inbound || 10252 || kube-controller-manager || Self<br />
|-<br />
| TCP || Inbound || 10255 || Read-only Kubelet API || ''(optional)'' || Security risk<br />
|-<br />
|colspan="6" align="center" bgcolor="#eee" | '''Worker node(s)'''<br />
|-<br />
| TCP || Inbound || 4149 || Default cAdvisor port used to query container metrics || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 10250 || Kubelet API || Self, Control plane<br />
|-<br />
| TCP || Inbound || 10255 || Read-only Kubelet API || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 30000-32767 || NodePort Services<sup>**</sup> || All<br />
|}<br />
</div><br />
<br clear="all"/><br />
<sup>**</sup> Default port range for NodePort Services.<br />
<br />
Any port numbers marked with <sup>*</sup> are overridable, so you will need to ensure any custom ports you provide are also open.<br />
<br />
Although the etcd ports are listed under the master node(s), you can also host your own etcd cluster externally or on custom ports.<br />
<br />
The pod network plugin you use (see below) may also require certain ports to be open. Since this differs with each pod network plugin, please see the documentation for the plugins about what port(s) those need.<br />
<br />
==API versions==<br />
<br />
Below is a table showing which value to use for the <code>apiVersion</code> key for a given k8s primitive (note: all values are for k8s 1.8.0, unless otherwise specified):<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-align="center" bgcolor="#1188ee"<br />
!Primitive<br />
!apiVersion<br />
|-<br />
| Pod || v1<br />
|-<br />
| Deployment || apps/v1beta2<br />
|-<br />
| Service || v1<br />
|-<br />
| Job || batch/v1<br />
|-<br />
| Ingress || extensions/v1beta1<br />
|-<br />
| CronJob || batch/v1beta1<br />
|-<br />
| ConfigMap || v1<br />
|-<br />
| DaemonSet || apps/v1<br />
|-<br />
| ReplicaSet || apps/v1beta2<br />
|-<br />
| NetworkPolicy || networking.k8s.io/v1<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
You can get a list of all of the API versions supported by your k8s install with:<br />
$ kubectl api-versions<br />
<br />
==Troubleshooting==<br />
<br />
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns<br />
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}<br />
<br />
* If your container has previously crashed, you can access the previous container’s crash log with:<br />
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}<br />
<br />
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}<br />
<br />
==Miscellaneous commands==<br />
<br />
* Simple workflow (not a best practice; use manifest files {YAML} instead):<br />
$ kubectl run nginx --image=nginx:1.10.0<br />
$ kubectl expose deployment nginx --port 80 --type LoadBalancer<br />
$ kubectl get services # <- wait until public IP is assigned<br />
$ kubectl scale deployment nginx --replicas 3<br />
<br />
* Create an Nginx deployment with three replicas without using YAML:<br />
$ kubectl run nginx --image=nginx --replicas=3<br />
<br />
* Take a node out of service for maintenance:<br />
$ kubectl cordon k8s.worker1.local<br />
$ kubectl drain k8s.worker1.local --ignore-daemonsets<br />
<br />
* Return a given node to a service after cordoning and "draining" it (e.g., after a maintenance):<br />
$ kubectl uncordon k8s.worker1.local<br />
<br />
* Get a list of nodes in a format useful for scripting:<br />
$ kubectl get nodes -o jsonpath='{.items[*].metadata.name}'<br />
#~OR~<br />
$ kubectl get nodes -o go-template --template '<nowiki>{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get nodes -o json | jq -crM '.items[].metadata.name'<br />
#~OR~ (if using an older version of `jq`)<br />
$ kubectl get nodes -o json | jq '.items[].metadata.name' | tr -d '"'<br />
<br />
* Label a list of nodes:<br />
<pre><br />
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do<br />
  kubectl label nodes ${node} instancetype=ondemand;<br />
  kubectl label nodes ${node} "example.io/node-lifecycle"=od;<br />
done<br />
</pre><br />
<br />
* Delete a bunch of Pods in "Evicted" state:<br />
$ kubectl get pod -n develop | awk '/Evicted/{print $1}' | xargs kubectl delete pod -n develop<br />
#~OR~<br />
$ kubectl get po -a --all-namespaces -o json | \<br />
jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | <br />
"kubectl delete po \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c<br />
<br />
* Get a random node:<br />
$ NODES=($(kubectl get nodes -o json | jq -crM '.items[].metadata.name'))<br />
$ NUMNODES=${#NODES[@]}<br />
$ echo ${NODES[$[ $RANDOM % $NUMNODES ]]}<br />
<br />
* Get all recent events sorted by their timestamps:<br />
$ kubectl get events --sort-by='.metadata.creationTimestamp'<br />
<br />
* Get a list of all Pods in the default namespace sorted by Node:<br />
$ kubectl get po -o wide --sort-by=.spec.nodeName<br />
<br />
* Get the cluster IP for a service named "foo":<br />
$ kubectl get svc/foo -o jsonpath='{.spec.clusterIP}'<br />
<br />
* List all Services in a cluster and their node ports:<br />
$ kubectl get --all-namespaces svc -o json |\<br />
jq -r '.items[] | [.metadata.name,([.spec.ports[].nodePort | tostring ] | join("|"))] | @csv'<br />
<br />
* Print just the Pod names of those Pods with the label <code>app=nginx</code>:<br />
$ kubectl get --no-headers=true pods -l app=nginx -o custom-columns=:metadata.name<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o go-template --template '<nowiki>{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get --no-headers=true pods -l app=nginx -o name | awk -F "/" '{print $2}'<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o jsonpath='{.items[*].metadata.name}'<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o json | jq -crM '.items [] | .metadata.name'<br />
<br />
* Get a list of all container images used by the Pods in your default namespace:<br />
 $ kubectl get pods -o go-template --template='<nowiki>{{range .items}}{{range .spec.containers}}{{.image}}{{"\n"}}{{end}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get pods -o go-template="<nowiki>{{range .items}}{{range .spec.containers}}{{.image}}|{{end}}{{end}}</nowiki>" | tr '|' '\n'<br />
<br />
* Get a list of Pods sorted by Node name:<br />
$ kubectl get po -o json | jq -r '.items | sort_by(.spec.nodeName)[] | [.spec.nodeName,.metadata.name] | @tsv'<br />
<br />
* Get status transitions of each Pod in the default namespace:<br />
$ export tpl='{range .items[*]}{"\n"}{@.metadata.name}{range @.status.conditions[*]}{"\t"}{@.type}={@.status}{end}{end}'<br />
$ kubectl get po -o jsonpath="${tpl}" && echo<br />
<br />
cheddar-cheese-d6d6587c7-4bgcz Initialized=True Ready=True PodScheduled=True<br />
echoserver-55f97d5bff-pdv65 Initialized=True Ready=True PodScheduled=True<br />
stilton-cheese-6d64cbc79-g7h4w Initialized=True Ready=True PodScheduled=True<br />
<br />
* Get a list of all Pods in status "Failed":<br />
$ kubectl get pods -o go-template='<nowiki>{{range .items}}{{if eq .status.phase "Failed"}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}</nowiki>'<br />
<br />
* Get all users in all namespaces:<br />
 $ kubectl get rolebindings --all-namespaces -o go-template \<br />
--template='<nowiki>{{range .items}}{{println}}{{.metadata.namespace}}={{range .subjects}}{{if eq .kind "User"}}{{.name}} {{end}}{{end}}{{end}}</nowiki>'<br />
<br />
* Get the memory limit assigned to a container in a given Pod:<br />
<pre><br />
$ kubectl get pod example-pod-name -n default \<br />
-o jsonpath="{.spec.containers[*].resources.limits}" <br />
</pre><br />
<br />
* Get a Bash prompt of your current context and namespace:<br />
<pre><br />
NORMAL="\[\033[00m\]"<br />
BLUE="\[\033[01;34m\]"<br />
RED="\[\e[1;31m\]"<br />
YELLOW="\[\e[1;33m\]"<br />
GREEN="\[\e[1;32m\]"<br />
PS1_WORKDIR="\w"<br />
PS1_HOSTNAME="\h"<br />
PS1_USER="\u"<br />
<br />
__kube_ps1()<br />
{<br />
  CONTEXT=$(kubectl config current-context)<br />
  NAMESPACE=$(kubectl config view -o jsonpath="{.contexts[?(@.name==\"${CONTEXT}\")].context.namespace}")<br />
  if [ -z "$NAMESPACE" ]; then<br />
    NAMESPACE="default"<br />
  fi<br />
  if [ -n "$CONTEXT" ]; then<br />
    case "$CONTEXT" in<br />
      *prod*)<br />
        echo "${RED}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
        ;;<br />
      *test*)<br />
        echo "${YELLOW}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
        ;;<br />
      *)<br />
        echo "${GREEN}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
        ;;<br />
    esac<br />
  fi<br />
}<br />
<br />
export PROMPT_COMMAND='PS1="${GREEN}${PS1_USER}@${PS1_HOSTNAME}${NORMAL}:$(__kube_ps1)${BLUE}${PS1_WORKDIR}${NORMAL}\$ "'<br />
</pre><br />
<br />
===Client configuration===<br />
<br />
* Setup autocomplete in bash; bash-completion package should be installed first:<br />
$ source <(kubectl completion bash)<br />
<br />
* View Kubernetes config:<br />
$ kubectl config view<br />
<br />
* View specific config items by JSON path:<br />
$ kubectl config view -o jsonpath='{.users[?(@.name == "k8s")].user.password}'<br />
<br />
* Set credentials for foo.kubernetes.com:<br />
$ kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword<br />
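<br />
* Related context commands (the context/namespace names below are placeholders):<br />
 $ kubectl config get-contexts<br />
 $ kubectl config use-context my-context<br />
 $ kubectl config set-context my-context --namespace=my-namespace<br />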
<br />
===Viewing / finding resources===<br />
<br />
* List all services in the namespace:<br />
$ kubectl get services<br />
<br />
* List all pods in all namespaces in wide format:<br />
$ kubectl get pods -o wide --all-namespaces<br />
<br />
* List all pods in JSON (or YAML) format:<br />
$ kubectl get pods -o json<br />
<br />
* Describe resource details (node, pod, svc):<br />
$ kubectl describe nodes my-node<br />
<br />
* List services sorted by name:<br />
$ kubectl get services --sort-by=.metadata.name<br />
<br />
* List pods sorted by restart count:<br />
$ kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'<br />
<br />
* Rolling update pods for frontend-v1:<br />
$ kubectl rolling-update frontend-v1 -f frontend-v2.json<br />
<br />
* Scale a ReplicaSet named "foo" to 3:<br />
$ kubectl scale --replicas=3 rs/foo<br />
<br />
* Scale a resource specified in "foo.yaml" to 3:<br />
$ kubectl scale --replicas=3 -f foo.yaml<br />
<br />
* Execute a command in every pod / replica:<br />
$ for i in 0 1; do kubectl exec foo-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done<br />
<br />
* Get a list of ''all'' container IDs running in ''all'' Pods in ''all'' namespaces for a given Kubernetes cluster:<br />
<pre><br />
$ kubectl get pods --all-namespaces \<br />
-o jsonpath='{range .items[*]}{"pod: "}{.metadata.name}{"\n"}{range .status.containerStatuses[*]}{"\tname: "}{.containerID}{"\n\timage: "}{.image}{"\n"}{end}'<br />
<br />
# Example output:<br />
pod: cert-manager-848f547974-8m2k6<br />
name: containerd://358415173310a528a36ca2c19cdc3319f8fd96634c09957977767333b104d387<br />
image: quay.io/jetstack/cert-manager-controller:v1.5.3<br />
</pre><br />
<br />
===Manage resources===<br />
<br />
* Get documentation for pod or service:<br />
$ kubectl explain pods,svc<br />
<br />
* Create resource(s) like pods, services or DaemonSets:<br />
$ kubectl create -f ./my-manifest.yaml<br />
<br />
* Apply a configuration to a resource:<br />
$ kubectl apply -f ./my-manifest.yaml<br />
<br />
* Start a single instance of Nginx:<br />
$ kubectl run nginx --image=nginx<br />
<br />
* Create a secret with several keys:<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
  name: mysecret<br />
type: Opaque<br />
data:<br />
  password: $(echo -n "s33msi4" | base64)<br />
  username: $(echo -n "jane" | base64)<br />
EOF<br />
</pre><br />
<br />
* Delete a resource:<br />
$ kubectl delete -f ./my-manifest.yaml<br />
<br />
===Monitoring and logging===<br />
<br />
* Deploy Heapster from Github repository:<br />
$ kubectl create -f deploy/kube-config/standalone/<br />
<br />
* Show metrics for nodes:<br />
$ kubectl top node<br />
<br />
* Show metrics for pods:<br />
$ kubectl top pod<br />
<br />
* Show metrics for a given pod and its containers:<br />
$ kubectl top pod pod_name --containers<br />
<br />
* Dump pod logs (STDOUT):<br />
$ kubectl logs pod_name<br />
<br />
* Stream pod container logs (STDOUT, multi-container case):<br />
$ kubectl logs -f pod_name -c my-container<br />
<br />
<!-- TODO: https://gist.github.com/so0k/42313dbb3b547a0f51a547bb968696ba --><br />
<br />
===Run tcpdump on containers running in Pods===<br />
<br />
* Find which node/host/IP the Pod in question is running on and also get the container ID:<br />
<pre><br />
$ kubectl describe pod busybox | grep -E "^Node:|Container ID: "<br />
Node: worker2/10.39.32.122<br />
Container ID: docker://a42cd31e62a905739b52d36b30eca5521fd250ac54280b43423027426b031a03<br />
<br />
#~OR~<br />
<br />
$ containerID=$(kubectl get po busybox -o jsonpath='{.status.containerStatuses[*].containerID}' | sed -e 's|docker://||g')<br />
$ hostIP=$(kubectl get po busybox -o jsonpath='{.status.hostIP}')<br />
</pre><br />
<br />
Log into the node/host running the Pod in question and then perform the following steps.<br />
<br />
* Get the virtual interface ID (note it will depend on which Container Network Interface you are using {e.g., veth, cali, etc.}):<br />
<pre><br />
$ docker exec a42cd31e62a905739b52d36b30eca5521fd250ac54280b43423027426b031a03 /bin/sh -c 'cat /sys/class/net/eth0/iflink'<br />
12<br />
<br />
# List all non-virtual interfaces:<br />
$ for iface in $(find /sys/class/net/ -type l ! -lname '*/devices/virtual/net/*' -printf '%f '); do echo "$iface is not virtual"; done<br />
ens192 is not virtual<br />
<br />
# Check if we are using veth or cali or something else:<br />
$ ls -1 /sys/class/net/ | awk '!/docker|lo|ens/{print substr($0,0,4);exit}'<br />
cali<br />
<br />
$ for i in /sys/class/net/veth*/ifindex; do grep -l 12 $i; done<br />
#~OR~<br />
$ for i in /sys/class/net/cali*/ifindex; do grep -l 12 $i; done<br />
/sys/class/net/cali12d4a061371/ifindex<br />
#~OR~<br />
$ echo $(find /sys/class/net/ -type l -lname '*/devices/virtual/net/*' -exec grep -l 12 {}/ifindex \;) | awk -F'/' '{print $5}'<br />
cali12d4a061371<br />
#~OR~<br />
$ ip link | grep ^12<br />
12: cali12d4a061371@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default<br />
#~OR~<br />
$ ip link | awk '/^12/{print $2}' | awk -F'@' '{print $1}'<br />
cali12d4a061371<br />
</pre><br />
<br />
* Now run [[tcpdump]] on this virtual interface (note: make sure you are running tcpdump on the ''same'' host as the Pod is running on):<br />
$ sudo tcpdump -i cali12d4a061371<br />
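<br />
* To capture to a file for offline analysis (e.g., in Wireshark), restrict the capture to a port and write a pcap:<br />
 $ sudo tcpdump -i cali12d4a061371 -nn port 80 -w /tmp/pod-traffic.pcap<br />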
<br />
; Self-signed certificates<br />
<br />
If you are using the latest version of <code>kubectl</code> and are running it against a k8s cluster built with a self-signed cert, you can get around any "x509" errors with:<br />
$ export GODEBUG=x509ignoreCN=0<br />
<br />
===API resources===<br />
<br />
* Get a list of all the resource types and their latest supported version:<br />
<pre><br />
$ time for kind in $(kubectl api-resources | tail -n +2 | awk '{print $1}'); do<br />
  kubectl explain ${kind};<br />
done | grep -E "^KIND:|^VERSION:"<br />
<br />
KIND: Binding<br />
VERSION: v1<br />
KIND: ComponentStatus<br />
VERSION: v1<br />
KIND: ConfigMap<br />
VERSION: v1<br />
...<br />
<br />
real 1m20.014s<br />
user 0m52.732s<br />
sys 0m17.751s<br />
</pre><br />
<br />
* Note: if you just want a version for a single/given kind:<br />
<pre><br />
$ kubectl explain deploy | head -2<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
</pre><br />
<br />
===kubectl-neat===<br />
<br />
: See: https://github.com/itaysk/kubectl-neat<br />
: See: [[jq]]<br />
<br />
* To easily copy a certificate secret from one namespace to another namespace run:<br />
<pre><br />
$ SOURCE_NAMESPACE=<update-me><br />
$ DESTINATION_NAMESPACE=<update-me><br />
$ kubectl -n ${SOURCE_NAMESPACE} get secret kafka-client-credentials -o json |\<br />
kubectl neat |\<br />
jq 'del(.metadata["namespace"])' |\<br />
kubectl apply -n ${DESTINATION_NAMESPACE} -f -<br />
</pre><br />
<br />
===Get CPU/memory for each node===<br />
<br />
<pre><br />
for node in $(kubectl get nodes -o=jsonpath='{.items[*].metadata.name}'); do<br />
echo "NODE: ${node}"; kubectl describe node ${node} | grep -E '^ cpu |^ memory ';<br />
done<br />
</pre><br />
<br />
===Get vCPU capacity===<br />
<br />
<pre><br />
$ kubectl get nodes -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\n"}{end}'<br />
</pre><br />
<br />
==Miscellaneous examples==<br />
<br />
* Create a Namespace:<br />
<pre><br />
kind: Namespace<br />
apiVersion: v1<br />
metadata:<br />
  name: my-namespace<br />
</pre><br />
<br />
; Testing the load balancing capabilities of a Service<br />
<br />
* Create a Deployment with two replicas of Nginx (i.e., 2 x Pods with identical containers, configuration, etc.):<br />
<pre><br />
$ cat << EOF >nginx-deploy.yml<br />
kind: Deployment<br />
apiVersion: apps/v1<br />
metadata:<br />
  name: nginx-deploy<br />
spec:<br />
  replicas: 2<br />
  strategy:<br />
    rollingUpdate:<br />
      maxSurge: 1<br />
      maxUnavailable: 0<br />
    type: RollingUpdate<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f nginx-deploy.yml<br />
 $ kubectl get deploy<br />
 NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE<br />
 nginx-deploy   2         2         2            2           1h<br />
 $ kubectl get po<br />
 NAME                           READY     STATUS    RESTARTS   AGE<br />
 nginx-deploy-8d68fb6cc-bspt8   1/1       Running   1          1h<br />
 nginx-deploy-8d68fb6cc-qdvhg   1/1       Running   1          1h<br />
<br />
* Create a Service:<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: nginx-svc<br />
spec:<br />
  ports:<br />
  - port: 8080<br />
    targetPort: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: nginx<br />
EOF<br />
<br />
$ kubectl get svc/nginx-svc<br />
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE<br />
nginx-svc   ClusterIP   10.101.133.100   <none>        8080/TCP   1h<br />
</pre><br />
<br />
* Overwrite the default index.html file (note: This is ''not'' persistent. The original default index.html file will be restored if the Pod fails and the Deployment brings up a new Pod and/or if you modify your Deployment {e.g., upgrade Nginx}. This is just for demonstration purposes):<br />
 $ kubectl exec -it nginx-deploy-8d68fb6cc-bspt8 -- sh -c 'echo "pod-01" > /usr/share/nginx/html/index.html'<br />
 $ kubectl exec -it nginx-deploy-8d68fb6cc-qdvhg -- sh -c 'echo "pod-02" > /usr/share/nginx/html/index.html'<br />
<br />
* Get the HTTP status code and server value from the header of a request to the Service endpoint:<br />
$ curl -Is 10.101.133.100:8080 | grep -E '^HTTP|Server'<br />
HTTP/1.1 200 OK<br />
Server: nginx/1.7.9 # <- This is the version of Nginx we defined in the Deployment above<br />
<br />
* Perform a GET request on the Service endpoint (ClusterIP+Port):<br />
<pre><br />
$ for i in $(seq 1 10); do curl -s 10.101.133.100:8080; done<br />
pod-02<br />
pod-01<br />
pod-02<br />
pod-02<br />
pod-02<br />
pod-01<br />
pod-02<br />
pod-02<br />
pod-02<br />
pod-02<br />
</pre><br />
Sometimes <code>pod-01</code> responded; sometimes <code>pod-02</code> responded.<br />
<br />
* Perform a GET on the Service endpoint 10,000 times and sum up which Pod responded for each request:<br />
<pre><br />
$ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c<br />
5018 pod-01 # <- number of times pod-01 responded to the request<br />
4982 pod-02 # <- number of times pod-02 responded to the request<br />
<br />
real 1m0.639s<br />
user 0m29.808s<br />
sys 0m11.692s<br />
</pre><br />
<br />
$ awk 'BEGIN{print 5018/(5018+4982);}'<br />
0.5018<br />
$ awk 'BEGIN{print 4982/(5018+4982);}'<br />
0.4982<br />
<br />
So, our Service is "load balancing" our two Nginx Pods in a roughly 50/50 fashion.<br />
<br />
In order to double-check that the Service is randomly selecting a Pod to serve the GET request, let's scale our Deployment from 2 to 3 replicas:<br />
$ kubectl scale deploy/nginx-deploy --replicas=3<br />
<br />
<pre><br />
$ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c<br />
3392 pod-01<br />
3335 pod-02<br />
3273 pod-03<br />
<br />
real 0m59.537s<br />
user 0m25.932s<br />
sys 0m9.656s<br />
</pre><br />
$ awk 'BEGIN{print 3392/(3392+3335+3273);}'<br />
0.3392<br />
$ awk 'BEGIN{print 3335/(3392+3335+3273);}'<br />
0.3335<br />
$ awk 'BEGIN{print 3273/(3392+3335+3273);}'<br />
0.3273<br />
<br />
Sure enough. Each of the 3 Pods is serving the GET request roughly 33% of the time.<br />
<br />
; Query selections<br />
<br />
* Create a "query selection" file:<br />
<pre><br />
$ cat << EOF >cluster-nodes-health.txt<br />
Name Kernel InternalIP MemoryPressure DiskPressure PIDPressure Ready<br />
.metadata.name .status.nodeInfo.kernelVersion .status.addresses[0].address .status.conditions[0].status .status.conditions[1].status .status.conditions[2].status .status.conditions[3].status<br />
EOF<br />
</pre><br />
<br />
* Use the above "query selection" file:<br />
<pre><br />
$ kubectl get nodes -o custom-columns-file=cluster-nodes-health.txt<br />
Name           Kernel           InternalIP     MemoryPressure   DiskPressure   PIDPressure   Ready<br />
10.10.10.152   5.4.0-1084-aws   10.10.10.152   False            False          False         False<br />
10.10.11.12    5.4.0-1092-aws   10.10.11.12    False            False          False         False<br />
10.10.12.22    5.4.0-1039-aws   10.10.12.22    False            False          False         False<br />
</pre><br />
<br />
==Example YAML files==<br />
<br />
* Basic Pod using busybox:<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: busybox<br />
  namespace: default<br />
spec:<br />
  containers:<br />
  - name: busybox<br />
    image: busybox<br />
    command:<br />
    - sleep<br />
    - "3600"<br />
    imagePullPolicy: IfNotPresent<br />
  restartPolicy: Always<br />
</pre><br />
<br />
* Basic Pod using busybox, which also prints out environment variables (including the ones defined in the YAML):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: env-dump<br />
spec:<br />
  containers:<br />
  - name: busybox<br />
    image: busybox<br />
    command:<br />
    - env<br />
    env:<br />
    - name: USERNAME<br />
      value: "Christoph"<br />
    - name: PASSWORD<br />
      value: "mypassword"<br />
</pre><br />
$ kubectl logs env-dump<br />
...<br />
PASSWORD=mypassword<br />
USERNAME=Christoph<br />
...<br />
<br />
* Basic Pod using alpine:<br />
<pre><br />
kind: Pod<br />
apiVersion: v1<br />
metadata:<br />
  name: alpine<br />
  namespace: default<br />
spec:<br />
  containers:<br />
  - name: alpine<br />
    image: alpine<br />
    command:<br />
    - /bin/sh<br />
    - "-c"<br />
    - "sleep 60m"<br />
    imagePullPolicy: IfNotPresent<br />
  restartPolicy: Always<br />
</pre><br />
<br />
* Basic Pod running Nginx:<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx-pod<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx<br />
  restartPolicy: Always<br />
</pre><br />
<br />
* Create a Job that calculates pi up to 2000 decimal places:<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
  name: pi<br />
spec:<br />
  template:<br />
    spec:<br />
      containers:<br />
      - name: pi<br />
        image: perl<br />
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]<br />
      restartPolicy: Never<br />
  backoffLimit: 4<br />
</pre><br />
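<br />
A quick way to run the Job and read back the result (assuming the manifest above was saved as <code>pi-job.yml</code>):<br />
<pre><br />
$ kubectl create -f pi-job.yml<br />
$ kubectl get jobs                    # wait for the Job to complete<br />
$ kubectl logs job/pi | head -c 80    # first characters of the computed value of pi<br />
</pre><br />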
<br />
* Create a Deployment with two replicas of Nginx running:<br />
<pre><br />
apiVersion: apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment<br />
spec:<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  replicas: 2<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.9.1<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />
<br />
* Create a basic Persistent Volume, which uses NFS:<br />
<pre><br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
  name: mypv<br />
spec:<br />
  capacity:<br />
    storage: 1Gi<br />
  volumeMode: Filesystem<br />
  accessModes:<br />
  - ReadWriteMany<br />
  persistentVolumeReclaimPolicy: Recycle<br />
  nfs:<br />
    path: /var/nfs/general<br />
    server: 172.31.119.58<br />
    readOnly: false<br />
</pre><br />
<br />
* Create a Persistent Volume Claim against the above PV:<br />
<pre><br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
  name: nfs-pvc<br />
spec:<br />
  accessModes:<br />
  - ReadWriteMany<br />
  resources:<br />
    requests:<br />
      storage: 1Gi<br />
</pre><br />
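<br />
* A minimal sketch of a Pod that consumes the above claim (the pod name and mount path are arbitrary):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nfs-test-pod<br />
spec:<br />
  containers:<br />
  - name: app<br />
    image: busybox<br />
    command: ["sleep", "3600"]<br />
    volumeMounts:<br />
    - name: nfs-vol<br />
      mountPath: /mnt/nfs<br />
  volumes:<br />
  - name: nfs-vol<br />
    persistentVolumeClaim:<br />
      claimName: nfs-pvc<br />
</pre><br />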
<br />
* Create a Pod using a custom scheduler (i.e., not the default one):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: my-custom-scheduler<br />
  annotations:<br />
    scheduledBy: custom-scheduler<br />
spec:<br />
  schedulerName: custom-scheduler<br />
  containers:<br />
  - name: pod-container<br />
    image: k8s.gcr.io/pause:2.0<br />
</pre><br />
<br />
==Install k8s cluster manually in the Cloud==<br />
<br />
''Note: For this example, I will be using AWS and I will assume you already have 3 x EC2 instances running CentOS 7 in your AWS account. I will install Kubernetes 1.10.x.''<br />
<br />
* Disable services not supported (yet) by Kubernetes:<br />
$ sudo setenforce 0 # NOTE: Not persistent!<br />
#~OR~ Make persistent:<br />
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config<br />
<br />
$ sudo systemctl stop firewalld<br />
$ sudo systemctl mask firewalld<br />
$ sudo yum install -y iptables-services<br />
<br />
* Disable swap:<br />
$ sudo swapoff -a # NOTE: Not persistent!<br />
#~OR~ Make persistent:<br />
$ sudo vi /etc/fstab # comment out swap line<br />
$ sudo mount -a<br />
<br />
* Make sure routed traffic does not bypass iptables:<br />
 $ cat << EOF | sudo tee /etc/sysctl.d/k8s.conf<br />
net.bridge.bridge-nf-call-ip6tables = 1<br />
net.bridge.bridge-nf-call-iptables = 1<br />
EOF<br />
$ sudo sysctl --system<br />
<br />
* Install <code>kubelet</code>, <code>kubeadm</code>, and <code>kubectl</code> on '''''all''''' nodes in your cluster (both Master and Worker nodes):<br />
<pre><br />
$ cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo<br />
[kubernetes]<br />
name=Kubernetes<br />
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch<br />
enabled=1<br />
gpgcheck=1<br />
repo_gpgcheck=1<br />
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg<br />
EOF<br />
</pre><br />
<br />
$ sudo yum install -y kubelet kubeadm kubectl<br />
$ sudo systemctl enable kubelet && sudo systemctl start kubelet<br />
<br />
* Configure cgroup driver used by kubelet on '''''all''''' nodes (both Master and Worker nodes):<br />
<br />
Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. Verify that your Docker cgroup driver matches the kubelet config:<br />
<br />
$ docker info | grep -i cgroup<br />
$ grep -i cgroup /etc/systemd/system/kubelet.service.d/10-kubeadm.conf<br />
<br />
If the Docker cgroup driver and the kubelet config do not match, change the kubelet config to match the Docker cgroup driver. The flag you need to change is <code>--cgroup-driver</code>. If it is already set, you can update like so:<br />
<br />
$ sudo sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf<br />
<br />
Otherwise, you will need to open the systemd file and add the flag to an existing environment line.<br />
<br />
Then restart kubelet:<br />
<br />
$ sudo systemctl daemon-reload<br />
$ sudo systemctl restart kubelet<br />
<br />
* Run <code>kubeadm</code> on Master node:<br />
<br />
K8s requires a pod network to function. We are going to use Flannel, so we need to pass in a flag to the deployment script so k8s knows how to configure itself:<br />
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16<br />
<br />
Note: This command might take a fair amount of time to complete.<br />
<br />
Once it has completed, make note of the "<code>join</code>" command output by <code>kubeadm init</code> that looks something like the following ('''DO NOT RUN THE FOLLOWING COMMAND YET!'''):<br />
# kubeadm join --token --discovery-token-ca-cert-hash sha256:<br />
<br />
You will run that command on the other non-master nodes (aka the "Worker Nodes") to allow them to join the cluster. However, '''do not''' run that command on the worker nodes until you have completed all of the following steps.<br />
<br />
* Create a directory:<br />
$ mkdir -p $HOME/.kube<br />
<br />
* Copy the configuration files to a location usable by the local user:<br />
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config <br />
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config<br />
<br />
* In order for your pods to communicate with one another, you will need to install pod networking. We are going to use Flannel for our Container Network Interface (CNI) because it is easy to install and reliable. <br />
$ kubectl apply -f <nowiki>https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</nowiki><br />
$ kubectl apply -f <nowiki>https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml</nowiki><br />
<br />
* Make sure everything is coming up properly:<br />
$ kubectl get pods --all-namespaces --watch<br />
Once the <code>kube-dns-xxxx</code> containers are up (i.e., in Status "Running"), your cluster is ready to accept worker nodes.<br />
<br />
* On each of the Worker nodes, run the <code>sudo kubeadm join ...</code> command that <code>kubeadm init</code> created for you (see above).<br />
<br />
* On the Master Node, run the following command:<br />
$ kubectl get nodes --watch<br />
Once the Status of the Worker Nodes returns "Ready", your k8s cluster is ready to use.<br />
<br />
* Example output of successful Kubernetes cluster:<br />
<pre><br />
$ kubectl get nodes<br />
NAME STATUS ROLES AGE VERSION<br />
k8s-01 Ready master 13m v1.10.1<br />
k8s-02 Ready <none> 12m v1.10.1<br />
k8s-03 Ready <none> 12m v1.10.1<br />
</pre><br />
<br />
That's it! You are now ready to start deploying Pods, Deployments, Services, etc. in your Kubernetes cluster!<br />
<br />
==Bash completion==<br />
''Note: The following only works on newer versions. I have tested that this works on version 1.9.1.''<br />
<br />
Add the following line to your <code>~/.bashrc</code> file:<br />
source <(kubectl completion bash)<br />
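<br />
If you alias <code>kubectl</code> to <code>k</code>, you can extend the completion to the alias as well:<br />
 alias k=kubectl<br />
 complete -o default -F __start_kubectl k<br />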
<br />
==Kubectl plugins==<br />
<br />
SEE: [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ Extend kubectl with plugins] for details.<br />
<br />
: FEATURE STATE: Kubernetes v1.11 (alpha)<br />
: FEATURE STATE: Kubernetes v1.15 (stable)<br />
<br />
This section shows you how to install and write extensions for <code>kubectl</code>. Usually called "plugins" or "binary extensions", this feature allows you to extend the default set of commands available in <code>kubectl</code> by adding new sub-commands to perform new tasks and extend the set of features available in the main distribution of <code>kubectl</code>.<br />
<br />
Get code [https://github.com/kubernetes/kubernetes/tree/master/pkg/kubectl/plugins/examples from here].<br />
<br />
<pre><br />
.kube/<br />
└── plugins<br />
    └── aging<br />
        ├── aging.rb<br />
        └── plugin.yaml<br />
</pre><br />
<br />
$ chmod 0700 .kube/plugins/aging/aging.rb<br />
<br />
* See options:<br />
<pre><br />
$ kubectl plugin aging --help<br />
Aging shows pods from the current namespace by age.<br />
<br />
Usage:<br />
kubectl plugin aging [flags] [options]<br />
</pre><br />
<br />
* Usage:<br />
<pre><br />
$ kubectl plugin aging<br />
The Magnificent Aging Plugin.<br />
<br />
nginx-deployment-67594d6bf6-5t8m9: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
<br />
nginx-deployment-67594d6bf6-6kw9j: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
<br />
nginx-deployment-67594d6bf6-d8dwt: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
</pre><br />
<br />
==Local Kubernetes==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="6" bgcolor="#EFEFEF" | '''Local Kubernetes Comparisons'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Feature<br />
!kind<br />
!k3d<br />
!minikube<br />
!Docker Desktop<br />
!Rancher Desktop<br />
|- <br />
| Free || yes || yes || yes || Personal / small business* || yes<br />
|--bgcolor="#eeeeee"<br />
| Install || easy || easy || easy || easy || medium (you may encounter odd scenarios)<br />
|-<br />
| Ease of Use || medium || medium || medium || easy || easy<br />
|--bgcolor="#eeeeee"<br />
| Stability || stable || stable || stable || stable || stable<br />
|-<br />
| Cross-platform || yes || yes || yes || yes || yes<br />
|--bgcolor="#eeeeee"<br />
| CI Usage || yes || yes || yes || no || no<br />
|-<br />
| Multiple clusters || yes || yes || yes || no || no<br />
|--bgcolor="#eeeeee"<br />
| Podman support || yes || yes || yes || no || no<br />
|-<br />
| Host volumes mount support || yes || yes || yes (with some performance limitations) || yes || yes (only pre-defined paths)<br />
|--bgcolor="#eeeeee"<br />
| Kubernetes service port-forwarding/mapping || yes || yes || yes || yes || yes<br />
|-<br />
| Pull-through Docker mirror/proxy || yes || yes || no || yes (can reference locally available images) || yes (can reference locally available images)<br />
|--bgcolor="#eeeeee"<br />
| Custom CNI || yes (ex: calico) || yes (ex: flannel) || yes (ex: calico) || no || no<br />
|-<br />
| Features Gates || yes || yes || yes || yes (but not natively; requires hacky setup) || yes (but not natively; requires hacky setup)<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
[https://bmiguel-teixeira.medium.com/local-kubernetes-the-one-above-all-3aedbeb5f3f6 Source]<br />
<br />
==See also==<br />
* [[Kubernetes/the-hard-way|Kubernetes the Hard Way]]<br />
* [[Kubernetes/GKE|Google Kubernetes Engine]] (GKE)<br />
* [[Kubernetes/AWS|Kubernetes on AWS]] (EKS)<br />
* [[Kubeless]]<br />
* [[Helm]]<br />
<br />
==External links==<br />
* [http://kubernetes.io/ Official website]<br />
* [https://github.com/kubernetes/kubernetes Kubernetes code] &mdash; via GitHub<br />
===Playgrounds===<br />
* [https://www.katacoda.com/courses/kubernetes/playground Kubernetes Playground]<br />
* [https://labs.play-with-k8s.com Play with k8s]<br />
===Tools===<br />
* [https://github.com/kubernetes/minikube minikube] &mdash; Run Kubernetes locally<br />
* [https://kind.sigs.k8s.io/ kind] &mdash; '''K'''ubernetes '''IN''' '''D'''ocker (local clusters for testing Kubernetes)<br />
* [https://github.com/kubernetes/kops kops] &mdash; Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management<br />
* [https://kubernetes-incubator.github.io/kube-aws kube-aws] &mdash; a command-line tool to create/update/destroy Kubernetes clusters on AWS<br />
* [https://github.com/kubernetes-incubator/kubespray kubespray] &mdash; Deploy a production ready kubernetes cluster<br />
* [https://rook.io/ Rook.io] &mdash; File, Block, and Object Storage Services for your Cloud-Native Environments<br />
===Resources===<br />
* [https://kubernetes.io/docs/getting-started-guides/scratch/ Creating a Custom Cluster from Scratch]<br />
* [https://github.com/kelseyhightower/kubernetes-the-hard-way Kubernetes The Hard Way]<br />
* [http://k8sport.org/ K8sPort]<br />
* [https://k8s.af/ Kubernetes Failure Stories]<br />
<br />
===Training===<br />
* [https://kubernetes.io/training/ Official Kubernetes Training Website]<br />
** Kubernetes and Cloud Native Associate (KCNA)<br />
** Certified Kubernetes Application Developer (CKAD)<br />
** Certified Kubernetes Administrator (CKA)<br />
** Certified Kubernetes Security Specialist (CKS) [note: Candidates for CKS must hold a current Certified Kubernetes Administrator (CKA) certification to demonstrate they possess sufficient Kubernetes expertise before sitting for the CKS.]<br />
* [https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals Kubernetes Fundamentals] (LFS258)<br />
** ''[https://www.cncf.io/certification/expert/ Certified Kubernetes Administrator]'' (CKA) certification.<br />
* [https://killer.sh/ CKS / CKA / CKAD Simulator]<br />
* [https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hacked/ 11 Ways (Not) to Get Hacked]<br />
<br />
===Blog posts===<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727 Understanding kubernetes networking: pods] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-services-f0cb48e4cc82 Understanding kubernetes networking: services] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-ingress-1bc341c84078 Understanding kubernetes networking: ingress] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-68d061f7ab5b Kubernetes ConfigMaps and Secrets - Part 1] &mdash; by Sandeep Dinesh, 2017-07-13<br />
* [https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-part-2-3dc37111f0dc Kubernetes ConfigMaps and Secrets - Part 2] &mdash; by Sandeep Dinesh, 2017-08-08<br />
* [https://abhishek-tiwari.com/10-open-source-tools-for-highly-effective-kubernetes-sre-and-ops-teams/ 10 open-source Kubernetes tools for highly effective SRE and Ops Teams]<br />
* [https://www.ianlewis.org/en/tag/kubernetes Series of blog posts about k8s] &mdash; by Ian Lewis<br />
* [https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0 Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?] &mdash; by Sandeep Dinesh, 2018-03-11<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:DevOps]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Kubernetes&diff=8284Kubernetes2024-01-12T17:13:10Z<p>Christoph: /* Release history */</p>
<hr />
<div>'''Kubernetes''' (also known by its numeronym '''k8s''') is an open source container cluster manager. Kubernetes' primary goal is to provide a platform for automating deployment, scaling, and operations of application containers across a cluster of hosts. Kubernetes was released by Google in July 2015.<br />
<br />
* Get the latest stable release of k8s with:<br />
$ curl -sSL <nowiki>https://dl.k8s.io/release/stable.txt</nowiki><br />
<br />
==Release history==<br />
<br />
'''NOTE:''' I have been using Kubernetes since release 1.0 back in September 2015.<br />
<br />
NOTE: There is no such thing as Kubernetes Long-Term-Support (LTS). There is a new "minor" release ''roughly'' every 3 months (note: changed to ''roughly'' every 4 months in 2020).<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="3" bgcolor="#EFEFEF" | '''Kubernetes release history'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Release<br />
!Date<br />
!Cadence (days)<br />
|- align="left"<br />
|1.0 || 2015-07-10 ||align="right"|<br />
|--bgcolor="#eeeeee"<br />
|1.1 || 2015-11-09 ||align="right"| 122<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.2.md 1.2] || 2016-03-16 ||align="right"| 128<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.3.md 1.3] || 2016-07-01 ||align="right"| 107<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.4.md 1.4] || 2016-09-26 ||align="right"| 87<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.5.md 1.5] || 2016-12-12 ||align="right"| 77<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.6.md 1.6] || 2017-03-28 ||align="right"| 106<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.7.md 1.7] || 2017-06-30 ||align="right"| 94<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.8.md 1.8] || 2017-09-28 ||align="right"| 90<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.9.md 1.9] || 2017-12-15 ||align="right"| 78<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.10.md 1.10] || 2018-03-26 ||align="right"| 101<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.11.md 1.11] || 2018-06-27 ||align="right"| 93<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.12.md 1.12] || 2018-09-27 ||align="right"| 92<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.13.md 1.13] || 2018-12-03 ||align="right"| 67<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.14.md 1.14] || 2019-03-25 ||align="right"| 112<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md 1.15] || 2019-06-17 ||align="right"| 84<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md 1.16] || 2019-09-18 ||align="right"| 93<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md 1.17] || 2019-12-09 ||align="right"| 82<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md 1.18] || 2020-03-25 ||align="right"| 107<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md 1.19] || 2020-08-26 ||align="right"| 154<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md 1.20] || 2020-12-08 ||align="right"| 104<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md 1.21] || 2021-04-08 ||align="right"| 121<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md 1.22] || 2021-08-04 ||align="right"| 118<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md 1.23] || 2021-12-07 ||align="right"| 125<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md 1.24] || 2022-05-03 ||align="right"| 147<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md 1.25] || 2022-08-23 ||align="right"| 112<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md 1.26] || 2023-01-18 ||align="right"| 148<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md 1.27] || 2023-04-11 ||align="right"| 83<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md 1.28] || 2023-08-15 ||align="right"| 126<br />
|}<br />
</div><br />
<br clear="all"/><br />
See: [https://gravitational.com/blog/kubernetes-release-cycle The full-time job of keeping up with Kubernetes]<br />
<br />
==Providers and installers==<br />
<br />
* Vanilla Kubernetes<br />
* AWS:<br />
** Managed: EKS<br />
** Kops<br />
** Kube-AWS<br />
** Kismatic<br />
** Kubicorn<br />
** Stack Point Cloud<br />
* Google:<br />
** Managed: GKE<br />
** [[Kubernetes/the-hard-way|Kubernetes the Hard Way]]<br />
** Stack Point Cloud<br />
** Typhoon<br />
* Azure AKS<br />
* Ubuntu UKS<br />
* VMware PKS<br />
* [[Rancher|Rancher RKE]]<br />
* CoreOS Tectonic<br />
<br />
==Design overview==<br />
Kubernetes is built through the definition of a set of components (building blocks or "primitives") which, when used collectively, provide a method for the deployment, maintenance, and scalability of container-based application clusters.<br />
<br />
These "primitives" are designed to be ''loosely coupled'' (i.e., where little to no knowledge of the other component definitions is needed to use) as well as easily extensible through an API. Both the internal components of Kubernetes as well as the extensions and containers make use of this API.<br />
<br />
==Components==<br />
The building blocks of Kubernetes are the following (note that these are also referred to as Kubernetes "Objects" or "API Primitives"):<br />
<br />
;Cluster : A cluster is a set of machines (physical or virtual) on which your applications are managed and run. All machines are managed as a cluster (or set of clusters, depending on the topology used).<br />
;Nodes (minions) : You can think of these as "container clients". These are the individual hosts (physical or virtual) that Docker is installed on and hosts the various containers within your managed cluster.<br />
: Each node will run etcd (a key-pair management and communication service, used by Kubernetes for exchanging messages and reporting on cluster status) as well as the Kubernetes Proxy.<br />
;Pods : A pod consists of one or more containers. Those containers are guaranteed (by the cluster controller) to be located on the same host machine (aka "co-located") in order to facilitate sharing of resources. For example, it makes sense to keep database processes and data containers as close together as possible. In fact, they really should be in the same pod.<br />
: Pods "work together", as in a multi-tiered application configuration. Each set of pods that define and implement a service (e.g., MySQL or Apache) are defined by the label selector (see below).<br />
: Pods are assigned unique IPs within each cluster. These allow an application to use ports without having to worry about conflicting port utilization.<br />
: Pods can contain definitions of disk volumes or shares, and then provide access from those to all the members (containers) within the pod.<br />
: Finally, pod management is done through the API or delegated to a controller.<br />
;Labels : Clients can attach key-value pairs to any object in the system (e.g., Pods or Nodes). These become the labels that identify the objects in configuration and management. The key-value pairs can be used to filter, organize, and perform mass operations on a set of resources.<br />
;Selectors : Label Selectors represent queries that are made against those labels. They resolve to the corresponding matching objects. A Selector expression matches labels to filter certain resources. For example, you may want to search for all pods that belong to a certain service, or find all containers that have a specific tier Label value, such as "database". Labels and Selectors are inherently two sides of the same coin. You can use Labels to classify resources and use Selectors to find them and use them for certain actions.<br />
: These two items are the primary way that grouping is done in Kubernetes, and they determine which components a given operation applies to.<br />
;Controllers : These are used in the management of your cluster. Controllers are the mechanism by which your desired configuration state is enforced.<br />
: Controllers manage a set of pods and, depending on the desired configuration state, may engage other controllers to handle the replication and scaling (via a Replication Controller) of a given number of containers and pods across the cluster. A controller is also responsible for replacing any container in a pod that fails (based on the desired state of the cluster).<br />
: Replication Controllers (RC) are a subset of Controllers and are an abstraction used to manage pod lifecycles. One of the key uses of RCs is to maintain a certain number of running Pods (e.g., for scaling or ensuring that at least one Pod is running at all times, etc.). It is considered a "best practice" to use RCs to define Pod lifecycles, rather than creating Pods directly.<br />
: Other controllers that can be engaged include a ''DaemonSet Controller'' (enforces a 1-to-1 ratio of pods to Worker Nodes) and a ''Job Controller'' (that runs pods to "completion", such as in batch jobs).<br />
: The set of pods any given controller manages is determined by the label selectors that are part of its definition.<br />
;Replica Sets: These define how many replicas of each Pod will be running. They also monitor and ensure the required number of Pods are running, replacing Pods that die. Replica Sets can act as replacements for Replication Controllers.<br />
;Services : A Service is an abstraction on top of Pods, which provides a single IP address and DNS name by which the Pods can be accessed. This load balancing configuration is much easier to manage and helps scale Pods seamlessly.<br />
: Kubernetes can then provide service discovery and handle routing with a stable, static virtual IP for the service, as well as load balancing (round-robin based) connections to that service among the pods that match the label selector indicated.<br />
: By default, a service is only exposed inside the cluster, but it can also be exposed outside the cluster, as needed.<br />
;Volumes : A Volume is a directory with data, which is accessible to a container. A volume is co-terminous with the Pod that encloses it.<br />
;Name : A name by which a resource is identified.<br />
;Namespace : A Namespace provides additional qualification to a resource name. This is especially helpful when multiple teams/projects are using the same cluster and there is a potential for name collision. You can think of a Namespace as a virtual wall between multiple clusters.<br />
;Annotations : An Annotation is like a Label, but with a much larger data capacity. Typically, this data is not readable by humans and is not easy to filter through. Annotations are useful only for storing data that is not searched on, but is required by the resource (e.g., storing keys, etc.).<br />
;Control Plane : The collection of services running on the Master Node (kube-apiserver, kube-scheduler, kube-controller-manager, and etcd) that manage the state of the cluster.<br />
;API : The REST API exposed by the kube-apiserver, through which all of the above objects are created, read, updated, and deleted.<br />
<br />
===Pods===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/ Pod]'' is the smallest and simplest Kubernetes object. It is the unit of deployment in Kubernetes, which represents a single instance of the application. A Pod is a logical collection of one or more containers, which:<br />
<br />
* are scheduled together on the same host;<br />
* share the same network namespace; and<br />
* mount the same external storage (Volumes).<br />
<br />
Pods are ephemeral in nature, and they do not have the capability to self-heal. That is why we use them with controllers, which can handle a Pod's replication, fault tolerance, self-healing, etc. Examples of controllers are ''Deployments'', ''ReplicaSets'', ''ReplicationControllers'', etc. We attach the Pod's specification to other objects using Pod Templates (see below).<br />
<br />
===Labels===<br />
Labels are key-value pairs that can be attached to any Kubernetes object (e.g. ''Pods''). Labels are used to organize and select a subset of objects, based on the requirements in place. Many objects can have the same label(s). Labels do not provide uniqueness to objects. <br />
<br />
===Label Selectors===<br />
With Label Selectors, we can select a subset of objects. Kubernetes supports two types of Selectors:<br />
<br />
;Equality-Based Selectors : Equality-Based Selectors allow filtering of objects based on label keys and values. With this type of Selector, we can use the <code>=</code>, <code>==</code>, or <code>!=</code> operators. For example, with <code>env==dev</code>, we are selecting the objects where the "<code>env</code>" label is set to "<code>dev</code>".<br />
;Set-Based Selectors : Set-Based Selectors allow filtering of objects based on a set of values. With this type of Selector, we can use the <code>in</code>, <code>notin</code>, and <code>exists</code> operators. For example, with <code>env in (dev,qa)</code>, we are selecting objects where the "<code>env</code>" label is set to "<code>dev</code>" or "<code>qa</code>". (Both styles are sketched below.)<br />
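<br />
As a quick illustration (the label key and values here are hypothetical), both styles can be exercised directly from <code>kubectl</code>:<br />
<pre><br />
# Equality-based: pods whose "env" label equals "dev"<br />
$ kubectl get pods -l env=dev<br />
<br />
# Equality-based: pods whose "env" label is anything but "dev"<br />
$ kubectl get pods -l env!=dev<br />
<br />
# Set-based: pods whose "env" label is either "dev" or "qa"<br />
$ kubectl get pods -l 'env in (dev,qa)'<br />
<br />
# Set-based: pods that carry an "env" label at all, whatever its value<br />
$ kubectl get pods -l 'env'<br />
</pre><br />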
<br />
===Replication Controllers===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/ ReplicationController]'' (rc) is a controller that is part of the Master Node's Controller Manager. It makes sure the specified number of replicas for a Pod is running at any given point in time. If there are more Pods than the desired count, the ReplicationController kills the extra Pods, and, if there are fewer Pods, it creates more Pods to match the desired count. Generally, we do not deploy a Pod independently, as it would not be able to restart itself if something goes wrong. We always use controllers like ReplicationController to create and manage Pods.<br />
<br />
===Replica Sets===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/ ReplicaSet]'' (rs) is the next-generation ReplicationController. ReplicaSets support both equality- and set-based Selectors, whereas ReplicationControllers only support equality-based Selectors. As of January 2018, this is the only difference.<br />
<br />
As an example, say you create a ReplicaSet with "desired replicas = 3" (so that "<code>current==desired</code>"). Any time "<code>current!=desired</code>" (e.g., one of the Pods dies), the ReplicaSet detects that the current state no longer matches the desired state. So, in our given scenario, the ReplicaSet will create one more Pod, thus ensuring that the current state matches the desired state.<br />
<br />
ReplicaSets can be used independently, but they are mostly used by Deployments to orchestrate the Pod creation, deletion, and updates. A Deployment automatically creates the ReplicaSets, and we do not have to worry about managing them.<br />
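<br />
As a sketch of what a ReplicaSet definition might look like (the names and labels are illustrative, and the <code>apps/v1</code> API group assumes a reasonably recent cluster; the 2018-era clusters used later in this article would use <code>extensions/v1beta1</code>), note the set-based <code>matchExpressions</code> Selector, which a ReplicationController cannot express:<br />
<pre><br />
apiVersion: apps/v1<br />
kind: ReplicaSet<br />
metadata:<br />
  name: frontend-rs<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    matchExpressions:                # set-based Selector<br />
    - {key: env, operator: In, values: [dev, qa]}<br />
  template:<br />
    metadata:<br />
      labels:<br />
        env: dev                     # must satisfy the selector above<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
</pre><br />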
<br />
===Deployments===<br />
''[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Deployment]'' objects provide declarative updates to Pods and ReplicaSets. The DeploymentController is part of the Master Node's Controller Manager, and it makes sure that the current state always matches the desired state.<br />
<br />
As an example, let's say we have a Deployment which creates a "ReplicaSet A". ReplicaSet A then creates 3 Pods. In each Pod, one of the containers uses the <code>nginx:1.7.9</code> image.<br />
<br />
Now, in the Deployment, we change the Pod's template and we update the image for the Nginx container from <code>nginx:1.7.9</code> to <code>nginx:1.9.1</code>. As we have modified the Pod's template, a new "ReplicaSet B" gets created. This process is referred to as a "Deployment rollout". (A rollout is only triggered when we update the Pod's template for a deployment. Operations like scaling the deployment do not trigger a rollout.) Once ReplicaSet B is ready, the Deployment starts pointing to it.<br />
<br />
On top of ReplicaSets, Deployments provide features like rollout recording, with which, if something goes wrong, we can roll back to a previously known state.<br />
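<br />
For example (assuming a Deployment named <code>nginx-deployment</code> whose container is named <code>nginx</code>), the whole rollout/rollback cycle can be driven from the CLI:<br />
<pre><br />
# Trigger a rollout by updating the Pod template's image<br />
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
<br />
# Watch the rollout and inspect its recorded history<br />
$ kubectl rollout status deployment/nginx-deployment<br />
$ kubectl rollout history deployment/nginx-deployment<br />
<br />
# Roll back to the previous ReplicaSet if something goes wrong<br />
$ kubectl rollout undo deployment/nginx-deployment<br />
</pre><br />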
<br />
===Namespaces===<br />
If we have numerous users whom we would like to organize into teams/projects, we can partition the Kubernetes cluster into sub-clusters using ''[https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ Namespaces]''. The names of the resources/objects created inside a Namespace are unique, but not across Namespaces.<br />
<br />
To list all the Namespaces, we can run the following command:<br />
$ kubectl get namespaces<br />
NAME STATUS AGE<br />
default Active 2h<br />
kube-public Active 2h<br />
kube-system Active 2h<br />
<br />
Generally, Kubernetes creates two default Namespaces: <code>kube-system</code> and <code>default</code>. The <code>kube-system</code> Namespace contains the objects created by the Kubernetes system. The <code>default</code> Namespace contains the objects which do not belong to any other Namespace. By default, we connect to the <code>default</code> Namespace. <code>kube-public</code> is a special Namespace, which is readable by all users and used for special purposes, like bootstrapping a cluster. <br />
<br />
Using ''[https://kubernetes.io/docs/concepts/policy/resource-quotas/ Resource Quotas]'', we can divide the cluster resources within Namespaces.<br />
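<br />
As a brief sketch (the Namespace name and quota values are arbitrary), creating a Namespace and constraining it with a quota looks like this:<br />
<pre><br />
$ kubectl create namespace demo<br />
$ kubectl create -f - <<EOF<br />
apiVersion: v1<br />
kind: ResourceQuota<br />
metadata:<br />
  name: demo-quota<br />
  namespace: demo<br />
spec:<br />
  hard:<br />
    pods: "10"      # at most 10 Pods may exist in this Namespace<br />
EOF<br />
$ kubectl get resourcequota --namespace=demo<br />
</pre><br />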
<br />
===Component services===<br />
The component services running on a standard master/worker node(s) Kubernetes setup are as follows:<br />
* Kubernetes Master node(s)<br />
*; kube-apiserver : Exposes Kubernetes APIs<br />
*; kube-controller-manager : Runs controllers to handle nodes, endpoints, etc.<br />
*; kube-scheduler : Watches for new pods and assigns them nodes<br />
*; etcd : Distributed key-value store<br />
*; DNS : [optional] DNS for Kubernetes services<br />
* Worker node(s)<br />
*; kubelet : Manages pods on a node, volumes, secrets, creating new containers, health checks, etc.<br />
*; kube-proxy : Maintains network rules, port forwarding, etc.<br />
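<br />
On the Master Node, a quick sanity check of these control-plane services is available through kubectl itself (<code>cs</code> is the short name for <code>componentstatuses</code>):<br />
$ kubectl get componentstatuses<br />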
<br />
==Setup a Kubernetes cluster==<br />
<br />
<div style="margin: 10px; padding: 5px; border: 2px solid red;">'''IMPORTANT''': The following describes how to set up Kubernetes 1.2, which is, as of January 2018, a very old version. I will update this article with how to set up k8s using a much newer version (v1.9) when I have time.<br />
</div><br />
<br />
In this section, I will show you how to set up a Kubernetes cluster with etcd and Docker. The cluster will consist of 1 master node and 3 worker nodes.<br />
<br />
===Setup VMs===<br />
<br />
For this demo, I will be creating 4 VMs via [[Vagrant]] (with VirtualBox).<br />
<br />
* Create Vagrant demo environment:<br />
$ mkdir $HOME/dev/kubernetes && cd $_<br />
<br />
* Create Vagrantfile with the following contents:<br />
<pre><br />
# -*- mode: ruby -*-<br />
# vi: set ft=ruby :<br />
<br />
require 'yaml'<br />
VAGRANTFILE_API_VERSION = "2"<br />
<br />
$common_script = <<COMMON_SCRIPT<br />
# Set verbose<br />
set -v<br />
# Set exit on error<br />
set -e<br />
echo -e "$(date) [INFO] Starting modified Vagrant..."<br />
sudo yum update -y<br />
# Timestamp provision<br />
date > /etc/vagrant_provisioned_at<br />
COMMON_SCRIPT<br />
<br />
unless defined? CONFIG<br />
  configuration_file = File.join(File.dirname(__FILE__), 'vagrant_config.yml')<br />
  CONFIG = YAML.load(File.open(configuration_file, File::RDONLY).read)<br />
end<br />
<br />
CONFIG['box'] = {} unless CONFIG.key?('box')<br />
<br />
def modifyvm_network(node)<br />
  node.vm.provider "virtualbox" do |vbox|<br />
    vbox.customize ["modifyvm", :id, "--nicpromisc1", "allow-all"]<br />
    #vbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]<br />
    vbox.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]<br />
  end<br />
end<br />
<br />
def modifyvm_resources(node, memory, cpus)<br />
  node.vm.provider "virtualbox" do |vbox|<br />
    vbox.customize ["modifyvm", :id, "--memory", memory]<br />
    vbox.customize ["modifyvm", :id, "--cpus", cpus]<br />
  end<br />
end<br />
<br />
## START: Actual Vagrant process<br />
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|<br />
<br />
  config.vm.box = CONFIG['box']['name']<br />
<br />
  # Uncomment the following line if you wish to be able to pass files from<br />
  # your local filesystem directly into the vagrant VM:<br />
  #config.vm.synced_folder "data", "/vagrant"<br />
<br />
  ## VM: k8s master #############################################################<br />
  config.vm.define "master" do |node|<br />
    node.vm.hostname = "k8s.master.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    #node.vm.network "forwarded_port", guest: 80, host: 8080<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['master']<br />
<br />
    # Uncomment the following if you wish to define CPU/memory:<br />
    #node.vm.provider "virtualbox" do |vbox|<br />
    #  vbox.customize ["modifyvm", :id, "--memory", "4096"]<br />
    #  vbox.customize ["modifyvm", :id, "--cpus", "2"]<br />
    #end<br />
    #modifyvm_resources(node, "4096", "2")<br />
  end<br />
  ## VM: k8s minion1 ############################################################<br />
  config.vm.define "minion1" do |node|<br />
    node.vm.hostname = "k8s.minion1.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion1']<br />
  end<br />
  ## VM: k8s minion2 ############################################################<br />
  config.vm.define "minion2" do |node|<br />
    node.vm.hostname = "k8s.minion2.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion2']<br />
  end<br />
  ## VM: k8s minion3 ############################################################<br />
  config.vm.define "minion3" do |node|<br />
    node.vm.hostname = "k8s.minion3.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion3']<br />
  end<br />
  ###############################################################################<br />
<br />
end<br />
</pre><br />
<br />
The above Vagrantfile uses the following configuration file:<br />
$ cat vagrant_config.yml<br />
<pre><br />
---<br />
box:<br />
  name: centos/7<br />
  storage_controller: 'SATA Controller'<br />
debug: false<br />
development: false<br />
network:<br />
  dns1: 8.8.8.8<br />
  dns2: 8.8.4.4<br />
  internal:<br />
    network: 192.168.200.0/24<br />
  external:<br />
    start: 192.168.100.100<br />
    end: 192.168.100.200<br />
    network: 192.168.100.0/24<br />
    bridge: wlan0<br />
    netmask: 255.255.255.0<br />
    broadcast: 192.168.100.255<br />
host_groups:<br />
  master: 192.168.200.100<br />
  minion1: 192.168.200.101<br />
  minion2: 192.168.200.102<br />
  minion3: 192.168.200.103<br />
</pre><br />
<br />
* In the Vagrant Kubernetes directory (i.e., <code>$HOME/dev/kubernetes</code>), run the following command:<br />
$ vagrant up<br />
<br />
===Setup hosts===<br />
''Note: Run the following commands/steps on all hosts (master and minions).''<br />
<br />
* Log into the k8s master host:<br />
$ vagrant ssh master<br />
<br />
* Add all Kubernetes cluster hosts to <code>/etc/hosts</code>:<br />
$ cat << EOF >> /etc/hosts<br />
192.168.200.100 k8s.master.dev<br />
192.168.200.101 k8s.minion1.dev<br />
192.168.200.102 k8s.minion2.dev<br />
192.168.200.103 k8s.minion3.dev<br />
EOF<br />
<br />
* Install, enable, and start NTP:<br />
$ yum install -y ntp<br />
$ systemctl enable ntpd && systemctl start ntpd<br />
$ timedatectl<br />
<br />
* Disable any [[iptables|firewall rules]] (for now; we will add the rules back later):<br />
$ systemctl stop firewalld && systemctl disable firewalld<br />
$ systemctl stop iptables<br />
<br />
* Disable [[SELinux]] (for now; we will turn it on again later):<br />
$ setenforce 0<br />
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/sysconfig/selinux<br />
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config<br />
$ sestatus<br />
<br />
* Add the Docker repo and update yum:<br />
$ cat << EOF > /etc/yum.repos.d/virt7-docker-common-release.repo<br />
[virt7-docker-common-release]<br />
name=virt7-docker-common-release<br />
baseurl=<nowiki>http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/</nowiki><br />
gpgcheck=0<br />
EOF<br />
$ yum update<br />
<br />
* Install Docker, Kubernetes, and etcd:<br />
$ yum install -y --enablerepo=virt7-docker-common-release kubernetes docker etcd<br />
<br />
===Install and configure master controller===<br />
''Note: Run the following commands on only the master host.''<br />
<br />
* Edit <code>/etc/kubernetes/config</code> and add (or make changes to) the following lines:<br />
KUBE_MASTER="--master=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://k8s.master.dev:2379</nowiki>"<br />
<br />
* Edit <code>/etc/etcd/etcd.conf</code> and add (or make changes to) the following lines:<br />
[member]<br />
ETCD_LISTEN_CLIENT_URLS="<nowiki>http://0.0.0.0:2379</nowiki>"<br />
[cluster]<br />
ETCD_ADVERTISE_CLIENT_URLS="<nowiki>http://0.0.0.0:2379</nowiki>"<br />
<br />
* Edit <code>/etc/kubernetes/apiserver</code> and add (or make changes to) the following lines:<br />
<pre><br />
# The address on the local server to listen to.<br />
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"<br />
KUBE_API_ADDRESS="--address=0.0.0.0"<br />
<br />
# The port on the local server to listen on.<br />
KUBE_API_PORT="--port=8080"<br />
<br />
# Port minions listen on<br />
KUBELET_PORT="--kubelet-port=10250"<br />
<br />
# Comma separated list of nodes in the etcd cluster<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://127.0.0.1:2379</nowiki>"<br />
<br />
# Address range to use for services<br />
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"<br />
<br />
# default admission control policies<br />
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"<br />
<br />
# Add your own!<br />
KUBE_API_ARGS=""<br />
</pre><br />
<br />
* Enable and start the following etcd and Kubernetes services:<br />
<br />
$ for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do<br />
systemctl restart $SERVICE<br />
systemctl enable $SERVICE<br />
systemctl status $SERVICE <br />
done<br />
<br />
* Check on the status of the above services (the following command should report 4 running services):<br />
$ systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler | grep "(running)" | wc -l # => 4<br />
<br />
* Check on the status of the Kubernetes API server:<br />
$ kubectl cluster-info<br />
Kubernetes master is running at <nowiki>http://localhost:8080</nowiki><br />
$ curl <nowiki>http://localhost:8080/version</nowiki><br />
#~OR~<br />
$ curl <nowiki>http://k8s.master.dev:8080/version</nowiki><br />
<pre><br />
{<br />
  "major": "1",<br />
  "minor": "2",<br />
  "gitVersion": "v1.2.0",<br />
  "gitCommit": "ec7364b6e3b155e78086018aa644057edbe196e5",<br />
  "gitTreeState": "clean"<br />
}<br />
</pre><br />
<br />
* Get a list of Kubernetes API paths:<br />
$ curl <nowiki>http://k8s.master.dev:8080/paths</nowiki><br />
<pre><br />
{<br />
  "paths": [<br />
    "/api",<br />
    "/api/v1",<br />
    "/apis",<br />
    "/apis/autoscaling",<br />
    "/apis/autoscaling/v1",<br />
    "/apis/batch",<br />
    "/apis/batch/v1",<br />
    "/apis/extensions",<br />
    "/apis/extensions/v1beta1",<br />
    "/healthz",<br />
    "/healthz/ping",<br />
    "/logs/",<br />
    "/metrics",<br />
    "/resetMetrics",<br />
    "/swagger-ui/",<br />
    "/swaggerapi/",<br />
    "/ui/",<br />
    "/version"<br />
  ]<br />
}<br />
</pre><br />
<br />
* List all available paths (key-value stores) known to etcd:<br />
$ etcdctl ls / --recursive<br />
<br />
The master controller in a Kubernetes cluster must have the following services running to function as the master host in the cluster:<br />
* ntpd<br />
* etcd<br />
* kube-controller-manager<br />
* kube-apiserver<br />
* kube-scheduler<br />
<br />
Note: The Docker daemon should not be running on the master host.<br />
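<br />
A quick way to verify this (standard systemd tooling) is:<br />
$ systemctl is-active docker # expect "inactive" (or "unknown" if Docker was never started)<br />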
<br />
===Install and configure the minions===<br />
''Note: Run the following commands/steps on all minion hosts.''<br />
<br />
* Log into the k8s minion hosts:<br />
$ vagrant ssh minion1 # do the same for minion2 and minion3<br />
<br />
* Edit <code>/etc/kubernetes/config</code> and add (or make changes to) the following lines:<br />
KUBE_MASTER="--master=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://k8s.master.dev:2379</nowiki>"<br />
<br />
* Edit <code>/etc/kubernetes/kubelet</code> and add (or make changes to) the following lines:<br />
<pre><br />
###<br />
# kubernetes kubelet (minion) config<br />
<br />
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)<br />
KUBELET_ADDRESS="--address=0.0.0.0"<br />
<br />
# The port for the info server to serve on<br />
KUBELET_PORT="--port=10250"<br />
<br />
# You may leave this blank to use the actual hostname<br />
KUBELET_HOSTNAME="--hostname-override=k8s.minion1.dev" # ***CHANGE TO CORRECT MINION HOSTNAME***<br />
<br />
# location of the api-server<br />
KUBELET_API_SERVER="--api-servers=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
<br />
# pod infrastructure container<br />
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"<br />
<br />
# Add your own!<br />
KUBELET_ARGS=""<br />
</pre><br />
<br />
* Enable and start the following services:<br />
$ for SERVICE in kube-proxy kubelet docker; do<br />
systemctl restart $SERVICE<br />
systemctl enable $SERVICE<br />
systemctl status $SERVICE<br />
done<br />
<br />
* Test that Docker is running and can start containers:<br />
$ docker info<br />
$ docker pull hello-world<br />
$ docker run hello-world<br />
<br />
Each minion in a Kubernetes cluster must have the following services running to function as a member of the cluster (i.e., a "Ready" node):<br />
* ntpd<br />
* kubelet<br />
* kube-proxy<br />
* docker<br />
<br />
===Kubectl: Exploring our environment===<br />
''Note: Run all of the following commands on the master host.''<br />
<br />
* Get a list of nodes with <code>kubectl</code>:<br />
$ kubectl get nodes<br />
<pre><br />
NAME STATUS AGE<br />
k8s.minion1.dev Ready 20m<br />
k8s.minion2.dev Ready 12m<br />
k8s.minion3.dev Ready 12m<br />
</pre><br />
<br />
* Describe nodes with <code>kubectl</code>:<br />
<br />
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'<br />
$ kubectl get nodes -o jsonpath='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' | tr ';' "\n"<br />
<pre><br />
k8s.minion1.dev:OutOfDisk=False<br />
Ready=True<br />
k8s.minion2.dev:OutOfDisk=False<br />
Ready=True<br />
k8s.minion3.dev:OutOfDisk=False<br />
Ready=True<br />
</pre><br />
<br />
* Get the man page for <code>kubectl</code>:<br />
$ man kubectl-get<br />
<br />
==Working with our Kubernetes cluster==<br />
<br />
''Note: The following section will be working from within the Kubernetes cluster we created above.''<br />
<br />
===Create and deploy pod definitions===<br />
<br />
* Turn off minions 2 and 3 (matching the NotReady nodes in the output below):<br />
minion{2,3}$ systemctl stop kubelet kube-proxy<br />
<br />
master$ kubectl get nodes<br />
<pre><br />
NAME STATUS AGE<br />
k8s.minion1.dev Ready 1h<br />
k8s.minion2.dev NotReady 37m<br />
k8s.minion3.dev NotReady 39m<br />
</pre><br />
<br />
* Check for any k8s Pods (there should be none):<br />
master$ kubectl get pods<br />
<br />
* Create a builds directory for our Pods:<br />
master$ mkdir builds && cd $_<br />
<br />
* Create a Pod running Nginx inside a Docker container:<br />
<pre><br />
master$ kubectl create -f - <<EOF<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx:1.7.9<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
* Check on Pod creation status:<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx 0/1 ContainerCreating 0 2s<br />
</pre><br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx 1/1 Running 0 3m<br />
</pre><br />
<br />
minion1$ docker ps<br />
<pre><br />
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES<br />
a718c6c0355d nginx:1.7.9 "nginx -g 'daemon off" 3 minutes ago Up 3 minutes k8s_nginx.4580025_nginx_default_699e...<br />
</pre><br />
<br />
master$ kubectl describe pod nginx<br />
<br />
master$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1<br />
busybox$ wget -qO- 172.17.0.2<br />
master$ kubectl delete pod busybox<br />
master$ kubectl delete pod nginx<br />
<br />
* Port forwarding:<br />
master$ kubectl create -f nginx.yml # see above for YAML<br />
master$ kubectl port-forward nginx :80 &<br />
I1020 23:12:29.478742 23394 portforward.go:213] Forwarding from [::1]:40065 -> 80<br />
master$ curl -I localhost:40065<br />
<br />
===Tags, labels, and selectors===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-pod-label.yml<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx<br />
  labels:<br />
    app: nginx<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx:1.7.9<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create -f nginx-pod-label.yml<br />
master$ kubectl get pods -l app=nginx<br />
master$ kubectl describe pods -l app=nginx<br />
<br />
* Add labels or overwrite existing ones:<br />
master$ kubectl label pods nginx new-label=mynginx<br />
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'<br />
new-label=mynginx<br />
master$ kubectl label pods nginx new-label=foo<br />
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'<br />
new-label=foo<br />
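<br />
Two more label operations worth knowing (both are standard kubectl syntax): listing labels alongside pods, and removing a label with a trailing dash:<br />
master$ kubectl get pods --show-labels<br />
master$ kubectl label pods nginx new-label- # the trailing "-" removes the label<br />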
<br />
===Deployments===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-dev.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-dev<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-dev<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-dev<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-prod.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-prod<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-prod<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-prod<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create --validate -f nginx-deployment-dev.yml<br />
master$ kubectl create --validate -f nginx-deployment-prod.yml<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deployment-dev-104434401-jiiic 1/1 Running 0 5m<br />
nginx-deployment-prod-3051195443-hj9b1 1/1 Running 0 12m<br />
</pre><br />
<br />
master$ kubectl describe deployments -l app=nginx-deployment-dev<br />
<pre><br />
Name: nginx-deployment-dev<br />
Namespace: default<br />
CreationTimestamp: Thu, 20 Oct 2016 23:48:46 +0000<br />
Labels: app=nginx-deployment-dev<br />
Selector: app=nginx-deployment-dev<br />
Replicas: 1 updated | 1 total | 1 available | 0 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 1 max unavailable, 1 max surge<br />
OldReplicaSets: <none><br />
NewReplicaSet: nginx-deployment-dev-2568522567 (1/1 replicas created)<br />
...<br />
</pre><br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment-prod 1 1 1 1 44s<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-dev-update.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-dev<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-dev<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-dev<br />
        image: nginx:1.8 # ***CHANGED***<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
master$ kubectl apply -f nginx-deployment-dev-update.yml<br />
master$ kubectl get pods -l app=nginx-deployment-dev<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deployment-dev-104434401-jiiic 0/1 ContainerCreating 0 27s<br />
</pre><br />
master$ kubectl get pods -l app=nginx-deployment-dev<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deployment-dev-104434401-jiiic 1/1 Running 0 6m<br />
</pre><br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment nginx-deployment-dev<br />
master$ kubectl delete deployment nginx-deployment-prod<br />
<br />
===Multi-Pod (container) replication controller===<br />
<br />
* Start the other two nodes (the ones we previously stopped):<br />
minion2$ systemctl start kubelet kube-proxy<br />
minion3$ systemctl start kubelet kube-proxy<br />
master$ kubectl get nodes<br />
<pre><br />
NAME STATUS AGE<br />
k8s.minion1.dev Ready 2h<br />
k8s.minion2.dev Ready 2h<br />
k8s.minion3.dev Ready 2h<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-multi-node.yml<br />
---<br />
apiVersion: v1<br />
kind: ReplicationController<br />
metadata:<br />
  name: nginx-www<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    app: nginx<br />
  template:<br />
    metadata:<br />
      name: nginx<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create -f nginx-multi-node.yml<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-2evxu 0/1 ContainerCreating 0 10s<br />
nginx-www-416ct 0/1 ContainerCreating 0 10s<br />
nginx-www-ax41w 0/1 ContainerCreating 0 10s<br />
</pre><br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-2evxu 1/1 Running 0 1m<br />
nginx-www-416ct 1/1 Running 0 1m<br />
nginx-www-ax41w 1/1 Running 0 1m<br />
</pre><br />
<br />
master$ kubectl describe pods | awk '/^Node/{print $2}'<br />
<pre><br />
k8s.minion2.dev/192.168.200.102<br />
k8s.minion1.dev/192.168.200.101<br />
k8s.minion3.dev/192.168.200.103<br />
</pre><br />
<br />
minion1$ docker ps # 1 nginx container running<br />
minion2$ docker ps # 1 nginx container running<br />
minion3$ docker ps # 1 nginx container running<br />
minion3$ docker ps --format "<nowiki>{{.Image}}</nowiki>"<br />
<pre><br />
nginx<br />
gcr.io/google_containers/pause:2.0<br />
</pre><br />
<br />
master$ kubectl describe replicationcontroller<br />
<pre><br />
Name: nginx-www<br />
Namespace: default<br />
Image(s): nginx<br />
Selector: app=nginx<br />
Labels: app=nginx<br />
Replicas: 3 current / 3 desired<br />
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed<br />
...<br />
</pre><br />
<br />
* Attempt to delete one of the three pods:<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-2evxu 1/1 Running 0 11m<br />
nginx-www-416ct 1/1 Running 0 11m<br />
nginx-www-ax41w 1/1 Running 0 11m<br />
</pre><br />
master$ kubectl delete pod nginx-www-2evxu<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-3cck4 1/1 Running 0 12s<br />
nginx-www-416ct 1/1 Running 0 11m<br />
nginx-www-ax41w 1/1 Running 0 11m<br />
</pre><br />
<br />
A new pod (<code>nginx-www-3cck4</code>) automatically started up. This is because the expected state, as defined in our YAML file, is for there to be 3 pods running at all times. Thus, if one or more of the pods were to go down, a new pod (or pods) will automatically start up to bring the state back to the expected state.<br />
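<br />
Since the ReplicationController is what enforces this desired count, scaling is just a matter of changing that count (standard <code>kubectl scale</code> syntax; the target is the RC created above):<br />
master$ kubectl scale replicationcontroller nginx-www --replicas=5<br />
master$ kubectl get pods # should now show 5 nginx-www pods<br />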
<br />
* To force-delete all pods:<br />
master$ kubectl delete replicationcontroller nginx-www<br />
master$ kubectl get pods # nothing<br />
<br />
===Create and deploy service definitions===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-service.yml<br />
---<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: nginx-service<br />
spec:<br />
  ports:<br />
  - port: 8000<br />
    targetPort: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: nginx<br />
EOF<br />
</pre><br />
<br />
master$ kubectl get services<br />
<pre><br />
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes 10.254.0.1 <none> 443/TCP 3h<br />
</pre><br />
master$ kubectl create -f nginx-service.yml<br />
<br />
master$ kubectl get services<br />
<pre><br />
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes 10.254.0.1 <none> 443/TCP 3h<br />
nginx-service 10.254.110.127 <none> 8000/TCP 10s<br />
</pre><br />
<br />
master$ kubectl run busybox --generator=run-pod/v1 --image=busybox --restart=Never --tty -i<br />
busybox$ wget -qO- 10.254.110.127:8000 # works<br />
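<br />
The Service balances across whichever Pods match its Selector; the concrete Pod IP:port pairs behind it (its Endpoints) can be listed directly:<br />
master$ kubectl get endpoints nginx-service<br />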
<br />
* Cleanup<br />
master$ kubectl delete pod busybox<br />
master$ kubectl delete service nginx-service<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-jh2e9 1/1 Running 0 13m<br />
nginx-www-jir2g 1/1 Running 0 13m<br />
nginx-www-w91uw 1/1 Running 0 13m<br />
</pre><br />
master$ kubectl delete replicationcontroller nginx-www<br />
master$ kubectl get pods # nothing<br />
<br />
===Creating temporary Pods at the CLI===<br />
<br />
* Make sure we have no Pods running:<br />
master$ kubectl get pods<br />
<br />
* Create temporary deployment pod:<br />
master$ kubectl run mysample --image=foobar/apache<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
mysample-1424711890-fhtxb 0/1 ContainerCreating 0 1s<br />
</pre><br />
master$ kubectl get deployment <br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
mysample 1 1 1 0 7s<br />
</pre><br />
<br />
* Create a temporary deployment pod (where we know it will fail):<br />
master$ kubectl run myexample --image=christophchamp/ubuntu_sysadmin<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myexample-3534121234-mpr35 0/1 CrashLoopBackOff 12 39m k8s.minion3.dev<br />
mysample-2812764540-74c5h 1/1 Running 0 41m k8s.minion2.dev<br />
</pre><br />
<br />
* Check on why the "myexample" pod is in status "CrashLoopBackOff":<br />
master$ kubectl describe pods/myexample-3534121234-mpr35<br />
master$ kubectl describe deployments/mysample<br />
master$ kubectl describe pods/mysample-2812764540-74c5h | awk '/^Node/{print $2}'<br />
k8s.minion2.dev/192.168.200.102<br />
<br />
master$ kubectl delete deployment mysample<br />
<br />
* Run multiple replicas of the same pod:<br />
master$ kubectl run myreplicas --image=latest123/apache --replicas=2 --labels=app=myapache,version=1.0.0<br />
master$ kubectl describe deployment myreplicas <br />
<pre><br />
Name: myreplicas<br />
Namespace: default<br />
CreationTimestamp: Fri, 21 Oct 2016 19:10:30 +0000<br />
Labels: app=myapache,version=1.0.0<br />
Selector: app=myapache,version=1.0.0<br />
Replicas: 2 updated | 2 total | 1 available | 1 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 1 max unavailable, 1 max surge<br />
OldReplicaSets: <none><br />
NewReplicaSet: myreplicas-2209834598 (2/2 replicas created)<br />
...<br />
</pre><br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myreplicas-2209834598-5iyer 1/1 Running 0 1m k8s.minion1.dev<br />
myreplicas-2209834598-cslst 1/1 Running 0 1m k8s.minion2.dev<br />
</pre><br />
<br />
master$ kubectl describe pods -l version=1.0.0<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment myreplicas<br />
<br />
===Interacting with Pod containers===<br />
<br />
* Create example Apache pod definition file:<br />
<pre><br />
master$ cat << EOF > apache.yml<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: apache<br />
spec:<br />
  containers:<br />
  - name: apache<br />
    image: latest123/apache<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
master$ kubectl create -f apache.yml<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
apache 1/1 Running 0 12m k8s.minion3.dev<br />
</pre><br />
<br />
* Test pod and make some basic configuration changes:<br />
master$ kubectl exec apache date<br />
master$ kubectl exec apache -i -t -- cat /var/www/html/index.html # default apache HTML<br />
master$ kubectl exec apache -i -t -- /bin/bash<br />
container$ export TERM=xterm<br />
container$ echo "xtof test" > /var/www/html/index.html<br />
minion3$ curl 172.17.0.2<br />
xtof test<br />
container$ exit<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
apache 1/1 Running 0 12m k8s.minion3.dev<br />
</pre><br />
Pod/container is still running even after we exited (as expected).<br />
<br />
* Cleanup:<br />
master$ kubectl delete pod apache<br />
<br />
===Logs===<br />
<br />
* Start our example Apache pod to use for checking Kubernetes logging features:<br />
master$ kubectl create -f apache.yml <br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
apache 1/1 Running 0 9s<br />
</pre><br />
master$ kubectl logs apache<br />
<pre><br />
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message<br />
</pre><br />
master$ kubectl logs --tail=10 apache<br />
master$ kubectl logs --since=24h apache # or 10s, 2m, etc.<br />
master$ kubectl logs -f apache # follow the logs<br />
master$ kubectl logs -f -c apache apache # where -c specifies the container name<br />
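master$ kubectl logs --previous apache # standard kubectl flag: logs from the prior container instance, if it restarted<br />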
<br />
* Cleanup:<br />
master$ kubectl delete pod apache<br />
<br />
===Autoscaling and scaling Pods===<br />
<br />
master$ kubectl run myautoscale --image=latest123/apache --port=80 --labels=app=myautoscale<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 47s k8s.minion3.dev<br />
</pre><br />
<br />
* Create an autoscale definition:<br />
master$ kubectl autoscale deployment myautoscale --min=2 --max=6 --cpu-percent=80<br />
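<br />
The HorizontalPodAutoscaler object created by the <code>autoscale</code> command can be inspected directly (<code>hpa</code> is its short resource name in kubectl):<br />
master$ kubectl get hpa<br />
master$ kubectl describe hpa myautoscale<br />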
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myautoscale 2 2 2 2 4m<br />
</pre><br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 3m k8s.minion3.dev<br />
myautoscale-3243017378-r2f3d 1/1 Running 0 4s k8s.minion2.dev<br />
</pre><br />
<br />
* Scale up an already autoscaled deployment:<br />
master$ kubectl scale --current-replicas=2 --replicas=4 deployment/myautoscale<br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myautoscale 4 4 4 4 8m<br />
</pre><br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-2rxhp 1/1 Running 0 8s k8s.minion1.dev<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 7m k8s.minion3.dev<br />
myautoscale-3243017378-ozxs8 1/1 Running 0 8s k8s.minion3.dev<br />
myautoscale-3243017378-r2f3d 1/1 Running 0 4m k8s.minion2.dev<br />
</pre><br />
<br />
* Scale down:<br />
master$ kubectl scale --current-replicas=4 --replicas=2 deployment/myautoscale<br />
<br />
Note: You cannot scale down below the minimum number of pods/containers specified in the original autoscale deployment (i.e., <code>--min=2</code> in our example); the autoscaler will bring the replica count back up to that minimum.<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment myautoscale<br />
<br />
===Failure and recovery===<br />
<br />
master$ kubectl run myrecovery --image=latest123/apache --port=80 --replicas=2 --labels=app=myrecovery<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myrecovery 2 2 2 2 6s<br />
</pre><br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-5xu8f 1/1 Running 0 12s k8s.minion1.dev<br />
myrecovery-563119102-zw6wp 1/1 Running 0 12s k8s.minion2.dev<br />
</pre><br />
<br />
* Now stop Kubernetes- and Docker-related services on one of the minions/nodes (so we have a total of 2 nodes online):<br />
minion1$ systemctl stop docker kubelet kube-proxy<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-qyi04 1/1 Running 0 7m k8s.minion3.dev<br />
myrecovery-563119102-zw6wp 1/1 Running 0 14m k8s.minion2.dev<br />
</pre><br />
The Pod from minion1 was replaced by a new Pod (note the new name) on minion3.<br />
<br />
* Now stop Kubernetes- and Docker-related services on one of the remaining online minions/nodes (so we have a total of 1 node online):<br />
minion2$ systemctl stop docker kubelet kube-proxy<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-b5tim 1/1 Running 0 2m k8s.minion3.dev<br />
myrecovery-563119102-qyi04 1/1 Running 0 17m k8s.minion3.dev<br />
</pre><br />
Both Pods are now running on minion3, the only available node.<br />
<br />
* Start up Kubernetes- and Docker-related services again on minion1 and delete one of the Pods:<br />
minion1$ systemctl start docker kubelet kube-proxy<br />
master$ kubectl delete pod myrecovery-563119102-b5tim<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-8unzg 1/1 Running 0 1m k8s.minion1.dev<br />
myrecovery-563119102-qyi04 1/1 Running 0 20m k8s.minion3.dev<br />
</pre><br />
Pods are now running on separate nodes.<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployments/myrecovery<br />
<br />
==Minikube==<br />
[https://github.com/kubernetes/minikube Minikube] is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.<br />
<br />
* Install Minikube:<br />
$ curl -Lo minikube <nowiki>https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64</nowiki> \<br />
&& chmod +x minikube && sudo mv minikube /usr/local/bin/<br />
<br />
* Install kubectl<br />
$ curl -Lo kubectl <nowiki>https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl</nowiki> \<br />
&& chmod +x kubectl && sudo mv kubectl /usr/local/bin/<br />
<br />
* Test install<br />
$ minikube start<br />
#~OR~<br />
$ minikube start --memory 4096 # give it 4GB of RAM<br />
$ minikube status<br />
$ minikube dashboard<br />
$ kubectl config view<br />
$ kubectl cluster-info<br />
<br />
NOTE: If you have an old version of minikube installed, you should probably do the following before upgrading to a much newer version:<br />
$ minikube delete --all --purge<br />
<br />
Get the details on the CLI options for kubectl [https://kubernetes.io/docs/reference/kubectl/overview/ here].<br />
<br />
Using the <code>kubectl proxy</code> command, kubectl authenticates with the API Server on the Master Node and makes the dashboard available on <nowiki>http://localhost:8001/ui</nowiki>:<br />
<br />
$ kubectl proxy<br />
Starting to serve on 127.0.0.1:8001<br />
<br />
After running the above command, we can access the dashboard at <code><nowiki>http://127.0.0.1:8001/ui</nowiki></code>.<br />
<br />
Once the kubectl proxy is configured, we can send requests to localhost on the proxy port:<br />
<br />
$ curl <nowiki>http://localhost:8001/</nowiki><br />
$ curl <nowiki>http://localhost:8001/version</nowiki><br />
<pre><br />
{<br />
  "major": "1",<br />
  "minor": "8",<br />
  "gitVersion": "v1.8.0",<br />
  "gitCommit": "0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4",<br />
  "gitTreeState": "clean",<br />
  "buildDate": "2017-11-29T22:43:34Z",<br />
  "goVersion": "go1.9.1",<br />
  "compiler": "gc",<br />
  "platform": "linux/amd64"<br />
}<br />
</pre><br />
<br />
Without kubectl proxy configured, we can get the Bearer Token using kubectl, and then send it with the API request. A Bearer Token is an access token which is generated by the authentication server (the API server on the Master Node) and given back to the client. Using that token, the client can connect back to the Kubernetes API server without providing further authentication details, and then, access resources.<br />
<br />
* Get the k8s token:<br />
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | awk '/^default/{print $1}') | awk '/^token/{print $2}')<br />
<br />
* Get the k8s API server endpoint:<br />
$ APISERVER=$(kubectl config view | awk '/https/{print $2}')<br />
<br />
* Access the API Server:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}<br />
<br />
===Using Minikube as a local Docker registry===<br />
<br />
Sometimes it is useful to have a local Docker registry for Kubernetes to pull images from. As the Minikube [https://github.com/kubernetes/minikube/blob/0c616a6b42b28a1aab8397f5a9061f8ebbd9f3d9/README.md#reusing-the-docker-daemon README] describes, you can reuse the Docker daemon running inside Minikube with <code>eval $(minikube docker-env)</code>, so that images are built on (and pulled from) the VM's own Docker daemon.<br />
<br />
To use an image without uploading it to an external registry (e.g., Docker Hub), you can follow these steps (put together in the sketch below):<br />
* Set the environment variables with <code>eval $(minikube docker-env)</code><br />
* Build the image with the Docker daemon of Minikube (e.g., <code>docker build -t my-image .</code>)<br />
* Set the image in the pod spec like the build tag (e.g., <code>my-image</code>)<br />
* Set the <code>imagePullPolicy</code> to <code>Never</code>, otherwise Kubernetes will try to download the image.<br />
<br />
Important note: You have to run <code>eval $(minikube docker-env)</code> on each terminal you want to use since it only sets the environment variables for the current shell session.<br />
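<br />
Putting those steps together (the image and deployment names are illustrative; <code>--image-pull-policy=Never</code> is the command-line equivalent of the <code>imagePullPolicy</code> setting above):<br />
<pre><br />
# Point this shell's docker CLI at Minikube's Docker daemon<br />
$ eval $(minikube docker-env)<br />
<br />
# Build the image inside the Minikube VM<br />
$ docker build -t my-image .<br />
<br />
# Run it without ever contacting an external registry<br />
$ kubectl run my-sample --image=my-image --image-pull-policy=Never<br />
</pre><br />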
<br />
===Working with our Minikube-based Kubernetes cluster===<br />
<br />
;Kubernetes Object Model<br />
<br />
Kubernetes has a very rich object model, with which it represents different persistent entities in the Kubernetes cluster. Those entities describe:<br />
<br />
* What containerized applications we are running and on which node<br />
* Application resource consumption<br />
* Different policies attached to applications, like restart/upgrade policies, fault tolerance, etc.<br />
<br />
With each object, we declare our intent or desired state using the '''spec''' field. The Kubernetes system manages the '''status''' field for objects, in which it records the actual state of the object. At any given point in time, the Kubernetes Control Plane tries to match the object's actual state to the object's desired state.<br />
<br />
Examples of Kubernetes objects are Pods, Deployments, ReplicaSets, etc.<br />
<br />
To create an object, we need to provide the '''spec''' field to the Kubernetes API Server. The '''spec''' field describes the desired state, along with some basic information, such as the name. The API request to create the object must have the '''spec''' field, as well as other details, in JSON format. Most often, we provide an object's definition in a YAML file, which kubectl converts into a JSON payload and sends to the API Server.<br />
<br />
Below is an example of a ''Deployment'' object:<br />
<pre><br />
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment<br />
  labels:<br />
    app: nginx<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />
<br />
With the '''apiVersion''' field in the example above, we mention the API endpoint on the API Server which we want to connect to. Note that you can see what API version to use with the following call to the API server:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}/apis/apps<br />
Use the '''preferredVersion''' for most cases.<br />
<br />
With the '''kind''' field, we mention the object type &mdash; in our case, we have '''Deployment'''. With the '''metadata''' field, we attach the basic information to objects, like the name. Notice that in the above we have two '''spec''' fields ('''spec''' and '''spec.template.spec'''). With '''spec''', we define the desired state of the deployment. In our example, we want to make sure that, at any point in time, at least 3 ''Pods'' are running, which are created using the Pod template defined in '''spec.template'''. In '''spec.template.spec''', we define the desired state of the Pod (here, our Pod would be created using nginx:1.7.9).<br />
<br />
Once the object is created, the Kubernetes system attaches the '''status''' field to the object.<br />
<br />
;Connecting users to Pods<br />
<br />
To access the application, a user/client needs to connect to the Pods. As Pods are ephemeral in nature, resources like IP addresses allocated to it cannot be static. Pods could die abruptly or be rescheduled based on existing requirements.<br />
<br />
As an example, consider a scenario in which a user/client is connecting to a Pod using its IP address. Unexpectedly, the Pod to which the user/client is connected dies and a new Pod is created by the controller. The new Pod will have a new IP address, which will not be known automatically to the user/client of the earlier Pod. To overcome this situation, Kubernetes provides a higher-level abstraction called ''[https://kubernetes.io/docs/concepts/services-networking/service/ Service]'', which logically groups Pods and a policy to access them. This grouping is achieved via Labels and Selectors (see above).<br />
<br />
So, for our example, we would use Selectors (e.g., "<code>app==frontend</code>" and "<code>app==db</code>") to group our Pods into two logical groups. We can assign a name to the logical grouping, referred to as a "service name". In our example, we have created two Services, <code>frontend-svc</code> and <code>db-svc</code>, and they have the "<code>app==frontend</code>" and the "<code>app==db</code>" Selectors, respectively.<br />
<br />
The following is an example of a Service object:<br />
<pre><br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: frontend-svc<br />
spec:<br />
  selector:<br />
    app: frontend<br />
  ports:<br />
  - protocol: TCP<br />
    port: 80<br />
    targetPort: 5000<br />
</pre><br />
<br />
in which we are creating a <code>frontend-svc</code> Service by selecting all the Pods that have the Label "<code>app</code>" equal to "<code>frontend</code>". By default, each Service also gets an IP address, which is routable only inside the cluster. In our case, we have 172.17.0.4 and 172.17.0.5 IP addresses for our <code>frontend-svc</code> and <code>db-svc</code> Services, respectively. The IP address attached to each Service is also known as the ClusterIP for that Service.<br />
<br />
 +------------------------------------+<br />
 | select: app==frontend              |         container (app:frontend; 10.0.1.3)<br />
 | service=frontend-svc (172.17.0.4)  |------>  container (app:frontend; 10.0.1.4)<br />
 +------------------------------------+         container (app:frontend; 10.0.1.5)<br />
                   ^<br />
                  /<br />
                 /<br />
         user/client<br />
                 \<br />
                  \<br />
                   v<br />
 +------------------------------------+<br />
 | select: app==db                    |------>  container (app:db; 10.0.1.10)<br />
 | service=db-svc (172.17.0.5)        |<br />
 +------------------------------------+<br />
<br />
The user/client now connects to a Service via ''its'' IP address, which forwards the traffic to one of the Pods attached to it. A Service does the load balancing while selecting the Pods for forwarding the data/traffic.<br />
<br />
While forwarding the traffic from the Service, we can select the target port on the Pod. In our example, for <code>frontend-svc</code>, we will receive requests from the user/client on port 80. We will then forward these requests to one of the attached Pods on port 5000. If the target port is not defined explicitly, then traffic will be forwarded to Pods on the port on which the Service receives traffic.<br />
<br />
A tuple of a Pod's IP address and the <code>targetPort</code> is referred to as a ''Service Endpoint''. In our case, <code>frontend-svc</code> has 3 Endpoints: <code>10.0.1.3:5000</code>, <code>10.0.1.4:5000</code>, and <code>10.0.1.5:5000</code>.<br />
<br />
===kube-proxy===<br />
All of the Worker Nodes run a daemon called kube-proxy, which watches the API Server on the Master Node for the addition and removal of Services and endpoints. For each new Service, on each node, kube-proxy configures the IPtables rules to capture the traffic for its ClusterIP and forwards it to one of the endpoints. When the Service is removed, kube-proxy removes the IPtables rules on all nodes as well.<br />
<br />
===Service discovery===<br />
As Services are the primary mode of communication in Kubernetes, we need a way to discover them at runtime. Kubernetes supports two methods of discovering a Service:<br />
<br />
;Environment Variables : As soon as the Pod starts on any Worker Node, the kubelet daemon running on that node adds a set of environment variables in the Pod for all active Services. For example, if we have an active Service called <code>redis-master</code>, which exposes port 6379, and its ClusterIP is 172.17.0.6, then, on a newly created Pod, we can see the following environment variables:<br />
<br />
REDIS_MASTER_SERVICE_HOST=172.17.0.6<br />
REDIS_MASTER_SERVICE_PORT=6379<br />
REDIS_MASTER_PORT=tcp://172.17.0.6:6379<br />
REDIS_MASTER_PORT_6379_TCP=tcp://172.17.0.6:6379<br />
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp<br />
REDIS_MASTER_PORT_6379_TCP_PORT=6379<br />
REDIS_MASTER_PORT_6379_TCP_ADDR=172.17.0.6<br />
<br />
With this solution, we need to be careful while ordering our Services, as the Pods will not have the environment variables set for Services which are created after the Pods are created.<br />
<br />
;DNS : Kubernetes has an add-on for DNS, which creates a DNS record for each Service with a format like <code>my-svc.my-namespace.svc.cluster.local</code>. Services within the same Namespace can reach other Services with just their names. For example, if we add a Service <code>redis-master</code> in the <code>my-ns</code> Namespace, then all the Pods in the same Namespace can reach the redis Service just by using its name, <code>redis-master</code>. Pods from other Namespaces can reach the Service by adding the respective Namespace as a suffix, like <code>redis-master.my-ns</code>.<br />
: This is the most common and highly recommended solution. For example, in the previous section's diagram, we have seen that an internal DNS is configured, which maps our Services <code>frontend-svc</code> and <code>db-svc</code> to 172.17.0.4 and 172.17.0.5, respectively.<br />
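<br />
For example (assuming a busybox Pod is already running in the same Namespace as the Service), the DNS records can be checked directly from inside the cluster:<br />
$ kubectl exec -it busybox -- nslookup redis-master # short name, resolvable within the same Namespace<br />
$ kubectl exec -it busybox -- nslookup redis-master.my-ns # Namespace-qualified name, resolvable cluster-wide<br />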
<br />
===Service Type===<br />
While defining a Service, we can also choose its access scope. We can decide whether the Service:<br />
<br />
* is only accessible within the cluster;<br />
* is accessible from within the cluster and the external world; or<br />
* maps to an external entity which resides outside the cluster.<br />
<br />
Access scope is decided by ''ServiceType'', which can be mentioned when creating the Service.<br />
<br />
;ClusterIP : (the default ''ServiceType''.) A Service gets its Virtual IP address using the ClusterIP. That IP address is used for communicating with the Service and is accessible only within the cluster. <br />
<br />
;NodePort : With this ''ServiceType'', in addition to creating a ClusterIP, a port from the range '''30000-32767''' is mapped to the respective service from all the Worker Nodes. For example, if the mapped NodePort is 32233 for the service <code>frontend-svc</code>, then, if we connect to any Worker Node on port 32233, the node would redirect all the traffic to the assigned ClusterIP (172.17.0.4).<br />
: By default, while exposing a NodePort, a random port is automatically selected by the Kubernetes Master from the port range '''30000-32767'''. If we do not want a dynamically assigned port, we can specify a port number from that range while creating the Service.<br />
: The NodePort ServiceType is useful when we want to make our services accessible from the external world. The end-user connects to the Worker Nodes on the specified port, which forwards the traffic to the applications running inside the cluster. To access the application from the external world, administrators can configure a reverse proxy outside the Kubernetes cluster and map the specific endpoint to the respective port on the Worker Nodes.<br />
<br />
;LoadBalancer: With this ''ServiceType'', we have the following:<br />
:* NodePort and ClusterIP Services are automatically created, and the external load balancer will route to them;<br />
:* The Services are exposed at a static port on each Worker Node; and<br />
:* The Service is exposed externally using the underlying Cloud provider's load balancer feature.<br />
: The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of load balancers and has the respective support in Kubernetes, as is the case with the Google Cloud Platform and AWS.<br />
<br />
;ExternalIP : A Service can be mapped to an ExternalIP address if it can route to one or more of the Worker Nodes. Traffic that ingresses into the cluster with the ExternalIP (as destination IP) on the Service port gets routed to one of the Service endpoints. (Note that ExternalIPs are not managed by Kubernetes. The cluster administrator(s) must have configured the routing to map the ExternalIP address to one of the nodes.)<br />
<br />
;ExternalName : A special ''ServiceType'' that has no Selectors and does not define any endpoints. When accessed within the cluster, it returns a CNAME record of an externally configured service.<br />
: The primary use case of this ServiceType is to make externally configured services like <code>my-database.example.com</code> available inside the cluster, using just the name, like <code>my-database</code>, to other services inside the same Namespace.<br />
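<br />
A minimal ExternalName sketch (the Service and external names are hypothetical):<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: my-database<br />
spec:<br />
  type: ExternalName<br />
  externalName: my-database.example.com<br />
</pre><br />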
<br />
===Deploying an application===<br />
<br />
<pre><br />
$ kubectl create -f - <<EOF<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: webserver<br />
spec:<br />
  replicas: 3<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: webserver<br />
    spec:<br />
      containers:<br />
      - name: webserver<br />
        image: nginx:alpine<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
<pre><br />
$ kubectl create -f - <<EOF<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: web-service<br />
  labels:<br />
    run: web-service<br />
spec:<br />
  type: NodePort<br />
  ports:<br />
  - port: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: webserver<br />
EOF<br />
</pre><br />
<br />
$ kubectl get service<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h<br />
web-service NodePort 10.104.107.132 <none> 80:32610/TCP 7m<br />
<br />
Note that "<code>32610</code>" port.<br />
<br />
* Get the IP address of your Minikube k8s cluster:<br />
$ minikube ip<br />
192.168.99.100<br />
#~OR~<br />
$ minikube service web-service --url<br />
<nowiki>http://192.168.99.100:32610</nowiki><br />
<br />
* Now, check that your web service is serving up a default Nginx website:<br />
$ curl -I <nowiki>http://192.168.99.100:32610</nowiki><br />
HTTP/1.1 200 OK<br />
Server: nginx/1.13.8<br />
Date: Thu, 11 Jan 2018 00:27:51 GMT<br />
Content-Type: text/html<br />
Content-Length: 612<br />
Last-Modified: Wed, 10 Jan 2018 04:10:03 GMT<br />
Connection: keep-alive<br />
ETag: "5a55921b-264"<br />
Accept-Ranges: bytes<br />
<br />
Looks good!<br />
<br />
Finally, destroy the webserver deployment:<br />
$ kubectl delete deployments webserver<br />
<br />
===Using Ingress with Minikube===<br />
<br />
* First check that the Ingress add-on is enabled:<br />
$ minikube addons list | grep ingress<br />
- ingress: disabled<br />
<br />
If it is not, enable it with:<br />
$ minikube addons enable ingress<br />
$ minikube addons list | grep ingress<br />
- ingress: enabled<br />
<br />
* Create an Echo Server Deployment:<br />
<pre><br />
$ cat << EOF >deploy-echoserver.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  labels:<br />
    run: echoserver<br />
  name: echoserver<br />
  namespace: default<br />
spec:<br />
  replicas: 1<br />
  selector:<br />
    matchLabels:<br />
      run: echoserver<br />
  template:<br />
    metadata:<br />
      labels:<br />
        run: echoserver<br />
    spec:<br />
      containers:<br />
      - image: gcr.io/google_containers/echoserver:1.4<br />
        imagePullPolicy: IfNotPresent<br />
        name: echoserver<br />
        ports:<br />
        - containerPort: 8080<br />
          protocol: TCP<br />
      dnsPolicy: ClusterFirst<br />
      restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-echoserver.yml<br />
<br />
* Create the Cheddar cheese Deployment:<br />
<pre><br />
$ cat << EOF >deploy-cheddar-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  labels:<br />
    run: cheddar-cheese<br />
  name: cheddar-cheese<br />
  namespace: default<br />
spec:<br />
  replicas: 1<br />
  selector:<br />
    matchLabels:<br />
      run: cheddar-cheese<br />
  template:<br />
    metadata:<br />
      labels:<br />
        run: cheddar-cheese<br />
    spec:<br />
      containers:<br />
      - image: errm/cheese:cheddar<br />
        imagePullPolicy: IfNotPresent<br />
        name: cheddar-cheese<br />
        ports:<br />
        - containerPort: 80<br />
          protocol: TCP<br />
      dnsPolicy: ClusterFirst<br />
      restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-cheddar-cheese.yml<br />
<br />
* Create the Stilton cheese Deployment:<br />
<pre><br />
$ cat << EOF >deploy-stilton-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  labels:<br />
    run: stilton-cheese<br />
  name: stilton-cheese<br />
  namespace: default<br />
spec:<br />
  replicas: 1<br />
  selector:<br />
    matchLabels:<br />
      run: stilton-cheese<br />
  template:<br />
    metadata:<br />
      labels:<br />
        run: stilton-cheese<br />
    spec:<br />
      containers:<br />
      - image: errm/cheese:stilton<br />
        imagePullPolicy: IfNotPresent<br />
        name: stilton-cheese<br />
        ports:<br />
        - containerPort: 80<br />
          protocol: TCP<br />
      dnsPolicy: ClusterFirst<br />
      restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-stilton-cheese.yml<br />
<br />
* Create the Echo Server Service:<br />
<pre><br />
$ cat << EOF >svc-echoserver.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  labels:<br />
    run: echoserver<br />
  name: echoserver<br />
  namespace: default<br />
spec:<br />
  externalTrafficPolicy: Cluster<br />
  ports:<br />
  - nodePort: 31116<br />
    port: 8080<br />
    protocol: TCP<br />
    targetPort: 8080<br />
  selector:<br />
    run: echoserver<br />
  sessionAffinity: None<br />
  type: NodePort<br />
status:<br />
  loadBalancer: {}<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-echoserver.yml<br />
<br />
* Create the Cheddar cheese Service:<br />
<pre><br />
$ cat << EOF >svc-cheddar-cheese.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  labels:<br />
    run: cheddar-cheese<br />
  name: cheddar-cheese<br />
  namespace: default<br />
spec:<br />
  externalTrafficPolicy: Cluster<br />
  ports:<br />
  - nodePort: 32467<br />
    port: 80<br />
    protocol: TCP<br />
    targetPort: 80<br />
  selector:<br />
    run: cheddar-cheese<br />
  sessionAffinity: None<br />
  type: NodePort<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-cheddar-cheese.yml<br />
<br />
* Create the Stilton cheese Service:<br />
<pre><br />
$ cat << EOF >svc-stilton-cheese.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  labels:<br />
    run: stilton-cheese<br />
  name: stilton-cheese<br />
  namespace: default<br />
spec:<br />
  externalTrafficPolicy: Cluster<br />
  ports:<br />
  - nodePort: 30197<br />
    port: 80<br />
    protocol: TCP<br />
    targetPort: 80<br />
  selector:<br />
    run: stilton-cheese<br />
  sessionAffinity: None<br />
  type: NodePort<br />
status:<br />
  loadBalancer: {}<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-stilton-cheese.yml<br />
<br />
* Create the Ingress for the above Services:<br />
<pre><br />
$ cat << EOF >ingress-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
  name: ingress-cheese<br />
  annotations:<br />
    nginx.ingress.kubernetes.io/rewrite-target: /<br />
spec:<br />
  backend:<br />
    serviceName: default-http-backend<br />
    servicePort: 80<br />
  rules:<br />
  - host: myminikube.info<br />
    http:<br />
      paths:<br />
      - path: /<br />
        backend:<br />
          serviceName: echoserver<br />
          servicePort: 8080<br />
  - host: cheeses.all<br />
    http:<br />
      paths:<br />
      - path: /stilton<br />
        backend:<br />
          serviceName: stilton-cheese<br />
          servicePort: 80<br />
      - path: /cheddar<br />
        backend:<br />
          serviceName: cheddar-cheese<br />
          servicePort: 80<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f ingress-cheese.yml<br />
<br />
* Check that everything is up:<br />
<pre><br />
$ kubectl get all<br />
NAME READY STATUS RESTARTS AGE<br />
pod/cheddar-cheese-d6d6587c7-4bgcz 1/1 Running 0 12m<br />
pod/echoserver-55f97d5bff-pdv65 1/1 Running 0 12m<br />
pod/stilton-cheese-6d64cbc79-g7h4w 1/1 Running 0 12m<br />
<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
service/cheddar-cheese NodePort 10.109.238.92 <none> 80:32467/TCP 12m<br />
service/echoserver NodePort 10.98.60.194 <none> 8080:31116/TCP 12m<br />
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h<br />
service/stilton-cheese NodePort 10.108.175.207 <none> 80:30197/TCP 12m<br />
<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
deployment.apps/cheddar-cheese 1 1 1 1 12m<br />
deployment.apps/echoserver 1 1 1 1 12m<br />
deployment.apps/stilton-cheese 1 1 1 1 12m<br />
<br />
NAME DESIRED CURRENT READY AGE<br />
replicaset.apps/cheddar-cheese-d6d6587c7 1 1 1 12m<br />
replicaset.apps/echoserver-55f97d5bff 1 1 1 12m<br />
replicaset.apps/stilton-cheese-6d64cbc79 1 1 1 12m<br />
<br />
$ kubectl get ing<br />
NAME HOSTS ADDRESS PORTS AGE<br />
ingress-cheese myminikube.info,cheeses.all 10.0.2.15 80 12m<br />
</pre><br />
<br />
* Add your host aliases:<br />
$ echo "$(minikube ip) myminikube.info cheeses.all" | sudo tee -a /etc/hosts<br />
<br />
* Now, either using your browser or [[curl]], check that you can reach all of the endpoints defined in the Ingress:<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null cheeses.all/cheddar/ # Should return '200'<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null cheeses.all/stilton/ # Should return '200'<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null myminikube.info # Should return '200'<br />
<br />
* You can also see the Nginx logs for the above requests with:<br />
$ kubectl --namespace kube-system logs \<br />
--selector app.kubernetes.io/name=nginx-ingress-controller<br />
<br />
* You can also view the Nginx configuration file (and the settings created by the above Ingress) with:<br />
$ NGINX_POD=$(kubectl --namespace kube-system get pods \<br />
--selector app.kubernetes.io/name=nginx-ingress-controller \<br />
--output jsonpath='{.items[0].metadata.name}')<br />
$ kubectl --namespace kube-system exec -it ${NGINX_POD} -- cat /etc/nginx/nginx.conf<br />
<br />
* Get the version of the Nginx Ingress controller installed:<br />
<pre><br />
$ kubectl --namespace kube-system exec -it ${NGINX_POD} -- /nginx-ingress-controller --version<br />
-------------------------------------------------------------------------------<br />
NGINX Ingress controller<br />
Release: 0.19.0<br />
Build: git-05025d6<br />
Repository: https://github.com/kubernetes/ingress-nginx.git<br />
-------------------------------------------------------------------------------<br />
</pre><br />
<br />
==Kubectl==<br />
<br />
<code>kubectl</code> controls the Kubernetes cluster manager.<br />
<br />
* View your current configuration:<br />
$ kubectl config view<br />
<br />
* Switch between clusters:<br />
$ kubectl config use-context <context_name><br />
<br />
* Remove a cluster:<br />
$ kubectl config unset contexts.<context_name><br />
$ kubectl config unset users.<user_name><br />
$ kubectl config unset clusters.<cluster_name><br />
<br />
* Sort Pods by age:<br />
$ kubectl get po --sort-by=.status.startTime<br />
$ kubectl get pods --all-namespaces --sort-by=.metadata.creationTimestamp<br />
<br />
* Backup all primitives deployed in a given k8s cluster:<br />
<pre><br />
$ kubectl api-resources --verbs=list --namespaced -o name \<br />
| xargs -n1 -I{} bash -c "kubectl get {} --all-namespaces -oyaml && echo ---" \<br />
> k8s_backup.yaml<br />
</pre><br />
<br />
===kubectl explain===<br />
<br />
;List the fields for supported resources.<br />
<br />
* Get the documentation of a resource (aka "kind") and its fields:<br />
<pre><br />
$ kubectl explain deployment<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
DESCRIPTION:<br />
Deployment enables declarative updates for Pods and ReplicaSets.<br />
<br />
FIELDS:<br />
apiVersion <string><br />
APIVersion defines the versioned schema of this representation of an<br />
object. Servers should convert recognized schemas to the latest internal<br />
value, and may reject unrecognized values. More info:<br />
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources<br />
<br />
kind <string><br />
Kind is a string value representing the REST resource this object<br />
represents. Servers may infer this from the endpoint the client submits<br />
requests to. Cannot be updated. In CamelCase. More info:<br />
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds<br />
<br />
metadata <Object><br />
Standard object metadata.<br />
<br />
spec <Object><br />
Specification of the desired behavior of the Deployment.<br />
<br />
status <Object><br />
Most recently observed status of the Deployment<br />
</pre><br />
<br />
* Get a list of all the resource types and their latest supported version:<br />
<pre><br />
$ for kind in $(kubectl api-resources | tail -n +2 | awk '{print $1}'); do<br />
kubectl explain ${kind};<br />
done | grep -E "^KIND:|^VERSION:"<br />
<br />
KIND: Binding<br />
VERSION: v1<br />
KIND: ComponentStatus<br />
VERSION: v1<br />
KIND: ConfigMap<br />
VERSION: v1<br />
...<br />
</pre><br />
<br />
* Get a list of ''all'' allowable fields for a given primitive:<br />
<pre><br />
$ kubectl explain deployment --recursive | head<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
DESCRIPTION:<br />
Deployment enables declarative updates for Pods and ReplicaSets.<br />
<br />
FIELDS:<br />
apiVersion <string><br />
kind <string><br />
metadata <Object><br />
</pre><br />
<br />
* Get documentation ("man page"-style) for a given field in a given primitive:<br />
<pre><br />
$ kubectl explain deployment.status.availableReplicas<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
FIELD: availableReplicas <integer><br />
<br />
DESCRIPTION:<br />
Total number of available pods (ready for at least minReadySeconds)<br />
targeted by this deployment.<br />
</pre><br />
<br />
===Merge kubeconfig files===<br />
<br />
* Reference which kubeconfig files you wish to merge:<br />
$ export KUBECONFIG=$HOME/.kube/dev.yaml:$HOME/.kube/prod.yaml<br />
<br />
* Flatten them:<br />
$ kubectl config view --flatten >> $HOME/.kube/config<br />
<br />
* Unset:<br />
$ unset KUBECONFIG<br />
<br />
Merge complete.<br />
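<br />
* Verify the merged contexts:<br />
$ kubectl config get-contexts<br />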
<br />
==Namespaces==<br />
<br />
See: [https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ Namespaces] in the official documentation.<br />
<br />
; Create a Namespace<br />
<br />
<pre><br />
apiVersion: v1<br />
kind: Namespace<br />
metadata:<br />
  name: dev<br />
</pre><br />
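<br />
Equivalently, the Namespace can be created imperatively:<br />
$ kubectl create namespace dev<br />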
<br />
==Pods==<br />
<br />
; Create a Pod that has an Init Container<br />
<br />
In this example, I will create a Pod that has one application Container and one Init Container. The init container runs to completion before the application container starts.<br />
<br />
<pre><br />
$ cat << EOF >init-demo.yml<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: init-demo<br />
  labels:<br />
    app: demo<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx<br />
    ports:<br />
    - containerPort: 80<br />
    volumeMounts:<br />
    - name: workdir<br />
      mountPath: /usr/share/nginx/html<br />
  # These containers are run during pod initialization<br />
  initContainers:<br />
  - name: install<br />
    image: busybox<br />
    command:<br />
    - wget<br />
    - "-O"<br />
    - "/work-dir/index.html"<br />
    - https://example.com<br />
    volumeMounts:<br />
    - name: workdir<br />
      mountPath: "/work-dir"<br />
  dnsPolicy: Default<br />
  volumes:<br />
  - name: workdir<br />
    emptyDir: {}<br />
EOF<br />
</pre><br />
<br />
The above Pod YAML will first create the init container using the busybox image, which will download the HTML of the example.com website and save it to a file (<code>index.html</code>) on the Pod volume called "workdir". After the init container completes, the Nginx container starts and serves the <code>index.html</code> on port 80 (via the volume mount, the file is located at <code>/usr/share/nginx/html/index.html</code> inside the Nginx container).<br />
<br />
* Now, create this Pod:<br />
$ kubectl create --validate -f init-demo.yml<br />
<br />
* Create a Service:<br />
<pre><br />
$ cat << EOF >example.yml<br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: example<br />
spec:<br />
  ports:<br />
  - port: 8000<br />
    targetPort: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: demo<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f example.yml<br />
<br />
* Check that the <code>example</code> Service serves the page we downloaded from <nowiki>https://example.com</nowiki>:<br />
$ curl -sI $(kubectl get svc/example -o jsonpath='{.spec.clusterIP}'):8000 | grep ^HTTP<br />
HTTP/1.1 200 OK<br />
<br />
==Deployments==<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Deployment]'' controller provides declarative updates for Pods and ReplicaSets.<br />
<br />
You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.<br />
<br />
; Creating a Deployment<br />
<br />
The following is an example of a Deployment. It creates a ReplicaSet to bring up three [https://hub.docker.com/_/nginx/ Nginx] Pods:<br />
<pre><br />
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment<br />
  labels:<br />
    app: nginx<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />
<br />
* Check the syntax of the Deployment (YAML):<br />
$ kubectl create -f nginx-deployment.yml --dry-run<br />
deployment.apps/nginx-deployment created (dry run)<br />
<br />
* Create the Deployment:<br />
$ kubectl create --record -f nginx-deployment.yml <br />
deployment "nginx-deployment" created<br />
Note: By appending <code>--record</code> to the above command, we are telling the API to record the current command in the annotations of the created or updated resource. This is useful for future review, such as investigating which commands were executed in each Deployment revision.<br />
<br />
* Get information about our Deployment:<br />
$ kubectl get deployments<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment 3 3 3 3 24s<br />
<br />
$ kubectl describe deployment/nginx-deployment<br />
<pre><br />
Name: nginx-deployment<br />
Namespace: default<br />
CreationTimestamp: Tue, 30 Jan 2018 23:28:43 +0000<br />
Labels: app=nginx<br />
Annotations: deployment.kubernetes.io/revision=1<br />
kubernetes.io/change-cause=kubectl create --record=true --filename=nginx-deployment.yml<br />
Selector: app=nginx<br />
Replicas: 3 desired | 3 updated | 3 total | 0 available | 3 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 25% max unavailable, 25% max surge<br />
Pod Template:<br />
Labels: app=nginx<br />
Containers:<br />
nginx:<br />
Image: nginx:1.7.9<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Conditions:<br />
Type Status Reason<br />
---- ------ ------<br />
Available False MinimumReplicasUnavailable<br />
Progressing True ReplicaSetUpdated<br />
OldReplicaSets: <none><br />
NewReplicaSet: nginx-deployment-6c54bd5869 (3/3 replicas created)<br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal ScalingReplicaSet 28s deployment-controller Scaled up replica set nginx-deployment-6c54bd5869 to 3<br />
</pre><br />
<br />
* Get information about the ReplicaSet created by the above Deployment:<br />
$ kubectl get rs<br />
NAME DESIRED CURRENT READY AGE<br />
nginx-deployment-6c54bd5869 3 3 3 3m<br />
<br />
$ kubectl describe rs/nginx-deployment-6c54bd5869<br />
<pre><br />
Name: nginx-deployment-6c54bd5869<br />
Namespace: default<br />
Selector: app=nginx,pod-template-hash=2710681425<br />
Labels: app=nginx<br />
pod-template-hash=2710681425<br />
Annotations: deployment.kubernetes.io/desired-replicas=3<br />
deployment.kubernetes.io/max-replicas=4<br />
deployment.kubernetes.io/revision=1<br />
kubernetes.io/change-cause=kubectl create --record=true --filename=nginx-deployment.yml<br />
Controlled By: Deployment/nginx-deployment<br />
Replicas: 3 current / 3 desired<br />
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed<br />
Pod Template:<br />
Labels: app=nginx<br />
pod-template-hash=2710681425<br />
Containers:<br />
nginx:<br />
Image: nginx:1.7.9<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-k9mh4<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-pphjt<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-n4fj5<br />
</pre><br />
<br />
* Get information about the Pods created by this Deployment:<br />
$ kubectl get pods --show-labels -l app=nginx -o wide<br />
NAME READY STATUS RESTARTS AGE IP NODE LABELS<br />
nginx-deployment-6c54bd5869-k9mh4 1/1 Running 0 5m 10.244.1.5 k8s.worker1.local app=nginx,pod-template-hash=2710681425<br />
nginx-deployment-6c54bd5869-n4fj5 1/1 Running 0 5m 10.244.1.6 k8s.worker2.local app=nginx,pod-template-hash=2710681425<br />
nginx-deployment-6c54bd5869-pphjt 1/1 Running 0 5m 10.244.1.7 k8s.worker3.local app=nginx,pod-template-hash=2710681425<br />
<br />
;Updating a Deployment<br />
<br />
Note: A Deployment's rollout is triggered if, and only if, the Deployment's pod template (that is, <code>.spec.template</code>) is changed (for example, if the labels or container images of the template are updated). Other updates, such as scaling the Deployment, do not trigger a rollout.<br />
<br />
Suppose that we want to update the Nginx Pods in the above Deployment to use the <code>nginx:1.9.1</code> image instead of the <code>nginx:1.7.9</code> image.<br />
<br />
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
deployment "nginx-deployment" image updated<br />
<br />
Alternatively, we can edit the Deployment and change <code>.spec.template.spec.containers[0].image</code> from <code>nginx:1.7.9</code> to <code>nginx:1.9.1</code>:<br />
<br />
$ kubectl edit deployment/nginx-deployment<br />
deployment "nginx-deployment" edited<br />
<br />
* Check on the rollout status:<br />
<pre><br />
$ kubectl rollout status deployment/nginx-deployment<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 old replicas are pending termination...<br />
Waiting for rollout to finish: 1 old replicas are pending termination...<br />
deployment "nginx-deployment" successfully rolled out<br />
</pre><br />
<br />
* Get information about the updated Deployment:<br />
$ kubectl get deploy<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment 3 3 3 3 18m<br />
<br />
$ kubectl get rs<br />
NAME DESIRED CURRENT READY AGE<br />
nginx-deployment-5964dfd755 3 3 3 1m # <- new ReplicaSet using nginx:1.9.1<br />
nginx-deployment-6c54bd5869 0 0 0 17m # <- old ReplicaSet using nginx:1.7.9<br />
<br />
$ kubectl rollout history deployment/nginx-deployment<br />
deployments "nginx-deployment"<br />
REVISION CHANGE-CAUSE<br />
1 kubectl create --record=true --filename=nginx-deployment.yml<br />
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
<br />
$ kubectl rollout history deployment/nginx-deployment --revision=2<br />
<br />
deployments "nginx-deployment" with revision #2<br />
Pod Template:<br />
Labels: app=nginx<br />
pod-template-hash=1520898311<br />
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
Containers:<br />
nginx:<br />
Image: nginx:1.9.1<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
<br />
; Rolling back to a previous revision<br />
<br />
Undo the current rollout and rollback to the previous revision:<br />
$ kubectl rollout undo deployment/nginx-deployment<br />
deployment "nginx-deployment" rolled back<br />
<br />
Alternatively, you can roll back to a specific revision by specifying it with <code>--to-revision</code>:<br />
$ kubectl rollout undo deployment/nginx-deployment --to-revision=1<br />
deployment "nginx-deployment" rolled back<br />
<br />
==Volume management==<br />
On-disk files in a container are ephemeral, which presents some problems for non-trivial applications when running in containers. First, when a container crashes, kubelet will restart it, but the files will be lost (i.e., the container starts with a clean state). Second, when running containers together in a Pod it is often necessary to share files between those containers. The Kubernetes ''[https://kubernetes.io/docs/concepts/storage/volumes/ Volumes]'' abstraction solves both of these problems. A Volume is essentially a directory backed by a storage medium. The storage medium and its content are determined by the Volume Type.<br />
<br />
In Kubernetes, a Volume is attached to a Pod and shared among the containers of that Pod. The Volume has the same life span as the Pod, and it outlives the containers of the Pod &mdash; this allows data to be preserved across container restarts.<br />
<br />
Kubernetes resolves the problem of persistent storage with the Persistent Volume subsystem, which provides APIs for users and administrators to manage and consume storage. To manage the Volume, it uses the PersistentVolume (PV) API resource type, and to consume it, it uses the PersistentVolumeClaim (PVC) API resource type.<br />
<br />
; [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes PersistentVolume] (PV) : a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.<br />
<br />
; [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims PersistentVolumeClaim] (PVC) : a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Persistent Volume Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).<br />
<br />
A Persistent Volume is a network-attached storage in the cluster, which is provisioned by the administrator.<br />
<br />
Persistent Volumes can be provisioned statically by the administrator, or dynamically, based on the StorageClass resource. A StorageClass contains pre-defined provisioners and parameters to create a Persistent Volume.<br />
<br />
A PersistentVolumeClaim (PVC) is a request for storage by a user. Users request Persistent Volume resources based on size, access modes, etc. Once a suitable Persistent Volume is found, it is bound to a Persistent Volume Claim. After a successful bind, the Persistent Volume Claim resource can be used in a Pod. Once a user finishes its work, the attached Persistent Volumes can be released. The underlying Persistent Volumes can then be reclaimed and recycled for future usage. See [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims Persistent Volumes] for details.<br />
<br />
;Access Modes<br />
* Each of the following access modes ''must'' be supported by the storage resource provider (e.g., NFS, AWS EBS, etc.) if they are to be used.<br />
* ReadWriteOnce (RWO) &mdash; volume can be mounted as read/write by one node only.<br />
* ReadOnlyMany (ROX) &mdash; volume can be mounted read-only by many nodes.<br />
* ReadWriteMany (RWX) &mdash; volume can be mounted read/write by many nodes.<br />
A volume can only be mounted using one access mode at a time, regardless of the modes that are supported.<br />
<br />
; Example #1 - Using Host Volumes<br />
As an example of how to use volumes, we can modify our previous "webserver" Deployment (see above) to look like the following:<br />
<br />
$ cat webserver.yml<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: webserver<br />
spec:<br />
  replicas: 3<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: webserver<br />
    spec:<br />
      containers:<br />
      - name: webserver<br />
        image: nginx:alpine<br />
        ports:<br />
        - containerPort: 80<br />
        volumeMounts:<br />
        - name: hostvol<br />
          mountPath: /usr/share/nginx/html<br />
      volumes:<br />
      - name: hostvol<br />
        hostPath:<br />
          path: /home/docker/vol<br />
</pre><br />
<br />
And use the same Service:<br />
$ cat webserver-svc.yml<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: web-service<br />
  labels:<br />
    run: web-service<br />
spec:<br />
  type: NodePort<br />
  ports:<br />
  - port: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: webserver<br />
</pre><br />
<br />
Then create the deployment and service:<br />
$ kubectl create -f webserver.yml<br />
$ kubectl create -f webserver-svc.yml<br />
<br />
Then, SSH into the Minikube VM and run the following commands:<br />
$ minikube ssh<br />
minikube> mkdir -p /home/docker/vol<br />
minikube> echo "Christoph testing" > /home/docker/vol/index.html<br />
minikube> exit<br />
<br />
Get the webserver IP and port:<br />
$ minikube ip<br />
192.168.99.100<br />
$ kubectl get svc/web-service -o json | jq '.spec.ports[].nodePort'<br />
32610<br />
# OR<br />
$ minikube service web-service --url<br />
<nowiki>http://192.168.99.100:32610</nowiki><br />
<br />
$ curl <nowiki>http://192.168.99.100:32610</nowiki><br />
Christoph testing<br />
<br />
; Example #2 - Using NFS<br />
<br />
* First, create a server to host your NFS server (e.g., <code>`sudo apt-get install -y nfs-kernel-server`</code>).<br />
* On your NFS server, do the following:<br />
$ mkdir -p /var/nfs/general<br />
$ cat << EOF >>/etc/exports<br />
/var/nfs/general 10.100.1.2(rw,sync,no_subtree_check) 10.100.1.3(rw,sync,no_subtree_check) 10.100.1.4(rw,sync,no_subtree_check)<br />
EOF<br />
where the <code>10.x</code> IPs are the private IPs of your k8s nodes (both Master and Worker nodes).<br />
* Make sure to install <code>nfs-common</code> on each of the k8s nodes that will be connecting to the NFS server.<br />
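* After editing <code>/etc/exports</code> on the NFS server, re-export the shares:<br />
$ sudo exportfs -ra<br />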
<br />
Now, on the k8s Master node, create a Persistent Volume (PV) and Persistent Volume Claim (PVC):<br />
<br />
* Create a Persistent Volume (PV):<br />
<pre><br />
$ cat << EOF >pv.yml<br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
  name: mypv<br />
spec:<br />
  capacity:<br />
    storage: 1Gi<br />
  volumeMode: Filesystem<br />
  accessModes:<br />
  - ReadWriteMany<br />
  persistentVolumeReclaimPolicy: Recycle<br />
  nfs:<br />
    path: /var/nfs/general<br />
    server: 10.100.1.10 # NFS Server's private IP<br />
    readOnly: false<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f pv.yml<br />
$ kubectl get pv<br />
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE<br />
mypv 1Gi RWX Recycle Available<br />
* Create a Persistent Volume Claim (PVC):<br />
<pre><br />
$ cat << EOF >pvc.yml<br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
  name: nfs-pvc<br />
spec:<br />
  accessModes:<br />
  - ReadWriteMany<br />
  resources:<br />
    requests:<br />
      storage: 1Gi<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f pvc.yml<br />
$ kubectl get pvc<br />
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE<br />
nfs-pvc Bound mypv 1Gi RWX<br />
$ kubectl get pv<br />
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE<br />
mypv 1Gi RWX Recycle Bound default/nfs-pvc 11m<br />
<br />
* Create a Pod:<br />
<pre><br />
$ cat << EOF >nfs-pod.yml<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nfs-pod<br />
  labels:<br />
    name: nfs-pod<br />
spec:<br />
  containers:<br />
  - name: nfs-ctn<br />
    image: busybox<br />
    command:<br />
    - sleep<br />
    - "3600"<br />
    volumeMounts:<br />
    - name: nfsvol<br />
      mountPath: /tmp<br />
  restartPolicy: Always<br />
  securityContext:<br />
    fsGroup: 65534<br />
    runAsUser: 65534<br />
  volumes:<br />
  - name: nfsvol<br />
    persistentVolumeClaim:<br />
      claimName: nfs-pvc<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f nfs-pod.yml<br />
$ kubectl get pods -o wide<br />
NAME READY STATUS RESTARTS AGE IP NODE<br />
busybox 1/1 Running 9 2d 10.244.2.22 k8s.worker01.local<br />
<br />
* Get a shell from the <code>nfs-pod</code> Pod:<br />
$ kubectl exec -it nfs-pod -- sh<br />
/ $ df -h<br />
Filesystem Size Used Available Use% Mounted on<br />
172.31.119.58:/var/nfs/general<br />
19.3G 1.8G 17.5G 9% /tmp<br />
...<br />
/ $ touch /tmp/this-is-from-the-pod<br />
<br />
* On the NFS server:<br />
$ ls -l /var/nfs/general/<br />
total 0<br />
-rw-r--r-- 1 nobody nogroup 0 Jan 18 23:32 this-is-from-the-pod<br />
<br />
It works!<br />
<br />
==ConfigMaps and Secrets==<br />
While deploying an application, we may need to pass runtime parameters such as configuration details, passwords, etc. For example, let's assume we need to deploy ten different applications for our customers, and, for each customer, we just need to change the name of the company in the UI. Instead of creating ten different Docker images for each customer, we can just use the template image and pass the customers' names as a runtime parameter. In such cases, we can use the ConfigMap API resource. Similarly, when we want to pass sensitive information, we can use the Secret API resource. Think ''Secrets'' (for confidential data) and ''ConfigMaps'' (for non-confidential data).<br />
<br />
[https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/ ConfigMaps] allow you to decouple configuration artifacts from image content to keep containerized applications portable. Using ConfigMaps, we can pass configuration details as key-value pairs, which can be later consumed by Pods or any other system components, such as controllers. We can create ConfigMaps in two ways:<br />
<br />
* From literal values; and<br />
* From files.<br />
<br />
<br />
;ConfigMaps<br />
<br />
* Create a ConfigMap:<br />
$ kubectl create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2<br />
configmap "my-config" created<br />
$ kubectl get configmaps my-config -o yaml<br />
<pre><br />
apiVersion: v1<br />
data:<br />
  key1: value1<br />
  key2: value2<br />
kind: ConfigMap<br />
metadata:<br />
  creationTimestamp: 2018-01-11T23:57:44Z<br />
  name: my-config<br />
  namespace: default<br />
  resourceVersion: "117110"<br />
  selfLink: /api/v1/namespaces/default/configmaps/my-config<br />
  uid: 37a43e39-f72b-11e7-8370-08002721601f<br />
</pre><br />
$ kubectl describe configmap/my-config<br />
<pre><br />
Name: my-config<br />
Namespace: default<br />
Labels: <none><br />
Annotations: <none><br />
<br />
Data<br />
====<br />
key2:<br />
----<br />
value2<br />
key1:<br />
----<br />
value1<br />
Events: <none><br />
</pre><br />
<br />
; Create a ConfigMap from a configuration file<br />
<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: ConfigMap<br />
metadata:<br />
  name: customer1<br />
data:<br />
  TEXT1: Customer1_Company<br />
  TEXT2: Welcomes You<br />
  COMPANY: Customer1 Company Technology, LLC.<br />
EOF<br />
</pre><br />
<br />
We can get the values of the given key as environment variables inside a Pod. In the following example, while creating the Deployment, we are assigning values for environment variables from the customer1 ConfigMap:<br />
<pre><br />
....<br />
containers:<br />
- name: my-app<br />
  image: foobar<br />
  env:<br />
  - name: MONGODB_HOST<br />
    value: mongodb<br />
  - name: TEXT1<br />
    valueFrom:<br />
      configMapKeyRef:<br />
        name: customer1<br />
        key: TEXT1<br />
  - name: TEXT2<br />
    valueFrom:<br />
      configMapKeyRef:<br />
        name: customer1<br />
        key: TEXT2<br />
  - name: COMPANY<br />
    valueFrom:<br />
      configMapKeyRef:<br />
        name: customer1<br />
        key: COMPANY<br />
....<br />
</pre><br />
With the above, we will get the <code>TEXT1</code> environment variable set to <code>Customer1_Company</code>, <code>TEXT2</code> environment variable set to <code>Welcomes You</code>, and so on.<br />
<br />
We can also mount a ConfigMap as a Volume inside a Pod. For each key, we will see a file in the mount path and the content of that file become the respective key's value. For details, see [https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#adding-configmap-data-to-a-volume here].<br />
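<br />
A minimal sketch of such a mount (the volume name and mount path are arbitrary):<br />
<pre><br />
....<br />
    volumeMounts:<br />
    - name: config-vol<br />
      mountPath: /etc/config<br />
  volumes:<br />
  - name: config-vol<br />
    configMap:<br />
      name: customer1<br />
....<br />
</pre><br />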
<br />
You can also use ConfigMaps to configure your cluster to use, as an example, 8.8.8.8 and 8.8.4.4 as its upstream DNS server:<br />
<pre><br />
kind: ConfigMap<br />
apiVersion: v1<br />
metadata:<br />
  name: kube-dns<br />
  namespace: kube-system<br />
data:<br />
  upstreamNameservers: |<br />
    ["8.8.8.8", "8.8.4.4"]<br />
</pre><br />
<br />
; Secrets<br />
<br />
Objects of type [https://kubernetes.io/docs/concepts/configuration/secret/ Secret] are intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a Secret is safer and more flexible than putting it verbatim in a pod definition or in a docker image.<br />
<br />
As an example, assume that we have a Wordpress blog application, in which our <code>wordpress</code> frontend connects to the [[MySQL]] database backend using a password. While creating the Deployment for <code>wordpress</code>, we can put the MySQL password in the Deployment's YAML file, but the password would not be protected. The password would be available to anyone who has access to the configuration file.<br />
<br />
In situations such as the one we just mentioned, the Secret object can help. With Secrets, we can share sensitive information like passwords, tokens, or keys in the form of key-value pairs, similar to ConfigMaps; thus, we can control how the information in a Secret is used, reducing the risk for accidental exposures. In Deployments or other system components, the Secret object is ''referenced'', without exposing its content.<br />
<br />
It is important to keep in mind that the Secret data is stored as plain text inside etcd. Administrators must limit the access to the API Server and etcd.<br />
<br />
To create a Secret using the <code>`kubectl create secret`</code> command, we need to first create a file with a password, and then pass it as an argument.<br />
<br />
* Create a file with your MySQL password:<br />
$ echo mysqlpasswd | tr -d '\n' > password.txt<br />
<br />
* Create the ''Secret'':<br />
$ kubectl create secret generic mysql-passwd --from-file=password.txt<br />
$ kubectl describe secret/mysql-passwd<br />
<pre><br />
Name: mysql-passwd<br />
Namespace: default<br />
Labels: <none><br />
Annotations: <none><br />
<br />
Type: Opaque<br />
<br />
Data<br />
====<br />
password.txt: 11 bytes<br />
</pre><br />
<br />
We can also create a Secret manually, using the YAML configuration file. With Secrets, each object data must be encoded using base64. If we want to have a configuration file for our Secret, we must first get the base64 encoding for our password:<br />
<br />
$ cat password.txt | base64<br />
bXlzcWxwYXNzd2Q=<br />
<br />
and then use it in the configuration file:<br />
<pre><br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
  name: mysql-passwd<br />
type: Opaque<br />
data:<br />
  password: bXlzcWxwYXNzd2Q=<br />
</pre><br />
Note that base64 encoding does not do any encryption and anyone can easily decode it:<br />
<br />
$ echo "bXlzcWxwYXNzd2Q=" | base64 -d # => mysqlpasswd<br />
<br />
Therefore, make sure you do not commit a Secret's configuration file in the source code.<br />
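<br />
Alternatively, a Secret can be created directly from a literal value, without an intermediate file:<br />
$ kubectl create secret generic mysql-passwd --from-literal=password=mysqlpasswd<br />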
<br />
We can get Secrets to be used by containers in a Pod by mounting them as data volumes, or by exposing them as environment variables.<br />
<br />
We can reference a Secret and assign the value of its key as an environment variable (<code>WORDPRESS_DB_PASSWORD</code>):<br />
<pre><br />
.....<br />
spec:<br />
  containers:<br />
  - image: wordpress:4.7.3-apache<br />
    name: wordpress<br />
    env:<br />
    - name: WORDPRESS_DB_HOST<br />
      value: wordpress-mysql<br />
    - name: WORDPRESS_DB_PASSWORD<br />
      valueFrom:<br />
        secretKeyRef:<br />
          name: mysql-passwd<br />
          key: password.txt<br />
.....<br />
</pre><br />
<br />
Or, we can also mount a Secret as a Volume inside a Pod. A file would be created for each key mentioned in the Secret, whose content would be the respective value. See [https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod here] for details.<br />
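<br />
A minimal sketch of mounting a Secret as a Volume (the volume name and mount path are arbitrary):<br />
<pre><br />
....<br />
    volumeMounts:<br />
    - name: secret-vol<br />
      mountPath: /etc/secret<br />
      readOnly: true<br />
  volumes:<br />
  - name: secret-vol<br />
    secret:<br />
      secretName: mysql-passwd<br />
....<br />
</pre><br />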
<br />
==Ingress==<br />
Among the ServiceTypes mentioned earlier, NodePort and LoadBalancer are the most often used. For the LoadBalancer ServiceType, we need to have the support from the underlying infrastructure. Even after having the support, we may not want to use it for every Service, as LoadBalancer resources are limited and they can increase costs significantly. Managing the NodePort ServiceType can also be tricky at times, as we need to keep updating our proxy settings and keep track of the assigned ports. In this section, we will explore the Ingress API object, which is another method we can use to access our applications from the external world.<br />
<br />
An ''[https://kubernetes.io/docs/concepts/services-networking/ingress/ Ingress]'' is a collection of rules that allow inbound connections to reach the cluster Services. With Services, routing rules are attached to a given Service. They exist for as long as the Service exists. If we can somehow decouple the routing rules from the application, we can then update our application without worrying about its external access. This can be done using the Ingress resource. Ingress can provide load balancing, SSL/TLS termination, and name-based virtual hosting and/or routing.<br />
<br />
To allow the inbound connection to reach the cluster Services, Ingress configures a Layer 7 HTTP load balancer for Services and provides the following:<br />
<br />
* TLS (Transport Layer Security)<br />
* Name-based virtual hosting <br />
* Path-based routing<br />
* Custom rules.<br />
<br />
With Ingress, users do not connect directly to a Service. Users reach the Ingress endpoint, and, from there, the request is forwarded to the respective Service. An example Ingress definition is shown below:<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
  name: web-ingress<br />
spec:<br />
  rules:<br />
  - host: blue.example.com<br />
    http:<br />
      paths:<br />
      - backend:<br />
          serviceName: blue-service<br />
          servicePort: 80<br />
  - host: green.example.com<br />
    http:<br />
      paths:<br />
      - backend:<br />
          serviceName: green-service<br />
          servicePort: 80<br />
</pre><br />
<br />
According to the example just provided, user requests to both <code>blue.example.com</code> and <code>green.example.com</code> would go to the same Ingress endpoint and, from there, be forwarded to <code>blue-service</code> and <code>green-service</code>, respectively. Here, we have seen an example of a Name-Based Virtual Hosting Ingress rule.<br />
<br />
We can also have Fan Out Ingress rules, in which we send requests like <code>example.com/blue</code> and <code>example.com/green</code>, which would be forwarded to <code>blue-service</code> and <code>green-service</code>, respectively.<br />
<br />
To secure an Ingress, you must create a ''Secret''. The TLS secret must contain keys named <code>tls.crt</code> and <code>tls.key</code>, which contain the certificate and private key to use for TLS.<br />
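<br />
A sketch of referencing such a Secret from an Ingress (the Secret name <code>tls-secret</code> is hypothetical):<br />
<pre><br />
spec:<br />
  tls:<br />
  - hosts:<br />
    - blue.example.com<br />
    secretName: tls-secret<br />
</pre><br />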
<br />
The Ingress resource does not do any request forwarding by itself. All of the magic is done using the ''Ingress Controller''.<br />
<br />
; Ingress Controller<br />
<br />
An Ingress Controller is an application which watches the Master Node's API Server for changes in the Ingress resources and updates the Layer 7 load balancer accordingly. Kubernetes has different Ingress Controllers, and, if needed, we can also build our own. GCE L7 Load Balancer and Nginx Ingress Controller are examples of Ingress Controllers.<br />
<br />
Minikube v0.14.0 and above ships the Nginx Ingress Controller setup as an add-on. It can be easily enabled by running the following command:<br />
<br />
$ minikube addons enable ingress<br />
<br />
Once the Ingress Controller is deployed, we can create an Ingress resource using the <code>kubectl create</code> command. For example, if we create an <code>example-ingress.yml</code> file with the content above, then, we can use the following command to create an Ingress resource:<br />
<br />
$ kubectl create -f example-ingress.yml<br />
<br />
With the Ingress resource we just created, we should now be able to access the <code>blue-service</code> and <code>green-service</code> Services using the blue.example.com and green.example.com URLs. As our current setup is on Minikube, we will need to update the host configuration file on our workstation to map those URLs to Minikube's IP:<br />
<br />
$ cat /etc/hosts<br />
127.0.0.1 localhost<br />
::1 localhost<br />
192.168.99.100 blue.example.com green.example.com <br />
<br />
Once this is done, we can now open blue.example.com and green.example.com in a browser and access the application.<br />
<br />
==Labels and Selectors==<br />
''[https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ Labels]'' are key-value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key-value labels defined. Each key must be unique for a given object.<br />
<pre><br />
"labels": {<br />
  "key1" : "value1",<br />
  "key2" : "value2"<br />
}<br />
</pre><br />
<br />
;Syntax and character set<br />
<br />
Labels are key-value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (<code>/</code>). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (<code>[a-z0-9A-Z]</code>) with dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (<code>.</code>), not longer than 253 characters in total, followed by a slash (<code>/</code>). If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (e.g. kube-scheduler, kube-controller-manager, kube-apiserver, kubectl, or other third-party automation) which add labels to end-user objects must specify a prefix. The <code>kubernetes.io/</code> prefix is reserved for Kubernetes core components.<br />
<br />
Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (<code>[a-z0-9A-Z]</code>) with dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between.<br />
<br />
;Label selectors<br />
<br />
Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).<br />
<br />
Via a label selector, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.<br />
<br />
The API currently supports two types of selectors: equality-based and set-based. A label selector can be made of multiple requirements which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical AND (<code>&&</code>) operator.<br />
<br />
An empty label selector (that is, one with zero requirements) selects every object in the collection.<br />
<br />
A null label selector (which is only possible for optional selector fields) selects no objects.<br />
<br />
Note: the label selectors of two controllers must not overlap within a namespace, otherwise they will fight with each other.<br />
Note that labels are not restricted to pods. You can apply them to all sorts of objects, such as nodes or services.<br />
<br />
;Examples<br />
<br />
* Label a given node:<br />
$ kubectl label node k8s.worker1.local network=gigabit<br />
<br />
* With ''equality-based'' requirements, one may write:<br />
$ kubectl get pods -l environment=production,tier=frontend<br />
<br />
* Using ''set-based'' requirements:<br />
$ kubectl get pods -l 'environment in (production),tier in (frontend)'<br />
<br />
* Implement the OR operator on values:<br />
$ kubectl get pods -l 'environment in (production, qa)'<br />
<br />
* Restrict negative matching via the ''exists'' operator:<br />
$ kubectl get pods -l 'environment,environment notin (frontend)'<br />
<br />
* Show the current labels on your pods:<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d <none><br />
nfs-pod 1/1 Running 16 6d name=nfs-pod<br />
<br />
* Add a label to an already running/existing pod:<br />
$ kubectl label pods busybox owner=christoph<br />
pod "busybox" labeled<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d owner=christoph<br />
nfs-pod 1/1 Running 16 6d name=nfs-pod<br />
<br />
* Select a pod by its label:<br />
$ kubectl get pods --selector owner=christoph<br />
#~OR~<br />
$ kubectl get pods -l owner=christoph<br />
NAME READY STATUS RESTARTS AGE<br />
busybox 1/1 Running 25 9d<br />
<br />
* Delete/remove a given label from a given pod:<br />
$ kubectl label pod busybox owner-<br />
pod "busybox" labeled<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d <none><br />
<br />
* Get all pods that belong to either the <code>production</code> ''or'' the <code>development</code> environment:<br />
$ kubectl get pods -l 'env in (production, development)'<br />
<br />
; Using Labels to select a Node on which to schedule a Pod:<br />
<br />
* Label a Node that uses SSDs as its primary HDD:<br />
$ kubectl label node k8s.worker1.local hdd=ssd<br />
<br />
<pre><br />
$ cat << EOF >busybox.yml<br />
kind: Pod<br />
apiVersion: v1<br />
metadata:<br />
  name: busybox<br />
  namespace: default<br />
spec:<br />
  containers:<br />
  - name: busybox<br />
    image: busybox<br />
    command:<br />
    - sleep<br />
    - "300"<br />
    imagePullPolicy: IfNotPresent<br />
  restartPolicy: Always<br />
  nodeSelector:<br />
    hdd: ssd<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f busybox.yml<br />
<br />
==Annotations==<br />
With ''[https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ Annotations]'', we can attach arbitrary, non-identifying metadata to objects, in a key-value format:<br />
<br />
<pre><br />
"annotations": {<br />
  "key1" : "value1",<br />
  "key2" : "value2"<br />
}<br />
</pre><br />
The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels.<br />
<br />
In contrast to Labels, annotations are not used to identify and select objects. Annotations can be used to:<br />
<br />
* Store build/release IDs, the git branch, etc.<br />
* Phone numbers of persons responsible or directory entries specifying where such information can be found<br />
* Pointers to logging, monitoring, analytics, audit repositories, debugging tools, etc.<br />
* Etc.<br />
<br />
For example, while creating a Deployment, we can add a description like the one below:<br />
<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: webserver<br />
  annotations:<br />
    description: Deployment based PoC dates 12 January 2018<br />
....<br />
....<br />
</pre><br />
<br />
We can look at annotations while describing an object:<br />
<br />
<pre><br />
$ kubectl describe deployment webserver<br />
Name: webserver<br />
Namespace: default<br />
CreationTimestamp: Fri, 12 Jan 2018 13:18:23 -0800<br />
Labels: app=webserver<br />
Annotations: deployment.kubernetes.io/revision=1<br />
description=Deployment based PoC dates 12 January 2018<br />
...<br />
...<br />
</pre><br />
<br />
==Jobs and CronJobs==<br />
<br />
===Jobs===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#what-is-a-job Job]'' creates one or more pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the Job itself is complete. Deleting a Job will clean up the pods it created.<br />
<br />
A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).<br />
<br />
A Job can also be used to run multiple Pods in parallel.<br />
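<br />
Parallelism is controlled with the <code>completions</code> and <code>parallelism</code> fields in the Job spec (a sketch; the values are arbitrary):<br />
<pre><br />
spec:<br />
  completions: 6   # run 6 Pods to successful completion in total<br />
  parallelism: 2   # run at most 2 Pods at any given time<br />
</pre><br />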
<br />
; Example<br />
<br />
* Below is an example ''Job'' config. It computes π to 2000 places and prints it out. It takes around 10 seconds to complete.<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
  name: pi<br />
spec:<br />
  template:<br />
    spec:<br />
      containers:<br />
      - name: pi<br />
        image: perl<br />
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]<br />
      restartPolicy: Never<br />
  backoffLimit: 4<br />
</pre><br />
$ kubectl create -f ./job-pi.yml<br />
job "pi" created<br />
$ kubectl describe jobs/pi<br />
<pre><br />
Name: pi<br />
Namespace: default<br />
Selector: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
Labels: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
job-name=pi<br />
Annotations: <none><br />
Parallelism: 1<br />
Completions: 1<br />
Start Time: Fri, 12 Jan 2018 13:25:23 -0800<br />
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed<br />
Pod Template:<br />
Labels: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
job-name=pi<br />
Containers:<br />
pi:<br />
Image: perl<br />
Port: <none><br />
Command:<br />
perl<br />
-Mbignum=bpi<br />
-wle<br />
print bpi(2000)<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal SuccessfulCreate 8s job-controller Created pod: pi-rfvvw<br />
</pre><br />
<br />
* Get the result of the Job run (i.e., the value of π):<br />
$ pods=$(kubectl get pods --show-all --selector=job-name=pi --output=jsonpath={.items..metadata.name})<br />
$ echo $pods<br />
pi-rfvvw<br />
$ kubectl logs ${pods}<br />
3.1415926535897932384626433832795028841971693...<br />
<br />
===CronJobs===<br />
<br />
Support for creating ''Jobs'' at specified times/dates (i.e., cron-like scheduling) is available in Kubernetes 1.4. See [https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/ here] for details.<br />
<br />
Below is an example ''CronJob''. Every minute, it runs a simple job that prints the current time and then echoes a "hello" string:<br />
$ cat << EOF >cronjob.yml<br />
apiVersion: batch/v1beta1<br />
kind: CronJob<br />
metadata:<br />
name: hello<br />
spec:<br />
schedule: "*/1 * * * *"<br />
jobTemplate:<br />
spec:<br />
template:<br />
spec:<br />
containers:<br />
- name: hello<br />
image: busybox<br />
args:<br />
- /bin/sh<br />
- -c<br />
- date; echo Hello from the Kubernetes cluster<br />
restartPolicy: OnFailure<br />
EOF<br />
<br />
$ kubectl create -f cronjob.yml<br />
cronjob "hello" created<br />
<br />
$ kubectl get cronjob hello<br />
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE<br />
hello */1 * * * * False 0 <none> 11s<br />
<br />
$ kubectl get jobs --watch<br />
NAME DESIRED SUCCESSFUL AGE<br />
hello-1515793140 1 1 7s<br />
<br />
$ kubectl get cronjob hello<br />
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE<br />
hello */1 * * * * False 0 22s 48s<br />
<br />
$ pods=$(kubectl get pods -a --selector=job-name=hello-1515793140 --output=jsonpath={.items..metadata.name})<br />
$ echo $pods<br />
hello-1515793140-plp8g<br />
<br />
$ kubectl logs $pods<br />
Fri Jan 12 21:39:07 UTC 2018<br />
Hello from the Kubernetes cluster<br />
<br />
* Cleanup<br />
$ kubectl delete cronjob hello<br />
<br />
==Quota Management==<br />
When there are many users sharing a given Kubernetes cluster, there is always a concern for fair usage. To address this concern, administrators can use the ''[https://kubernetes.io/docs/concepts/policy/resource-quotas/ ResourceQuota]'' object, which provides constraints that limit aggregate resource consumption per Namespace.<br />
<br />
We can have the following types of quotas per Namespace:<br />
<br />
* Compute Resource Quota: We can limit the total sum of compute resources (CPU, memory, etc.) that can be requested in a given Namespace.<br />
* Storage Resource Quota: We can limit the total sum of storage resources (PersistentVolumeClaims, requests.storage, etc.) that can be requested.<br />
* Object Count Quota: We can restrict the number of objects of a given type (pods, ConfigMaps, PersistentVolumeClaims, ReplicationControllers, Services, Secrets, etc.).<br />
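As a minimal sketch (the names and limits are only illustrative), a combined compute and object-count quota for a Namespace could look like this:<br />
<pre><br />
apiVersion: v1<br />
kind: ResourceQuota<br />
metadata:<br />
  name: compute-quota<br />
  namespace: default<br />
spec:<br />
  hard:<br />
    pods: "10"<br />
    requests.cpu: "4"<br />
    requests.memory: 8Gi<br />
    limits.cpu: "8"<br />
    limits.memory: 16Gi<br />
</pre><br />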
<br />
==Daemon Sets==<br />
In some cases, like collecting monitoring data from all nodes, or running a storage daemon on all nodes, etc., we need a specific type of Pod running on all nodes at all times. A ''[https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ DaemonSet]'' is the object that allows us to do just that. <br />
<br />
Whenever a node is added to the cluster, a Pod from a given DaemonSet is created on it. When the node dies, the respective Pods are garbage collected. If a DaemonSet is deleted, all Pods it created are deleted as well.<br />
<br />
Example DaemonSet:<br />
<pre><br />
kind: DaemonSet<br />
apiVersion: apps/v1<br />
metadata:<br />
name: pause-ds<br />
spec:<br />
selector:<br />
matchLabels:<br />
quiet: "pod"<br />
template:<br />
metadata:<br />
labels:<br />
quiet: pod<br />
spec:<br />
tolerations:<br />
- key: node-role.kubernetes.io/master<br />
effect: NoSchedule<br />
containers:<br />
- name: pause-container<br />
image: k8s.gcr.io/pause:2.0<br />
</pre><br />
<br />
==Stateful Sets==<br />
The ''[https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/ StatefulSet]'' controller is used for applications which require a unique identity, such as a stable name, network identity, strict ordering, etc. (for example, a MySQL cluster or an etcd cluster).<br />
<br />
The StatefulSet controller provides identity and guaranteed ordering of deployment and scaling to Pods.<br />
<br />
Note: Before Kubernetes 1.5, the StatefulSet controller was referred to as ''PetSet''.<br />
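A minimal StatefulSet sketch (it assumes a headless Service named "nginx" already exists; all names are illustrative). The Pods it creates get stable, ordered names (web-0, web-1, ...):<br />
<pre><br />
apiVersion: apps/v1<br />
kind: StatefulSet<br />
metadata:<br />
  name: web<br />
spec:<br />
  serviceName: "nginx"<br />
  replicas: 2<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.9.1<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />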
<br />
==Role Based Access Control (RBAC)==<br />
''[https://kubernetes.io/docs/admin/authorization/rbac/ Role-based access control]'' (RBAC) is an authorization mechanism for managing permissions around Kubernetes resources.<br />
<br />
Using the RBAC API, we define a role which contains a set of additive permissions. Within a Namespace, a role is defined using the Role object. For a cluster-wide role, we need to use the ClusterRole object.<br />
<br />
Once the roles are defined, we can bind them to a user or a set of users using ''RoleBinding'' and ''ClusterRoleBinding''.<br />
<br />
===Using RBAC with minikube===<br />
<br />
* Start up minikube with RBAC support:<br />
$ minikube start --kubernetes-version=v1.9.0 --extra-config=apiserver.Authorization.Mode=RBAC<br />
<br />
* Setup RBAC:<br />
<pre><br />
$ cat rbac-cluster-role-binding.yml<br />
# kubectl create clusterrolebinding add-on-cluster-admin \<br />
# --clusterrole=cluster-admin --serviceaccount=kube-system:default<br />
#<br />
kind: ClusterRoleBinding<br />
apiVersion: rbac.authorization.k8s.io/v1alpha1<br />
metadata:<br />
name: kube-system-sa<br />
subjects:<br />
- kind: Group<br />
  name: system:serviceaccounts:kube-system<br />
roleRef:<br />
kind: ClusterRole<br />
name: cluster-admin<br />
apiGroup: rbac.authorization.k8s.io<br />
</pre><br />
<br />
<pre><br />
$ cat rbac-setup.yml <br />
apiVersion: v1<br />
kind: Namespace<br />
metadata:<br />
name: rbac<br />
<br />
---<br />
apiVersion: v1<br />
kind: ServiceAccount<br />
metadata:<br />
name: viewer<br />
namespace: rbac<br />
<br />
---<br />
apiVersion: v1<br />
kind: ServiceAccount<br />
metadata:<br />
name: admin<br />
namespace: rbac<br />
</pre><br />
<br />
* Create a Role Binding:<br />
<pre><br />
# kubectl create rolebinding reader-binding \<br />
#   --role=reader \<br />
#   --serviceaccount=rbac:reader \<br />
#   --namespace=rbac<br />
#<br />
kind: RoleBinding<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
namespace: rbac<br />
name: reader-binding<br />
roleRef:<br />
apiGroup: rbac.authorization.k8s.io<br />
kind: Role<br />
name: reader<br />
subjects:<br />
- kind: ServiceAccount<br />
  name: reader<br />
  namespace: rbac<br />
</pre><br />
<br />
* Create a Role:<br />
<pre><br />
$ cat rbac-role.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  namespace: rbac<br />
name: reader<br />
rules:<br />
- apiGroups: [""]<br />
resources: ["*"]<br />
verbs: ["get", "watch", "list"]<br />
</pre><br />
<br />
* Create an RBAC "core reader" Role with specific resources and "verbs" (i.e., the "core reader" role can "get", "list", etc. on specific resources, such as Pods, Jobs, and Deployments):<br />
<pre><br />
$ cat rbac-role-core-reader.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: core-reader<br />
rules:<br />
- apiGroups:<br />
- ""<br />
resources:<br />
- pods<br />
- configmaps<br />
- secrets<br />
verbs:<br />
- get<br />
- watch<br />
- list<br />
- apiGroups:<br />
- batch<br />
- extensions<br />
resources:<br />
- jobs<br />
- deployments<br />
verbs:<br />
- get<br />
- watch<br />
- list<br />
</pre><br />
<br />
* "Gotchas":<br />
<pre><br />
$ cat rbac-gotcha-1.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: gotcha-1<br />
rules:<br />
- nonResourceURLs:<br />
- /healthz<br />
verbs:<br />
- get<br />
- post<br />
- apiGroups:<br />
- batch<br />
- extensions<br />
resources:<br />
- deployments<br />
verbs:<br />
- "*"<br />
</pre><br />
<pre><br />
$ cat rbac-gotcha-2.yml <br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: gotcha-2<br />
rules:<br />
- apiGroups:<br />
- ""<br />
resources:<br />
- secrets<br />
verbs:<br />
- "*"<br />
resourceNames:<br />
- "my_secret"<br />
- apiGroups:<br />
- ""<br />
resources:<br />
- pods/logs<br />
verbs:<br />
- "get"<br />
</pre><br />
<br />
; Privilege escalation<br />
* You cannot create a Role or ClusterRole that grants permissions you do not have.<br />
* You cannot create a RoleBinding or ClusterRoleBinding that binds to a Role with permissions you do not have (unless you have been explicitly given "bind" permission on the role).<br />
<br />
* Grant explicit bind access:<br />
<pre><br />
kind: ClusterRole<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: role-grantor<br />
rules:<br />
- apiGroups: ["rbac.authorization.k8s.io"]<br />
resources: ["rolebindings"]<br />
verbs: ["create"]<br />
- apiGroups: ["rbac.authorization.k8s.io"]<br />
resources: ["clusterroles"]<br />
verbs: ["bind"]<br />
resourceNames: ["admin", "edit", "view"]<br />
</pre><br />
<br />
===Testing RBAC permissions===<br />
<br />
* Example of RBAC not allowing a verb-noun:<br />
<pre><br />
$ kubectl auth can-i create pods<br />
no - Required "container.pods.create" permission.<br />
</pre><br />
<br />
* Example of RBAC allowing a verb-noun:<br />
<pre><br />
$ kubectl auth can-i create pods<br />
yes<br />
</pre><br />
<br />
* A more complex example:<br />
<pre><br />
$ kubectl auth can-i update deployments.apps \<br />
--subresource="scale" --as-group="$group" --as="$user" -n $ns<br />
</pre><br />
<br />
==Federation==<br />
With the ''[https://kubernetes.io/docs/concepts/cluster-administration/federation/ Kubernetes Cluster Federation]'' we can manage multiple Kubernetes clusters from a single control plane. We can sync resources across the clusters and have cross-cluster discovery. This allows us to do Deployments across regions and access them using a global DNS record.<br />
<br />
Federation is very useful when we want to build a hybrid solution, in which we can have one cluster running inside our private datacenter and another one on the public cloud. We can also assign weights for each cluster in the Federation, to distribute the load as per our choice.<br />
<br />
==Helm==<br />
To deploy an application, we use different Kubernetes manifests, such as Deployments, Services, Volume Claims, Ingress, etc. Sometimes, it can be tedious to deploy them one by one. We can bundle all those manifests, after templatizing them into a well-defined format, along with other metadata. Such a bundle is referred to as a ''Chart''. These Charts can then be served via repositories, such as those that we have for rpm and deb packages. <br />
<br />
''[https://github.com/kubernetes/helm Helm]'' is a package manager (analogous to yum and apt) for Kubernetes, which can install/update/delete those Charts in the Kubernetes cluster.<br />
<br />
Helm has two components:<br />
<br />
* A client called helm, which runs on your user's workstation; and<br />
* A server called tiller, which runs inside your Kubernetes cluster.<br />
<br />
The client helm connects to the server tiller to manage Charts. Charts submitted for Kubernetes are available [https://github.com/kubernetes/charts here].<br />
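A typical Helm 2 workflow looks something like the following (the chart name is just an example):<br />
<pre><br />
$ helm init            # installs tiller into the current cluster<br />
$ helm repo update<br />
$ helm search mysql<br />
$ helm install stable/mysql<br />
$ helm list<br />
$ helm delete <release-name><br />
</pre><br />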
<br />
==Monitoring and logging==<br />
In Kubernetes, we have to collect resource usage data from Pods, Services, nodes, etc., to understand the overall resource consumption and to make scaling decisions for a given application. Two popular Kubernetes monitoring solutions are Heapster and Prometheus.<br />
<br />
[https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/ Heapster] is a cluster-wide aggregator of monitoring and event data, which is natively supported on Kubernetes. <br />
<br />
[https://prometheus.io/ Prometheus], now part of [https://www.cncf.io/ CNCF] (Cloud Native Computing Foundation), can also be used to scrape the resource usage from different Kubernetes components and objects. Using its client libraries, we can also instrument the code of our application.<br />
<br />
Another important aspect for troubleshooting and debugging is Logging, in which we collect the logs from different components of a given system. In Kubernetes, we can collect logs from different cluster components, objects, nodes, etc. The most common way to collect the logs is with [https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/ Elasticsearch], using [https://www.fluentd.org/ fluentd] (with a custom configuration) as an agent on the nodes. fluentd is an open source data collector, which is also part of CNCF.<br />
<br />
[https://github.com/google/cadvisor cAdvisor] is an open source container resource usage and performance analysis agent. It auto-discovers all containers on a node and collects CPU, memory, file system, and network usage statistics. It provides overall machine usage by analyzing the "root" container on the machine. It exposes a simple UI for local containers on port 4194.<br />
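cAdvisor also exposes a small REST API on the same port, so you can query a node directly (the API version path below is an assumption; check your cAdvisor release):<br />
 $ curl http://localhost:4194/api/v1.3/machine<br />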
<br />
==Security==<br />
===Configure network policies===<br />
A ''[https://kubernetes.io/docs/concepts/services-networking/network-policies/ Network Policy]'' is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.<br />
<br />
''NetworkPolicy'' resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.<br />
<br />
* Specification of how groups of pods may communicate<br />
* Use labels to select pods and define rules<br />
* Implemented by the network plugin<br />
* Pods are non-isolated by default<br />
* Pods are isolated when a Network Policy selects them<br />
<br />
;Example NetworkPolicy<br />
Create a "default" isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods:<br />
<pre><br />
apiVersion: networking.k8s.io/v1<br />
kind: NetworkPolicy<br />
metadata:<br />
name: default-deny<br />
spec:<br />
podSelector: {}<br />
policyTypes:<br />
- Ingress<br />
</pre><br />
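Building on that, a policy sketch that selects Pods labeled <code>app=db</code> and allows ingress only from Pods labeled <code>app=frontend</code> (the labels and port are only illustrative):<br />
<pre><br />
apiVersion: networking.k8s.io/v1<br />
kind: NetworkPolicy<br />
metadata:<br />
  name: allow-frontend-to-db<br />
spec:<br />
  podSelector:<br />
    matchLabels:<br />
      app: db<br />
  policyTypes:<br />
  - Ingress<br />
  ingress:<br />
  - from:<br />
    - podSelector:<br />
        matchLabels:<br />
          app: frontend<br />
    ports:<br />
    - protocol: TCP<br />
      port: 5432<br />
</pre><br />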
<br />
===TLS certificates for cluster components===<br />
Get [https://github.com/OpenVPN/easy-rsa easy-rsa].<br />
<br />
$ ./easyrsa init-pki<br />
$ MASTER_IP=10.100.1.2<br />
$ ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass<br />
<br />
$ cat rsa-request.sh<br />
<pre><br />
#!/bin/bash<br />
./easyrsa --subject-alt-name="IP:${MASTER_IP},"\<br />
"DNS:kubernetes,"\<br />
"DNS:kubernetes.default,"\<br />
"DNS:kubernetes.default.svc,"\<br />
"DNS:kubernetes.default.svc.cluster,"\<br />
"DNS:kubernetes.default.svc.cluster.local" \<br />
--days=10000 \<br />
build-server-full server nopass<br />
</pre><br />
<br />
<pre><br />
pki/<br />
├── ca.crt<br />
├── certs_by_serial<br />
│ └── F3A6F7D34BC84330E7375FA20C8441DF.pem<br />
├── index.txt<br />
├── index.txt.attr<br />
├── index.txt.old<br />
├── issued<br />
│ └── server.crt<br />
├── private<br />
│ ├── ca.key<br />
│ └── server.key<br />
├── reqs<br />
│ └── server.req<br />
├── serial<br />
└── serial.old<br />
</pre><br />
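* Verify that the newly issued certificate contains the expected subject alternative names:<br />
 $ openssl x509 -in pki/issued/server.crt -noout -text | grep -A1 "Subject Alternative Name"<br />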
<br />
* Figure out the paths of the old TLS certs/keys with the following command:<br />
<pre><br />
$ ps aux | grep [a]piserver | sed -n -e 's/^.*\(kube-apiserver \)/\1/p' | tr ' ' '\n'<br />
kube-apiserver<br />
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota<br />
--requestheader-extra-headers-prefix=X-Remote-Extra-<br />
--advertise-address=172.31.118.138<br />
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt<br />
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt<br />
--requestheader-username-headers=X-Remote-User<br />
--service-cluster-ip-range=10.96.0.0/12<br />
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key<br />
--secure-port=6443<br />
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key<br />
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname<br />
--requestheader-group-headers=X-Remote-Group<br />
--requestheader-allowed-names=front-proxy-client<br />
--service-account-key-file=/etc/kubernetes/pki/sa.pub<br />
--insecure-port=0<br />
--enable-bootstrap-token-auth=true<br />
--allow-privileged=true<br />
--client-ca-file=/etc/kubernetes/pki/ca.crt<br />
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt<br />
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key<br />
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt<br />
--authorization-mode=Node,RBAC<br />
--etcd-servers=http://127.0.0.1:2379<br />
</pre><br />
<br />
===Security Contexts===<br />
A ''[https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Security Context]'' defines privilege and access control settings for a Pod or Container. Security context settings include:<br />
<br />
* Discretionary Access Control: Permission to access an object, like a file, is based on user ID (UID) and group ID (GID).<br />
* Security Enhanced Linux (SELinux): Objects are assigned security labels.<br />
* Running as privileged or unprivileged.<br />
* Linux Capabilities: Give a process some privileges, but not all the privileges of the root user.<br />
* AppArmor: Use program profiles to restrict the capabilities of individual programs.<br />
* Seccomp: Filter a process's system calls.<br />
* AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This boolean directly controls whether the <code>no_new_privs</code> flag gets set on the container process. <code>AllowPrivilegeEscalation</code> is always true when the container: 1) is run as privileged; or 2) has <code>CAP_SYS_ADMIN</code>.<br />
<br />
; Example #1<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: security-context-demo<br />
spec:<br />
securityContext:<br />
runAsUser: 1000<br />
fsGroup: 2000<br />
volumes:<br />
- name: sec-ctx-vol<br />
emptyDir: {}<br />
containers:<br />
- name: sec-ctx-demo<br />
image: gcr.io/google-samples/node-hello:1.0<br />
volumeMounts:<br />
- name: sec-ctx-vol<br />
mountPath: /data/demo<br />
securityContext:<br />
allowPrivilegeEscalation: false<br />
</pre><br />
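; Example #2<br />
A second sketch, granting individual Linux capabilities at the container level (the image and the chosen capabilities are only illustrative):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: security-context-demo-2<br />
spec:<br />
  containers:<br />
  - name: sec-ctx-demo-2<br />
    image: busybox<br />
    command: ["sh", "-c", "sleep 3600"]<br />
    securityContext:<br />
      capabilities:<br />
        add: ["NET_ADMIN", "SYS_TIME"]<br />
</pre><br />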
<br />
==Taints and tolerations==<br />
[https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature Node affinity] is a property of pods that ''attracts'' them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite – they allow a node to ''repel'' a set of pods.<br />
<br />
[https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ Taints and tolerations] work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks the node such that the node should not accept any pods that do not tolerate the taints. Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.<br />
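For example, to reserve a node for database workloads (the key/value pair is only illustrative):<br />
 $ kubectl taint nodes k8s.worker1.local dedicated=db:NoSchedule<br />
A Pod that should be allowed onto that node would then carry a matching toleration in its spec:<br />
<pre><br />
tolerations:<br />
- key: "dedicated"<br />
  operator: "Equal"<br />
  value: "db"<br />
  effect: "NoSchedule"<br />
</pre><br />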
<br />
==Remove a node from a cluster==<br />
<br />
* On the k8s Master Node:<br />
k8s-master> $ kubectl drain k8s-worker-02 --ignore-daemonsets<br />
<br />
* On the k8s Worker Node (the one you wish to remove from the cluster):<br />
k8s-worker-02> $ kubeadm reset<br />
[preflight] Running pre-flight checks.<br />
[reset] Stopping the kubelet service.<br />
[reset] Unmounting mounted directories in "/var/lib/kubelet"<br />
[reset] Removing kubernetes-managed containers.<br />
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.<br />
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]<br />
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]<br />
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]<br />
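<br />
* Finally, back on the k8s Master Node, remove the node object from the cluster:<br />
 k8s-master> $ kubectl delete node k8s-worker-02<br />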
<br />
==Networking==<br />
<br />
; Useful network ranges<br />
* Choose ranges for the Pods and Service CIDR blocks<br />
* Generally, any of the RFC-1918 ranges work well<br />
** 10.0.0.0/8<br />
** 172.16.0.0/12<br />
** 192.168.0.0/16<br />
<br />
Every Pod can communicate directly with every other Pod<br />
<br />
;K8s Node<br />
* A general purpose compute that has at least one interface<br />
** The host OS will have a real-world IP for accessing the machine<br />
** K8s Pods are given ''virtual'' interfaces connected to an internal network<br />
** Each node has a running network stack<br />
* Kube-proxy runs in the OS to control IPtables for:<br />
** Services<br />
** NodePorts<br />
<br />
;Networking substrate<br />
* Most k8s network stacks allocate subnets for each node<br />
** The network stack is responsible for arbitration of subnets and IPs<br />
** The network stack is also responsible for moving packets around the network<br />
* Pods have a unique, routable IP on the Pod CIDR block<br />
** The CIDR block is ''not'' accessible from outside the k8s cluster<br />
** The magic of IPtables allows the Pods to make outgoing connections<br />
* Ensure that k8s has the correct Pods and Service CIDR blocks<br />
<br />
The Pod network is not seen on the physical network (i.e., it is encapsulated; you will not be able to use <code>tcpdump</code> on it from the physical network)<br />
<br />
;Making the setup easier &mdash; CNI<br />
* Use the Container Network Interface (CNI)<br />
* Relieves k8s from having to have a specific network configuration<br />
* It is activated by supplying <code>--network-plugin=cni, --cni-conf-dir, --cni-bin-dir</code> to kubelet<br />
** Typical configuration directory: <code>/etc/cni/net.d</code><br />
** Typical bin directory: <code>/opt/cni/bin</code><br />
* Allows for multiple backends to be used: linux-bridge, macvlan, ipvlan, Open vSwitch, network stacks<br />
<br />
;Kubernetes services<br />
<br />
* Services are crucial for service discovery and distributing traffic to Pods<br />
* Services act as simple internal load balancers with VIPs<br />
** No access controls<br />
** No traffic controls<br />
* IPtables magically route to virtual IPs<br />
* Internally, Services are used as inter-Pod service discovery<br />
** Kube-DNS publishes DNS records (e.g., <code>nginx.default.svc.cluster.local</code>)<br />
* Services can be exposed in three different ways:<br />
*# ClusterIP<br />
*# LoadBalancer<br />
*# NodePort<br />
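For example, a minimal NodePort Service sketch (the selector and ports are illustrative; the nodePort must fall within the default 30000-32767 range):<br />
<pre><br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: nginx-nodeport<br />
spec:<br />
  type: NodePort<br />
  selector:<br />
    app: nginx<br />
  ports:<br />
  - port: 80<br />
    targetPort: 80<br />
    nodePort: 30080<br />
</pre><br />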
<br />
; kube-proxy<br />
* Each k8s node in the cluster runs a kube-proxy<br />
* Two modes: userspace and iptables<br />
** iptables is much more performant (userspace should no longer be used)<br />
* kube-proxy has the task of configuring iptables to expose each k8s service<br />
** iptables rules distribute traffic randomly across the endpoints<br />
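You can inspect the NAT rules kube-proxy maintains on a node; in iptables mode it creates a <code>KUBE-SERVICES</code> chain in the nat table:<br />
 $ sudo iptables -t nat -L KUBE-SERVICES -n | head<br />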
<br />
===Network providers===<br />
<br />
In order for a CNI plugin to be considered a "[https://kubernetes.io/docs/concepts/cluster-administration/networking/ Network Provider]", it must provide (at the very least) the following:<br />
# All containers can communicate with all other containers without NAT<br />
# All nodes can communicate with all containers (and ''vice versa'') without NAT<br />
# The IP that a container sees itself as is the same IP that others see it as<br />
<br />
==Linux namespaces==<br />
<br />
* Control groups (cgroups)<br />
* Union File Systems<br />
<br />
==Kubernetes inbound node port requirements==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-align="center" bgcolor="#1188ee"<br />
!Protocol<br />
!Direction<br />
!Port range<br />
!Purpose<br />
!Used by<br />
!Notes<br />
|-<br />
|colspan="6" align="center" bgcolor="#eee" | '''Master node(s)'''<br />
|-<br />
| TCP || Inbound || 4149 || Default cAdvisor port used to query container metrics || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 6443<sup>*</sup> || Kubernetes API server || All<br />
|-<br />
| TCP || Inbound || 2379-2380 || etcd server client API || kube-apiserver, etcd<br />
|-<br />
| TCP || Inbound || 10250 || Kubelet API || Self, Control plane<br />
|-<br />
| TCP || Inbound || 10251 || kube-scheduler || Self<br />
|-<br />
| TCP || Inbound || 10252 || kube-controller-manager || Self<br />
|-<br />
| TCP || Inbound || 10255 || Read-only Kubelet API || ''(optional)'' || Security risk<br />
|-<br />
|colspan="6" align="center" bgcolor="#eee" | '''Worker node(s)'''<br />
|-<br />
| TCP || Inbound || 4149 || Default cAdvisor port used to query container metrics || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 10250 || Kubelet API || Self, Control plane<br />
|-<br />
| TCP || Inbound || 10255 || Read-only Kubelet API || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 30000-32767 || NodePort Services<sup>**</sup> || All<br />
|}<br />
</div><br />
<br clear="all"/><br />
<sup>**</sup> Default port range for NodePort Services.<br />
<br />
Any port numbers marked with <sup>*</sup> are overridable, so you will need to ensure any custom ports you provide are also open.<br />
<br />
Although etcd ports are included in master nodes, you can also host your own etcd cluster externally or on custom ports.<br />
<br />
The pod network plugin you use (see below) may also require certain ports to be open. Since this differs with each pod network plugin, please see the documentation for the plugins about what port(s) those need.<br />
<br />
==API versions==<br />
<br />
Below is a table showing which value to use for the <code>apiVersion</code> key for a given k8s primitive (note: all values are for k8s 1.8.0, unless otherwise specified):<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-align="center" bgcolor="#1188ee"<br />
!Primitive<br />
!apiVersion<br />
|-<br />
| Pod || v1<br />
|-<br />
| Deployment || apps/v1beta2<br />
|-<br />
| Service || v1<br />
|-<br />
| Job || batch/v1<br />
|-<br />
| Ingress || extensions/v1beta1<br />
|-<br />
| CronJob || batch/v1beta1<br />
|-<br />
| ConfigMap || v1<br />
|-<br />
| DaemonSet || apps/v1<br />
|-<br />
| ReplicaSet || apps/v1beta2<br />
|-<br />
| NetworkPolicy || networking.k8s.io/v1<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
You can get a list of all of the API versions supported by your k8s install with:<br />
$ kubectl api-versions<br />
<br />
==Troubleshooting==<br />
<br />
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns<br />
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}<br />
<br />
* If your container has previously crashed, you can access the previous container’s crash log with:<br />
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}<br />
<br />
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}<br />
<br />
==Miscellaneous commands==<br />
<br />
* Simple workflow (not a best practice; use manifest files {YAML} instead):<br />
$ kubectl run nginx --image=nginx:1.10.0<br />
$ kubectl expose deployment nginx --port 80 --type LoadBalancer<br />
$ kubectl get services # <- wait until public IP is assigned<br />
$ kubectl scale deployment nginx --replicas 3<br />
<br />
* Create an Nginx deployment with three replicas without using YAML:<br />
$ kubectl run nginx --image=nginx --replicas=3<br />
<br />
* Take a node out of service for maintenance:<br />
$ kubectl cordon k8s.worker1.local<br />
$ kubectl drain k8s.worker1.local --ignore-daemonsets<br />
<br />
* Return a given node to a service after cordoning and "draining" it (e.g., after a maintenance):<br />
$ kubectl uncordon k8s.worker1.local<br />
<br />
* Get a list of nodes in a format useful for scripting:<br />
$ kubectl get nodes -o jsonpath='{.items[*].metadata.name}'<br />
#~OR~<br />
$ kubectl get nodes -o go-template --template '<nowiki>{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get nodes -o json | jq -crM '.items[].metadata.name'<br />
#~OR~ (if using an older version of <code>jq</code>)<br />
$ kubectl get nodes -o json | jq '.items[].metadata.name' | tr -d '"'<br />
<br />
* Label a list of nodes:<br />
<pre><br />
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do<br />
kubectl label nodes ${node} instancetype=ondemand;<br />
kubectl label nodes ${node} "example.io/node-lifecycle"=od;<br />
done<br />
</pre><br />
<br />
* Delete a bunch of Pods in "Evicted" state:<br />
$ kubectl get pod -n develop | awk '/Evicted/{print $1}' | xargs kubectl delete pod -n develop<br />
#~OR~<br />
$ kubectl get po -a --all-namespaces -o json | \<br />
jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | <br />
"kubectl delete po \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c<br />
<br />
* Get a random node:<br />
$ NODES=($(kubectl get nodes -o json | jq -crM '.items[].metadata.name'))<br />
$ NUMNODES=${#NODES[@]}<br />
$ echo ${NODES[$[ $RANDOM % $NUMNODES ]]}<br />
<br />
* Get all recent events sorted by their timestamps:<br />
$ kubectl get events --sort-by='.metadata.creationTimestamp'<br />
<br />
* Get a list of all Pods in the default namespace sorted by Node:<br />
$ kubectl get po -o wide --sort-by=.spec.nodeName<br />
<br />
* Get the cluster IP for a service named "foo":<br />
$ kubectl get svc/foo -o jsonpath='{.spec.clusterIP}'<br />
<br />
* List all Services in a cluster and their node ports:<br />
$ kubectl get --all-namespaces svc -o json |\<br />
jq -r '.items[] | [.metadata.name,([.spec.ports[].nodePort | tostring ] | join("|"))] | @csv'<br />
<br />
* Print just the Pod names of those Pods with the label <code>app=nginx</code>:<br />
$ kubectl get --no-headers=true pods -l app=nginx -o custom-columns=:metadata.name<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o go-template --template '<nowiki>{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get --no-headers=true pods -l app=nginx -o name | awk -F "/" '{print $2}'<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o jsonpath='{.items[*].metadata.name}'<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o json | jq -crM '.items [] | .metadata.name'<br />
<br />
* Get a list of all container images used by the Pods in your default namespace:<br />
 $ kubectl get pods -o go-template --template='<nowiki>{{range .items}}{{range .spec.containers}}{{.image}}{{"\n"}}{{end}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get pods -o go-template="<nowiki>{{range .items}}{{range .spec.containers}}{{.image}}|{{end}}{{end}}</nowiki>" | tr '|' '\n'<br />
<br />
* Get a list of Pods sorted by Node name:<br />
$ kubectl get po -o json | jq -r '.items | sort_by(.spec.nodeName)[] | [.spec.nodeName,.metadata.name] | @tsv'<br />
<br />
* Get status transitions of each Pod in the default namespace:<br />
$ export tpl='{range .items[*]}{"\n"}{@.metadata.name}{range @.status.conditions[*]}{"\t"}{@.type}={@.status}{end}{end}'<br />
$ kubectl get po -o jsonpath="${tpl}" && echo<br />
<br />
cheddar-cheese-d6d6587c7-4bgcz Initialized=True Ready=True PodScheduled=True<br />
echoserver-55f97d5bff-pdv65 Initialized=True Ready=True PodScheduled=True<br />
stilton-cheese-6d64cbc79-g7h4w Initialized=True Ready=True PodScheduled=True<br />
<br />
* Get a list of all Pods in status "Failed":<br />
$ kubectl get pods -o go-template='<nowiki>{{range .items}}{{if eq .status.phase "Failed"}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}</nowiki>'<br />
<br />
* Get all users in all namespaces:<br />
 $ kubectl get rolebindings --all-namespaces -o go-template \<br />
--template='<nowiki>{{range .items}}{{println}}{{.metadata.namespace}}={{range .subjects}}{{if eq .kind "User"}}{{.name}} {{end}}{{end}}{{end}}</nowiki>'<br />
<br />
* Get the memory limit assigned to a container in a given Pod:<br />
<pre><br />
$ kubectl get pod example-pod-name -n default \<br />
-o jsonpath="{.spec.containers[*].resources.limits}" <br />
</pre><br />
<br />
* Get a Bash prompt of your current context and namespace:<br />
<pre><br />
NORMAL="\[\033[00m\]"<br />
BLUE="\[\033[01;34m\]"<br />
RED="\[\e[1;31m\]"<br />
YELLOW="\[\e[1;33m\]"<br />
GREEN="\[\e[1;32m\]"<br />
PS1_WORKDIR="\w"<br />
PS1_HOSTNAME="\h"<br />
PS1_USER="\u"<br />
<br />
__kube_ps1()<br />
{<br />
CONTEXT=$(kubectl config current-context)<br />
NAMESPACE=$(kubectl config view -o jsonpath="{.contexts[?(@.name==\"${CONTEXT}\")].context.namespace}")<br />
if [ -z "$NAMESPACE" ]; then<br />
NAMESPACE="default"<br />
fi<br />
if [ -n "$CONTEXT" ]; then<br />
case "$CONTEXT" in<br />
*prod*)<br />
echo "${RED}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
;;<br />
*test*)<br />
echo "${YELLOW}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
;;<br />
*)<br />
echo "${GREEN}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
;;<br />
esac<br />
fi<br />
}<br />
<br />
export PROMPT_COMMAND='PS1="${GREEN}${PS1_USER}@${PS1_HOSTNAME}${NORMAL}:$(__kube_ps1)${BLUE}${PS1_WORKDIR}${NORMAL}\$ "'<br />
</pre><br />
<br />
===Client configuration===<br />
<br />
* Setup autocomplete in bash; bash-completion package should be installed first:<br />
$ source <(kubectl completion bash)<br />
<br />
* View Kubernetes config:<br />
$ kubectl config view<br />
<br />
* View specific config items by JSON path:<br />
$ kubectl config view -o jsonpath='{.users[?(@.name == "k8s")].user.password}'<br />
<br />
* Set credentials for foo.kubernetes.com:<br />
$ kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword<br />
<br />
===Viewing / finding resources===<br />
<br />
* List all services in the namespace:<br />
$ kubectl get services<br />
<br />
* List all pods in all namespaces in wide format:<br />
$ kubectl get pods -o wide --all-namespaces<br />
<br />
* List all pods in JSON (or YAML) format:<br />
$ kubectl get pods -o json<br />
<br />
* Describe resource details (node, pod, svc):<br />
$ kubectl describe nodes my-node<br />
<br />
* List services sorted by name:<br />
$ kubectl get services --sort-by=.metadata.name<br />
<br />
* List pods sorted by restart count:<br />
$ kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'<br />
<br />
* Rolling update pods for frontend-v1:<br />
$ kubectl rolling-update frontend-v1 -f frontend-v2.json<br />
<br />
* Scale a ReplicaSet named "foo" to 3:<br />
$ kubectl scale --replicas=3 rs/foo<br />
<br />
* Scale a resource specified in "foo.yaml" to 3:<br />
$ kubectl scale --replicas=3 -f foo.yaml<br />
<br />
* Execute a command in every pod / replica:<br />
$ for i in 0 1; do kubectl exec foo-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done<br />
<br />
* Get a list of ''all'' container IDs running in ''all'' Pods in ''all'' namespaces for a given Kubernetes cluster:<br />
<pre><br />
$ kubectl get pods --all-namespaces \<br />
-o jsonpath='{range .items[*]}{"pod: "}{.metadata.name}{"\n"}{range .status.containerStatuses[*]}{"\tname: "}{.containerID}{"\n\timage: "}{.image}{"\n"}{end}'<br />
<br />
# Example output:<br />
pod: cert-manager-848f547974-8m2k6<br />
name: containerd://358415173310a528a36ca2c19cdc3319f8fd96634c09957977767333b104d387<br />
image: quay.io/jetstack/cert-manager-controller:v1.5.3<br />
</pre><br />
<br />
===Manage resources===<br />
<br />
* Get documentation for pod or service:<br />
$ kubectl explain pods,svc<br />
<br />
* Create resource(s) like pods, services or DaemonSets:<br />
$ kubectl create -f ./my-manifest.yaml<br />
<br />
* Apply a configuration to a resource:<br />
$ kubectl apply -f ./my-manifest.yaml<br />
<br />
* Start a single instance of Nginx:<br />
$ kubectl run nginx --image=nginx<br />
<br />
* Create a secret with several keys:<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
name: mysecret<br />
type: Opaque<br />
data:<br />
  password: $(echo -n "s33msi4" | base64)<br />
  username: $(echo -n "jane" | base64)<br />
EOF<br />
</pre><br />
<br />
* Delete a resource:<br />
$ kubectl delete -f ./my-manifest.yaml<br />
<br />
===Monitoring and logging===<br />
<br />
* Deploy Heapster from Github repository:<br />
$ kubectl create -f deploy/kube-config/standalone/<br />
<br />
* Show metrics for nodes:<br />
$ kubectl top node<br />
<br />
* Show metrics for pods:<br />
$ kubectl top pod<br />
<br />
* Show metrics for a given pod and its containers:<br />
$ kubectl top pod pod_name --containers<br />
<br />
* Dump pod logs (STDOUT):<br />
$ kubectl logs pod_name<br />
<br />
* Stream pod container logs (STDOUT, multi-container case):<br />
$ kubectl logs -f pod_name -c my-container<br />
<br />
<!-- TODO: https://gist.github.com/so0k/42313dbb3b547a0f51a547bb968696ba --><br />
<br />
===Run tcpdump on containers running in Pods===<br />
<br />
* Find which node/host/IP the Pod in question is running on and also get the container ID:<br />
<pre><br />
$ kubectl describe pod busybox | grep -E "^Node:|Container ID: "<br />
Node: worker2/10.39.32.122<br />
Container ID: docker://a42cd31e62a905739b52d36b30eca5521fd250ac54280b43423027426b031a03<br />
<br />
#~OR~<br />
<br />
$ containerID=$(kubectl get po busybox -o jsonpath='{.status.containerStatuses[*].containerID}' | sed -e 's|docker://||g')<br />
$ hostIP=$(kubectl get po busybox -o jsonpath='{.status.hostIP}')<br />
</pre><br />
<br />
Log into the node/host running the Pod in question and then perform the following steps.<br />
<br />
* Get the virtual interface ID (note it will depend on which Container Network Interface you are using {e.g., veth, cali, etc.}):<br />
<pre><br />
$ docker exec a42cd31e62a905739b52d36b30eca5521fd250ac54280b43423027426b031a03 /bin/sh -c 'cat /sys/class/net/eth0/iflink'<br />
12<br />
<br />
# List all non-virtual interfaces:<br />
$ for iface in $(find /sys/class/net/ -type l ! -lname '*/devices/virtual/net/*' -printf '%f '); do echo "$iface is not virtual"; done<br />
ens192 is not virtual<br />
<br />
# Check if we are using veth or cali or something else:<br />
$ ls -1 /sys/class/net/ | awk '!/docker|lo|ens/{print substr($0,0,4);exit}'<br />
cali<br />
<br />
$ for i in /sys/class/net/veth*/ifindex; do grep -l 12 $i; done<br />
#~OR~<br />
$ for i in /sys/class/net/cali*/ifindex; do grep -l 12 $i; done<br />
/sys/class/net/cali12d4a061371/ifindex<br />
#~OR~<br />
echo $(find /sys/class/net/ -type l -lname '*/devices/virtual/net/*' -exec grep -l 12 {}/ifindex \;) | awk -F'/' '{print $5}'<br />
cali12d4a061371<br />
#~OR~<br />
$ ip link | grep ^12<br />
12: cali12d4a061371@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default<br />
#~OR~<br />
$ ip link | awk '/^12/{print $2}' | awk -F'@' '{print $1}'<br />
cali12d4a061371<br />
</pre><br />
<br />
* Now run [[tcpdump]] on this virtual interface (note: make sure you are running tcpdump on the ''same'' host as the Pod is running on):<br />
$ sudo tcpdump -i cali12d4a061371<br />
<br />
; Self-signed certificates<br />
<br />
If you are using the latest version of <code>kubectl</code> and are running it against a k8s cluster built with a self-signed cert, you can get around any "x509" errors with:<br />
$ export GODEBUG=x509ignoreCN=0<br />
<br />
===API resources===<br />
<br />
* Get a list of all the resource types and their latest supported version:<br />
<pre><br />
$ time for kind in $(kubectl api-resources | tail -n +2 | awk '{print $1}'); do<br />
kubectl explain ${kind};<br />
done | grep -E "^KIND:|^VERSION:"<br />
<br />
KIND: Binding<br />
VERSION: v1<br />
KIND: ComponentStatus<br />
VERSION: v1<br />
KIND: ConfigMap<br />
VERSION: v1<br />
...<br />
<br />
real 1m20.014s<br />
user 0m52.732s<br />
sys 0m17.751s<br />
</pre><br />
<br />
* Note: if you just want a version for a single/given kind:<br />
<pre><br />
$ kubectl explain deploy | head -2<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
</pre><br />
<br />
===kubectl-neat===<br />
<br />
: See: https://github.com/itaysk/kubectl-neat<br />
: See: [[jq]]<br />
<br />
* To easily copy a certificate secret from one namespace to another namespace run:<br />
<pre><br />
$ SOURCE_NAMESPACE=<update-me><br />
$ DESTINATION_NAMESPACE=<update-me><br />
$ kubectl -n ${SOURCE_NAMESPACE} get secret kafka-client-credentials -o json |\<br />
kubectl neat |\<br />
jq 'del(.metadata["namespace"])' |\<br />
kubectl apply -n ${DESTINATION_NAMESPACE} -f -<br />
</pre><br />
<br />
===Get CPU/memory for each node===<br />
<br />
<pre><br />
for node in $(kubectl get nodes -o=jsonpath='{.items[*].metadata.name}'); do<br />
echo "NODE: ${node}"; kubectl describe node ${node} | grep -E '^ cpu |^ memory ';<br />
done<br />
</pre><br />
<br />
===Get vCPU capacity===<br />
<br />
<pre><br />
$ kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"} \<br />
{.status.capacity.cpu}{\"\n\"}{end}"<br />
</pre><br />
<br />
==Miscellaneous examples==<br />
<br />
* Create a Namespace:<br />
<pre><br />
kind: Namespace<br />
apiVersion: v1<br />
metadata:<br />
name: my-namespace<br />
</pre><br />
<br />
; Testing the load balancing capabilities of a Service<br />
<br />
* Create a Deployment with two replicas of Nginx (i.e., 2 x Pods with identical containers, configuration, etc.):<br />
<pre><br />
$ cat << EOF >nginx-deploy.yml<br />
kind: Deployment<br />
apiVersion: apps/v1<br />
metadata:<br />
name: nginx-deploy<br />
spec:<br />
replicas: 2<br />
strategy:<br />
rollingUpdate:<br />
maxSurge: 1<br />
maxUnavailable: 0<br />
type: RollingUpdate<br />
selector:<br />
matchLabels:<br />
app: nginx<br />
template:<br />
metadata:<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f nginx-deploy.yml<br />
$ kubectl get deploy<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deploy 2 2 2 2 1h<br />
$ kubectl get po<br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deploy-8d68fb6cc-bspt8 1/1 Running 1 1h<br />
nginx-deploy-8d68fb6cc-qdvhg 1/1 Running 1 1h<br />
<br />
* Create a Service:<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
name: nginx-svc<br />
spec:<br />
ports:<br />
- port: 8080<br />
targetPort: 80<br />
protocol: TCP<br />
selector:<br />
app: nginx<br />
EOF<br />
<br />
$ kubectl get svc/nginx-svc<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
nginx-svc ClusterIP 10.101.133.100 <none> 8080/TCP 1h<br />
</pre><br />
<br />
* Overwrite the default index.html file (note: This is ''not'' persistent. The original default index.html file will be restored if the Pod fails and the Deployment brings up a new Pod and/or if you modify your Deployment {e.g., upgrade Nginx}. This is just for demonstration purposes):<br />
 $ kubectl exec -it nginx-deploy-8d68fb6cc-bspt8 -- sh -c 'echo "pod-01" > /usr/share/nginx/html/index.html'<br />
 $ kubectl exec -it nginx-deploy-8d68fb6cc-qdvhg -- sh -c 'echo "pod-02" > /usr/share/nginx/html/index.html'<br />
<br />
* Get the HTTP status code and server value from the header of a request to the Service endpoint:<br />
$ curl -Is 10.101.133.100:8080 | grep -E '^HTTP|Server'<br />
HTTP/1.1 200 OK<br />
Server: nginx/1.7.9 # <- This is the version of Nginx we defined in the Deployment above<br />
<br />
* Perform a GET request on the Service endpoint (ClusterIP+Port):<br />
<pre><br />
$ for i in $(seq 1 10); do curl -s 10.101.133.100:8080; done<br />
pod-02<br />
pod-01<br />
pod-02<br />
pod-02<br />
pod-02<br />
pod-01<br />
pod-02<br />
pod-02<br />
pod-02<br />
pod-02<br />
</pre><br />
Sometimes <code>pod-01</code> responded; sometimes <code>pod-02</code> responded.<br />
<br />
* Perform a GET on the Service endpoint 10,000 times and sum up which Pod responded for each request:<br />
<pre><br />
$ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c<br />
5018 pod-01 # <- number of times pod-01 responded to the request<br />
4982 pod-02 # <- number of times pod-02 responded to the request<br />
<br />
real 1m0.639s<br />
user 0m29.808s<br />
sys 0m11.692s<br />
</pre><br />
<br />
$ awk 'BEGIN{print 5018/(5018+4982);}'<br />
0.5018<br />
$ awk 'BEGIN{print 4982/(5018+4982);}'<br />
0.4982<br />
<br />
So, our Service is "load balancing" our two Nginx Pods in a roughly 50/50 fashion.<br />
<br />
In order to double-check that the Service is randomly selecting a Pod to serve the GET request, let's scale our Deployment from 2 to 3 replicas:<br />
$ kubectl scale deploy/nginx-deploy --replicas=3<br />
<br />
<pre><br />
$ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c<br />
3392 pod-01<br />
3335 pod-02<br />
3273 pod-03<br />
<br />
real 0m59.537s<br />
user 0m25.932s<br />
sys 0m9.656s<br />
</pre><br />
$ awk 'BEGIN{print 3392/(3392+3335+3273);}'<br />
0.3392<br />
$ awk 'BEGIN{print 3335/(3392+3335+3273);}'<br />
0.3335<br />
$ awk 'BEGIN{print 3273/(3392+3335+3273);}'<br />
0.3273<br />
<br />
Sure enough. Each of the 3 Pods is serving the GET request roughly 33% of the time.<br />
<br />
; Query selections<br />
<br />
* Create a "query selection" file:<br />
<pre><br />
$ cat << EOF >cluster-nodes-health.txt<br />
Name Kernel InternalIP MemoryPressure DiskPressure PIDPressure Ready<br />
.metadata.name .status.nodeInfo.kernelVersion .status.addresses[0].address .status.conditions[0].status .status.conditions[1].status .status.conditions[2].status .status.conditions[3].status<br />
EOF<br />
</pre><br />
<br />
* Use the above "query selection" file:<br />
<pre><br />
$ kubectl get nodes -o custom-columns-file=cluster-nodes-health.txt<br />
Name Kernel InternalIP MemoryPressure DiskPressure PIDPressure Ready<br />
10.10.10.152 5.4.0-1084-aws 10.10.10.152 False False False False<br />
10.10.11.12 5.4.0-1092-aws 10.10.11.12 False False False False<br />
10.10.12.22 5.4.0-1039-aws 10.10.12.22 False False False False<br />
</pre><br />
<br />
==Example YAML files==<br />
<br />
* Basic Pod using busybox:<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: busybox<br />
namespace: default<br />
spec:<br />
containers:<br />
- name: busybox<br />
image: busybox<br />
command:<br />
- sleep<br />
- "3600"<br />
imagePullPolicy: IfNotPresent<br />
restartPolicy: Always<br />
</pre><br />
<br />
* Basic Pod using busybox, which also prints out environment variables (including the ones defined in the YAML):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: env-dump<br />
spec:<br />
containers:<br />
- name: busybox<br />
image: busybox<br />
command:<br />
- env<br />
env:<br />
- name: USERNAME<br />
value: "Christoph"<br />
- name: PASSWORD<br />
value: "mypassword"<br />
</pre><br />
$ kubectl logs env-dump<br />
...<br />
PASSWORD=mypassword<br />
USERNAME=Christoph<br />
...<br />
<br />
* Basic Pod using alpine:<br />
<pre><br />
kind: Pod<br />
apiVersion: v1<br />
metadata:<br />
name: alpine<br />
namespace: default<br />
spec:<br />
containers:<br />
- name: alpine<br />
image: alpine<br />
command:<br />
- /bin/sh<br />
- "-c"<br />
- "sleep 60m"<br />
imagePullPolicy: IfNotPresent<br />
restartPolicy: Always<br />
</pre><br />
<br />
* Basic Pod running Nginx:<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: nginx-pod<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx<br />
restartPolicy: Always<br />
</pre><br />
<br />
* Create a Job that calculates pi up to 2000 decimal places:<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
name: pi<br />
spec:<br />
template:<br />
spec:<br />
containers:<br />
- name: pi<br />
image: perl<br />
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]<br />
restartPolicy: Never<br />
backoffLimit: 4<br />
</pre><br />
<br />
* Create a Deployment with two replicas of Nginx running:<br />
<pre><br />
apiVersion: apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
name: nginx-deployment<br />
spec:<br />
selector:<br />
matchLabels:<br />
app: nginx<br />
replicas: 2 <br />
template:<br />
metadata:<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.9.1<br />
ports:<br />
- containerPort: 80<br />
</pre><br />
<br />
* Create a basic Persistent Volume, which uses NFS:<br />
<pre><br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
name: mypv<br />
spec:<br />
capacity:<br />
storage: 1Gi<br />
volumeMode: Filesystem<br />
accessModes:<br />
- ReadWriteMany<br />
persistentVolumeReclaimPolicy: Recycle<br />
nfs:<br />
path: /var/nfs/general<br />
server: 172.31.119.58<br />
readOnly: false<br />
</pre><br />
<br />
* Create a Persistent Volume Claim against the above PV:<br />
<pre><br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
name: nfs-pvc<br />
spec:<br />
accessModes:<br />
- ReadWriteMany<br />
resources:<br />
requests:<br />
storage: 1Gi<br />
</pre><br />
<br />
* Create a Pod using a custom scheduler (i.e., not the default one):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: my-custom-scheduler<br />
annotations:<br />
scheduledBy: custom-scheduler<br />
spec:<br />
schedulerName: custom-scheduler<br />
containers:<br />
- name: pod-container<br />
image: k8s.gcr.io/pause:2.0<br />
</pre><br />
<br />
==Install k8s cluster manually in the Cloud==<br />
<br />
''Note: For this example, I will be using AWS and I will assume you already have 3 x EC2 instances running CentOS 7 in your AWS account. I will install Kubernetes 1.10.x.''<br />
<br />
* Disable services not supported (yet) by Kubernetes:<br />
$ sudo setenforce 0 # NOTE: Not persistent!<br />
#~OR~ Make persistent:<br />
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config<br />
<br />
$ sudo systemctl stop firewalld<br />
$ sudo systemctl mask firewalld<br />
$ sudo yum install -y iptables-services<br />
<br />
* Disable swap:<br />
$ sudo swapoff -a # NOTE: Not persistent!<br />
#~OR~ Make persistent:<br />
$ sudo vi /etc/fstab # comment out swap line<br />
$ sudo mount -a<br />
<br />
* Make sure routed traffic does not bypass iptables:<br />
 $ cat << EOF | sudo tee /etc/sysctl.d/k8s.conf<br />
net.bridge.bridge-nf-call-ip6tables = 1<br />
net.bridge.bridge-nf-call-iptables = 1<br />
EOF<br />
$ sudo sysctl --system<br />
<br />
* Install <code>kubelet</code>, <code>kubeadm</code>, and <code>kubectl</code> on '''''all''''' nodes in your cluster (both Master and Worker nodes):<br />
<pre><br />
$ cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo<br />
[kubernetes]<br />
name=Kubernetes<br />
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch<br />
enabled=1<br />
gpgcheck=1<br />
repo_gpgcheck=1<br />
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg<br />
EOF<br />
</pre><br />
<br />
$ sudo yum install -y kubelet kubeadm kubectl<br />
$ sudo systemctl enable kubelet && sudo systemctl start kubelet<br />
<br />
* Configure cgroup driver used by kubelet on '''''all''''' nodes (both Master and Worker nodes):<br />
<br />
Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. Verify that your Docker cgroup driver matches the kubelet config:<br />
<br />
$ docker info | grep -i cgroup<br />
$ grep -i cgroup /etc/systemd/system/kubelet.service.d/10-kubeadm.conf<br />
<br />
If the Docker cgroup driver and the kubelet config do not match, change the kubelet config to match the Docker cgroup driver. The flag you need to change is <code>--cgroup-driver</code>. If it is already set, you can update like so:<br />
<br />
$ sudo sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf<br />
<br />
Otherwise, you will need to open the systemd file and add the flag to an existing environment line.<br />
<br />
Then restart kubelet:<br />
<br />
$ sudo systemctl daemon-reload<br />
$ sudo systemctl restart kubelet<br />
<br />
* Run <code>kubeadm</code> on Master node:<br />
<br />
K8s requires a pod network to function. We are going to use Flannel, so we need to pass a flag to <code>kubeadm</code> so k8s knows how to configure itself:<br />
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16<br />
<br />
Note: This command might take a fair amount of time to complete.<br />
<br />
Once it has completed, make note of the "<code>join</code>" command output by <code>kubeadm init</code> that looks something like the following ('''DO NOT RUN THE FOLLOWING COMMAND YET!'''):<br />
# kubeadm join --token --discovery-token-ca-cert-hash sha256:<br />
<br />
You will run that command on the other non-master nodes (aka the "Worker Nodes") to allow them to join the cluster. However, '''do not''' run that command on the worker nodes until you have completed all of the following steps.<br />
<br />
* Create a directory:<br />
$ mkdir -p $HOME/.kube<br />
<br />
* Copy the configuration files to a location usable by the local user:<br />
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config <br />
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config<br />
<br />
* In order for your pods to communicate with one another, you will need to install pod networking. We are going to use Flannel for our Container Network Interface (CNI) because it is easy to install and reliable. <br />
$ kubectl apply -f <nowiki>https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</nowiki><br />
$ kubectl apply -f <nowiki>https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml</nowiki><br />
<br />
* Make sure everything is coming up properly:<br />
$ kubectl get pods --all-namespaces --watch<br />
Once the <code>kube-dns-xxxx</code> containers are up (i.e., in Status "Running"), your cluster is ready to accept worker nodes.<br />
<br />
* On each of the Worker nodes, run the <code>sudo kubeadm join ...</code> command that <code>kubeadm init</code> created for you (see above).<br />
<br />
* On the Master Node, run the following command:<br />
$ kubectl get nodes --watch<br />
Once the Status of the Worker Nodes returns "Ready", your k8s cluster is ready to use.<br />
<br />
* Example output of successful Kubernetes cluster:<br />
<pre><br />
$ kubectl get nodes<br />
NAME STATUS ROLES AGE VERSION<br />
k8s-01 Ready master 13m v1.10.1<br />
k8s-02 Ready <none> 12m v1.10.1<br />
k8s-03 Ready <none> 12m v1.10.1<br />
</pre><br />
<br />
That's it! You are now ready to start deploying Pods, Deployments, Services, etc. in your Kubernetes cluster!<br />
<br />
==Bash completion==<br />
''Note: The following only works on newer versions. I have tested that this works on version 1.9.1.''<br />
<br />
Add the following line to your <code>~/.bashrc</code> file:<br />
source <(kubectl completion bash)<br />
<br />
==Kubectl plugins==<br />
<br />
SEE: [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ Extend kubectl with plugins] for details.<br />
<br />
: FEATURE STATE: Kubernetes v1.11 (alpha)<br />
: FEATURE STATE: Kubernetes v1.15 (stable)<br />
<br />
This section shows you how to install and write extensions for <code>kubectl</code>. Usually called "plugins" or "binary extensions", this feature allows you to extend the default set of commands available in <code>kubectl</code> by adding new sub-commands to perform new tasks and extend the set of features available in the main distribution of <code>kubectl</code>.<br />
<br />
Get the example plugin code [https://github.com/kubernetes/kubernetes/tree/master/pkg/kubectl/plugins/examples from here].<br />
<br />
<pre><br />
.kube/<br />
└── plugins<br />
└── aging<br />
├── aging.rb<br />
└── plugin.yaml<br />
</pre><br />
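<br />
The <code>plugin.yaml</code> descriptor is what the (pre-v1.12, alpha-era) plugin loader reads. A rough sketch of its contents, with field names taken from the old alpha docs (treat this as an approximation, not the exact example file):<br />
<pre><br />
$ cat .kube/plugins/aging/plugin.yaml<br />
name: "aging"<br />
shortDesc: "Aging shows pods from the current namespace by age."<br />
command: "./aging.rb"<br />
</pre><br />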
<br />
$ chmod 0700 .kube/plugins/aging/aging.rb<br />
<br />
* See options:<br />
<pre><br />
$ kubectl plugin aging --help<br />
Aging shows pods from the current namespace by age.<br />
<br />
Usage:<br />
kubectl plugin aging [flags] [options]<br />
</pre><br />
<br />
* Usage:<br />
<pre><br />
$ kubectl plugin aging<br />
The Magnificent Aging Plugin.<br />
<br />
nginx-deployment-67594d6bf6-5t8m9: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
<br />
nginx-deployment-67594d6bf6-6kw9j: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
<br />
nginx-deployment-67594d6bf6-d8dwt: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
</pre><br />
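<br />
Note: the <code>plugin.yaml</code> mechanism shown above was the early alpha design and was removed in Kubernetes v1.12. In the current mechanism (stable as of v1.15, per the feature states above), a plugin is simply any executable on your <code>PATH</code> whose name starts with <code>kubectl-</code>. A minimal sketch (the <code>kubectl-hello</code> name is just an example):<br />
<pre><br />
$ cat >/usr/local/bin/kubectl-hello <<'EOF'<br />
#!/usr/bin/env bash<br />
# Toy plugin: list pods in the current namespace, oldest first.<br />
kubectl get pods --sort-by=.metadata.creationTimestamp<br />
EOF<br />
$ chmod +x /usr/local/bin/kubectl-hello<br />
$ kubectl hello    # kubectl finds the plugin via its name prefix<br />
</pre><br />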
<br />
==Local Kubernetes==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="6" bgcolor="#EFEFEF" | '''Local Kubernetes Comparisons'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Feature<br />
!kind<br />
!k3d<br />
!minikube<br />
!Docker Desktop<br />
!Rancher Desktop<br />
|- <br />
| Free || yes || yes || yes || yes (personal use & small businesses only) || yes<br />
|--bgcolor="#eeeeee"<br />
| Install || easy || easy || easy || easy || medium (you may encounter odd scenarios)<br />
|-<br />
| Ease of Use || medium || medium || medium || easy || easy<br />
|--bgcolor="#eeeeee"<br />
| Stability || stable || stable || stable || stable || stable<br />
|-<br />
| Cross-platform || yes || yes || yes || yes || yes<br />
|--bgcolor="#eeeeee"<br />
| CI Usage || yes || yes || yes || no || no<br />
|-<br />
| Multiple clusters || yes || yes || yes || no || no<br />
|--bgcolor="#eeeeee"<br />
| Podman support || yes || yes || yes || no || no<br />
|-<br />
| Host volumes mount support || yes || yes || yes (with some performance limitations) || yes || yes (only pre-defined paths)<br />
|--bgcolor="#eeeeee"<br />
| Kubernetes service port-forwarding/mapping || yes || yes || yes || yes || yes<br />
|-<br />
| Pull-through Docker mirror/proxy || yes || yes || no || yes (can reference locally available images) || yes (can reference locally available images)<br />
|--bgcolor="#eeeeee"<br />
| Custom CNI || yes (ex: calico) || yes (ex: flannel) || yes (ex: calico) || no || no<br />
|-<br />
| Feature Gates || yes || yes || yes || yes (but not natively; requires hacky setup) || yes (but not natively; requires hacky setup)<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
[https://bmiguel-teixeira.medium.com/local-kubernetes-the-one-above-all-3aedbeb5f3f6 Source]<br />
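<br />
For example, spinning up (and tearing down) a throwaway local cluster with <code>kind</code> (assumes <code>kind</code> and Docker are already installed; the cluster name is arbitrary):<br />
<pre><br />
$ kind create cluster --name dev<br />
$ kubectl cluster-info --context kind-dev   # kind names the context "kind-<cluster-name>"<br />
$ kind delete cluster --name dev<br />
</pre><br />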
<br />
==See also==<br />
* [[Kubernetes/the-hard-way|Kubernetes the Hard Way]]<br />
* [[Kubernetes/GKE|Google Kubernetes Engine]] (GKE)<br />
* [[Kubernetes/AWS|Kubernetes on AWS]] (EKS)<br />
* [[Kubeless]]<br />
* [[Helm]]<br />
<br />
==External links==<br />
* [http://kubernetes.io/ Official website]<br />
* [https://github.com/kubernetes/kubernetes Kubernetes code] &mdash; via GitHub<br />
===Playgrounds===<br />
* [https://www.katacoda.com/courses/kubernetes/playground Kubernetes Playground]<br />
* [https://labs.play-with-k8s.com Play with k8s]<br />
===Tools===<br />
* [https://github.com/kubernetes/minikube minikube] &mdash; Run Kubernetes locally<br />
* [https://kind.sigs.k8s.io/ kind] &mdash; '''K'''ubernetes '''IN''' '''D'''ocker (local clusters for testing Kubernetes)<br />
* [https://github.com/kubernetes/kops kops] &mdash; Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management<br />
* [https://kubernetes-incubator.github.io/kube-aws kube-aws] &mdash; a command-line tool to create/update/destroy Kubernetes clusters on AWS<br />
* [https://github.com/kubernetes-incubator/kubespray kubespray] &mdash; Deploy a production ready kubernetes cluster<br />
* [https://rook.io/ Rook.io] &mdash; File, Block, and Object Storage Services for your Cloud-Native Environments<br />
===Resources===<br />
* [https://kubernetes.io/docs/getting-started-guides/scratch/ Creating a Custom Cluster from Scratch]<br />
* [https://github.com/kelseyhightower/kubernetes-the-hard-way Kubernetes The Hard Way]<br />
* [http://k8sport.org/ K8sPort]<br />
* [https://k8s.af/ Kubernetes Failure Stories]<br />
<br />
===Training===<br />
* [https://kubernetes.io/training/ Official Kubernetes Training Website]<br />
** Kubernetes and Cloud Native Associate (KCNA)<br />
** Certified Kubernetes Application Developer (CKAD)<br />
** Certified Kubernetes Administrator (CKA)<br />
** Certified Kubernetes Security Specialist (CKS) [note: Candidates for CKS must hold a current Certified Kubernetes Administrator (CKA) certification to demonstrate they possess sufficient Kubernetes expertise before sitting for the CKS.]<br />
* [https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals Kubernetes Fundamentals] (LFS258)<br />
** ''[https://www.cncf.io/certification/expert/ Certified Kubernetes Administrator]'' (CKA) certification.<br />
* [https://killer.sh/ CKS / CKA / CKAD Simulator]<br />
* [https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hacked/ 11 Ways (Not) to Get Hacked]<br />
<br />
===Blog posts===<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727 Understanding kubernetes networking: pods] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-services-f0cb48e4cc82 Understanding kubernetes networking: services] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-ingress-1bc341c84078 Understanding kubernetes networking: ingress] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-68d061f7ab5b Kubernetes ConfigMaps and Secrets - Part 1] &mdash; by Sandeep Dinesh, 2017-07-13<br />
* [https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-part-2-3dc37111f0dc Kubernetes ConfigMaps and Secrets - Part 2] &mdash; by Sandeep Dinesh, 2017-08-08<br />
* [https://abhishek-tiwari.com/10-open-source-tools-for-highly-effective-kubernetes-sre-and-ops-teams/ 10 open-source Kubernetes tools for highly effective SRE and Ops Teams]<br />
* [https://www.ianlewis.org/en/tag/kubernetes Series of blog posts about k8s] &mdash; by Ian Lewis<br />
* [https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0 Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?] &mdash; by Sandeep Dinesh, 2018-03-11<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:DevOps]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Category:Travel_Log&diff=8283Category:Travel Log2023-12-05T19:06:04Z<p>Christoph: /* Flights */</p>
<hr />
<div>This category will be my, as yet, unorganised '''Travel Log''' to many places around the world. (Note: The following is very much an ''incomplete'' travel log.)<br />
<br />
== Auto ==<br />
<br />
===Berlin trip (2006)===<br />
* Monaco &rarr; Milano &rarr; Ljubljana &rarr; Rotterdam &rarr; Berlin &rarr; Copenhagen &rarr; Monaco: April 2006<br />
: [http://triptracker.net/trip/1165/ TripTracker]<br />
: 1-Apr-2006 (14h20): Monaco &rarr; Milano<br />
: 2-Apr-2006 (23h30): Milano &rarr; Ljubljana<br />
: 3-Apr-2006 &ndash; 5-Apr-2006: Slovenia (Ljubljana, Novo Mesto, Kranj, Postojna, Jesenice, etc.)<br />
: 5-Apr-2006 (12h30): |&larr; Austria (Villach)<br />
: 5-Apr-2006 (15h15): |&larr; Germany<br />
: 5-Apr-2006 (19h15): Stuttgart<br />
: 5-Apr-2006 (20h20): Karlsruhe<br />
: 5-Apr-2006 (23h30): Köln<br />
: 5-Apr-2006 (00h10): |&larr; The Netherlands<br />
: 5-Apr-2006 (02h00): Rotterdam<br />
: 7-Apr-2006 (12h00): |&rarr; Rotterdam<br />
: 7-Apr-2006 (14h45): |&larr; Germany<br />
: 7-Apr-2006 (17h00): Hannover<br />
: 7-Apr-2006 (18h30): Magdeburg<br />
: 7-Apr-2006 (20h00): Berlin<br />
: 8-Apr-2006 (15h30): |&rarr; Berlin<br />
: 8-Apr-2006 (18h00): Rostock<br />
: 8-Apr-2006 (19h30): Ferry (|&rarr; Germany from Rostock Harb.)<br />
: 8-Apr-2006 (21h15): Ferry (|&larr; Denmark at Gedsen)<br />
: 8-Apr-2006 (23h20): København<br />
: 9-Apr-2006 (06h30): |&rarr; København<br />
: 9-Apr-2006 (09h00): Ferry (|&rarr; Denmark from Gedsen)<br />
: 9-Apr-2006 (11h00): Ferry (|&larr; Germany at Rostock Harb.)<br />
: 9-Apr-2006 (13h30): |&larr; Berlin<br />
: 9-Apr-2006 (14h00): |&rarr; Berlin<br />
: 9-Apr-2006 (15h50): Dresden<br />
:10-Apr-2006 (00h45): |&larr; Slovenia<br />
:10-Apr-2006 (01h40): Ljubljana<br />
:10-Apr-2006 (02h40): Postojna<br />
:10-Apr-2006 (13h15): |&larr; Italy<br />
:10-Apr-2006 (15h00): Padova<br />
:10-Apr-2006 (15h40): Verona<br />
:10-Apr-2006 (18h50): Genova<br />
:10-Apr-2006 (20h35): |&larr; France<br />
:10-Apr-2006 (20h45): |&larr; Monaco<br />
<br />
===Canada trip (2001)===<br />
''Note: The total trip covered 11,893 km (7,390 miles).''<br />
*Corvallis, OR &rarr; Boston, MA &rarr; Quebec &rarr; Ontario &rarr; Manitoba &rarr; Saskatchewan &rarr; Alberta &rarr; British Columbia &rarr; Corvallis, OR<br />
** 01-Sep-2001 (??h??): |&rarr; Corvallis, OR<br />
** 06-Sep-2001 (15h45): |&larr; Massachusetts<br />
** 13-Sep-2001 (13h15): |&rarr; Westborough, MA<br />
** 13-Sep-2001 (17h46): Augusta, ME<br />
** 13-Sep-2001 (18h15): |&larr; CANADA (into Quebec)<br />
** 14-Sep-2001 (02h06): Grande Allee Est., Quebec<br />
** 14-Sep-2001 (15h01): Cap-Madeleine, PQ<br />
** 15-Sep-2001 (17h44): Thunder Bay, ON<br />
** 14-Sep-2001 (17h45): |&larr; Ontario<br />
** 14-Sep-2001 (20h03): Cobden, ON<br />
** 15-Sep-2001 (12h02): Sudbury, ON<br />
** 15-Sep-2001 (10h25): Wawa, ON<br />
** 15-Sep-2001 (22h01): Kenora, ON<br />
** 15-Sep-2001 (10h37): |&larr; Manitoba<br />
** 16-Sep-2001 (10h53): Brandon, MB<br />
** 16-Sep-2001 (12h50): |&larr; Saskatchewan<br />
** 16-Sep-2001 (16h09): Herbert, SK<br />
** 16-Sep-2001 (18h06): |&larr; Alberta<br />
** 16-Sep-2001 (23h00): |&larr; British Columbia<br />
** 17-Sep-2001 (00h30): |&larr; USA (into Idaho)<br />
** 17-Sep-2001 (03h36): Coeur d'Alene, ID<br />
** 17-Sep-2001 (05h30): |&larr; Oregon<br />
<br />
===Ireland trip (1999-2000)===<br />
* 26-Dec-1999 (??h??): Dublin, Ireland<br />
* 26-Dec-1999 (16h13): Lord Edward St., Dublin<br />
* 27-Dec-1999 (??h??): Kinlay House, Christchurch, 2-12 Lord Edward St., Dublin, Ireland<br />
* 2?-Dec-1999 (??h??): Kilkenny<br />
* 28-Dec-1999 (12h27): Patrick St., Cork<br />
* 28-Dec-1999 (17h12): Mallow, Co. Cork<br />
* 29-Dec-1999 (??h??): Co. Kerry<br />
* ??-Dec-1999 (??h??): Saratoga House (Bed & Breakfast), Muckross Road, Killarney, Ireland<br />
* 29-Dec-1999 (15h09): Chapel St., Limerick<br />
* 29-Dec-1999 (15h18): Eimear<br />
* 30-Dec-1999 (??h??): Ballybofey<br />
* 30-Dec-1999 (15h51): Greysteel<br />
* 30-Dec-1999 (??h??): O'Connell St., Sligo<br />
* 30-Dec-1999 (??h??): Petra, Galway<br />
* 30-Dec-1999 (??h??): Sligo<br />
* 30-Dec-1999 (??h??): The Linen House Backpackers Hostel, 18-20 Kent Street, Belfast, Ireland<br />
* 01-Jan-2000 (14h46): Arthur Sq., Belfast<br />
* 02-Jan-2000 (06h34): Dublin Airport<br />
<br />
===Miscellaneous (Europe)===<br />
* Budapest, Hungary &rarr; Dubrovnik, Croatia: June/July 2018 (round-trip)<br />
* ''The Cliffs of Møn'', DK: Oct-2005<br />
* Budapest, Hungary &rarr; Vienna, Austria &rarr; Salzburg, Austria &rarr; Chiemsee, Germany: Oct-1996 (round-trip)<br />
* Zagreb, Croatia &rarr; Ljubljana, Slovenia &rarr; Graz, Austria &rarr; Budapest, Hungary: Sep-1996<br />
* Zagreb, Croatia &rarr; Ljubljana, Slovenia: Sep-1996 (round-trip)<br />
* Budapest, Hungary &rarr; Zagreb, Croatia: Sep-1996<br />
* Budapest, Hungary &rarr; Vienna, Austria &rarr; Salzburg, Austria &rarr; Berchtesgaden, Germany &rarr; Innsbruck, Austria &rarr; Liechtenstein &rarr; Switzerland: Aug-1996 (round-trip)<br />
* Warsaw, Poland &rarr; Budapest, Hungary: September 1994<br />
* Budapest, Hungary &rarr; Slovakia (11-Nov-1993) &rarr; Warsaw, Poland: November 1993<br />
* Vienna, Austria &rarr; Budapest, Hungary: 28-Sep-1993<br />
<br />
===Miscellaneous (South America)===<br />
* Cuenca, Ecuador &rarr; Riobamba, Ecuador &rarr; Ambato, Ecuador &rarr; Quito, Ecuador: 1993 (round-trip)<br />
* Quito, Ecuador &#187; Ipiales, Colombia: 1993 (round-trip)<br />
* Guayaquil, Ecuador &rarr; Santo Domingo de Los Colorados, Ecuador &rarr; Quito, Ecuador: 1993<br />
* Guayaquil, Ecuador &rarr; Salinas, Ecuador: 1993 (round-trip)<br />
* Tumbes, Peru &rarr; Guayaquil, Ecuador: 21-Dec-1992<br />
<br />
===Miscellaneous (North America)===<br />
* Seattle, WA &#187; Chelan, WA &#187; Seattle, WA: July 2023 (576 km/358 mi)<br />
* Seattle, WA &#187; Cle Elum, WA &#187; Chelan, WA &#187; Republic, WA &#187; Leavenworth, WA &#187; Monroe, WA &#187; Seattle, WA: April 2023 (933 km/580 mi)<br />
* Seattle, WA &#187; Winthrop, WA &#187; Leavenworth, WA &#187; Issaquah, WA &#187; Seattle, WA: June 2022<br />
* Seattle, WA &#187; Winthrop, WA &#187; Tiger, WA &#187; Spokane, WA &#187; Seattle, WA: May 2022 (1,200 km/744 mi)<br />
* Seattle, WA &#187; Portland, OR &#187; Grants Pass, OR &#187; Crescent City, CA &#187; Redwood National Forest &#187; Newport, OR &#187; Astoria, OR &#187; Elma, WA &#187; Seattle, WA: November 2021 (1,881 km/1,169 mi)<br />
* Seattle, WA &#187; Mt Saint Helens &#187; Mt Adams &#187; Stonehenge Memorial &#187; Multnomah Falls &#187; Seattle, WA: September 2021 (914 km/568 mi)<br />
* Seattle, WA &#187; Walla Walla, OR &#187; Joseph, OR &#187; Lewiston, ID &#187; Grand Coulee, WA &#187; Seattle, WA: June 2021 (1,421 km/883 mi)<br />
* Seattle, WA &#187; Pendleton, OR &#187; Craters of the Moon National Monument & Preserve &#187; Idaho Springs, ID &#187; Jackson, WY &#187; Grand Teton National Park &#187; Yellowstone National Park &#187; Missoula, MT &#187; Coeur d'Alene, ID &#187; Seattle, WA: September 2020 (2,746 km/1,706 mi)<br />
* Seattle, WA &#187; Coeur d'Alene, ID &#187; Missoula, MT &#187; Glacier National Park, MT &#187; Seattle, WA: July 2019 (1,984 km/1,233 mi)<br />
* Seattle, WA &#187; Corvallis, OR: November 2018 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2017 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2016 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2015 (round-trip)<br />
* Texas &#187; Oklahoma &#187; Kansas &#187; Nebraska &#187; South Dakota &#187; Wyoming &#187; Montana &#187; Idaho &#187; Seattle, WA: September 2015 (4,000 km/4,290 mi)<br />
* Seattle, WA &#187; Oregon &#187; Idaho &#187; Utah &#187; Wyoming &#187; Colorado &#187; Kansas &#187; Oklahoma &#187; Texas: 11-16 May 2013<br />
* Seattle, WA &#187; Port Angeles, WA &#187; Hurricane Ridge, WA: 28-Dec-2012 (round-trip)<br />
* Seattle, WA &#187; Portland, OR: 4-Dec-2012 (round-trip)<br />
* Chicago, IL &#187; Milwaukee, WI &#187; Minneapolis, MN &#187; Fargo, ND &#187; Billings, MT &#187; Coeur d'Alene, ID &#187; Seattle, WA: 25-26 June 2012 (3,357 km/2,086 mi)<br />
* St. Louis, MO &#187; Chicago, IL: 31-Dec-2011<br />
* Chicago, IL &#187; St. Louis, MO: 5-Jul-2011<br />
* Milwaukee, WI &#187; Chicago, IL: 30-Jun-2011<br />
* Pittsburgh, PA &#187; New York City, NY: April 2005 (round-trip)<br />
* Pittsburgh, PA &#187; Bethlehem, PA &#187; Westborough, MA &#187; New York City, NY: December 2004 (round-trip)<br />
* Pittsburgh, PA &#187; Boston, MA: November 2004 (round-trip)<br />
* Corvallis, OR &#187; Salt Lake City, UT &#187; Houston, TX &#187; Atlanta, GA &#187; Pittsburgh, PA: September 2004<br />
* Corvallis, OR &#187; Boston, MA: 2001, 2002 (round-trip)<br />
* Corvallis, OR &#187; Vancouver, BC, Canada (round-trip)<br />
* Corvallis, OR &#187; Tijuana, Mexico: 7-Sep-1999 (round-trip)<br />
* Los Angeles, CA &#187; Corvallis, OR: January 1998<br />
* Houston, TX &#187; Milwaukee, WI &#187; Menominee, MI: May 1995 (round-trip)<br />
<br />
== Bus / Train / Ferry ==<br />
===Spain trip (2006)===<br />
* Monaco &#187; Cannes &#187; Marseille &#187; Montpellier St-Ro &#187; Barcelona: April 2006 (round-trip)<br />
** 24-Apr-06 18h35: |&rarr; Nice, France [SNCF train]<br />
** 24-Apr-06 19h00: Antibes, FR<br />
** 24-Apr-06 19h07: Cannes, FR<br />
** 24-Apr-06 19h30: B. sur-Mer, FR<br />
** 24-Apr-06 19h39: San Raphael-Valescure, FR<br />
** 24-Apr-06 20h14: Les Arcs-Drag., FR<br />
** 24-Apr-06 20h56: Toulon, FR<br />
** 24-Apr-06 21h35: Marseille, FR<br />
** 25-Apr-06 15h05: |&rarr; Marseille, FR<br />
** 25-Apr-06 16h16: Nîmes, FR<br />
** 25-Apr-06 17h21: Montpellier St-Ro, FR<br />
** 25-Apr-06 18h42: Béziers, FR<br />
** 25-Apr-06 19h35: Perpignan, FR<br />
** 25-Apr-06 20h15: Portbou, Spain (ES) [''border'']<br />
** 25-Apr-06 22h30: Barcelona, ES<br />
** 27-Apr-06 19h24: |&rarr; Barcelona, ES [Renfe train]<br />
** 27-Apr-06 22h05: Cerbere, FR [''border'']<br />
** 28-Apr-06 08h37: Nice, FR<br />
** 28-Apr-06 10h00: Monaco<br />
<br />
===Miscellaneous (Europe)===<br />
* Tallinn, Estonia &rarr; Helsinki, Finland: January 2020 (round-trip)<br />
* Lisbon, Portugal &rarr; Porto, Portugal: Nov-2016 (round-trip)<br />
* København, DK &#187; Berlin, D: 09-Apr-2006 [+Ferry]<br />
* Berlin, D &#187; København, DK: 08-Apr-2006 (15h15) [+Ferry]<br />
* Ljubljana, Slovenia &#187; Villach HBF, Austria: 18-Aug-1997<br />
* Stockholm C &#187; Oslo S: 15-Aug-1997 (SJ train)<br />
* Salzburg, Austria &#187; Ljubljana, Slovenia: 25-Aug-1997 (&#214;sterreichische Bundesbahnen train (&#214;BB))<br />
* Haslev, DK &#187; Næstved, DK: 24-Aug-1997 (DSB train)<br />
* København &#187; Stockholm C: 14-Aug-1997 (DSB train)<br />
* Oslo S &#187; Bergen: 16-Aug-1997<br />
* Næstved, DK &#187; Rødby Færge, DK: 24-Aug-1997<br />
* Salzburg HBF &#187; Villach HBF (&uuml;ber Schwarzach-St. Veit Bad Gastein): 25-Aug-1997 (&#214;BB train)<br />
* Oslo S &#187; Trondheim: 18-Aug-1997<br />
* Grensen (Scandinavia): 16-Aug-1997<br />
* Abisko Turiststation - STF: 20-Aug-1997<br />
* Abisko Turiststation - STF: 21-Aug-1997<br />
* Germany: 24-Aug-1997 (DB train)<br />
* Stockholm S:T Eriksgatan: 15-Aug-1997<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: Jun-1997 (round-trip)<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: Mar-1997 (round-trip)<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: (28-Nov-1997/30-Nov-1997) (round-trip)<br />
* Budapest, Hungary &rarr; Ljubljana, Slovenia: 8-Nov-1996<br />
* Budapest, Hungary &rarr; Slovakia: 18-Aug-1995 (round-trip)<br />
* Budapest, Hungary &rarr; Vienna, Austria: 9-Feb-1995 (round-trip)<br />
* Moscow, Russia &rarr; Warsaw, Poland: Sep-1994<br />
* Moscow, Russia &rarr; Brest, Belarus: Aug-1994 (round-trip)<br />
* Moscow, Russia &rarr; Minsk, Belarus: Jul-1994 (round-trip)<br />
* Warsaw, Poland &#187; Moscow, Russia: Jun-1994<br />
* Warsaw, Poland &rarr; Vilnius, Lithuania &rarr; Riga, Latvia: (12-Jan-1994/??-Jan-1994) (round-trip)<br />
<br />
===Miscellaneous (South America)===<br />
* Arequipa, Peru &rarr; Lima, Peru: 1992<br />
* Arequipa, Peru &rarr; Iquique, Chile: (17-Jul-1992/20-Jul-1992) (round-trip)<br />
* Lima, Peru &rarr; Arequipa, Peru: 1992<br />
* Lima, Peru &rarr; La Paz, Bolivia: (19-May-1991/6-Jun-1991) (round-trip)<br />
* Lima, Peru &rarr; Quito, Ecuador: (29-Nov-1990/11-Dec-1990) (round-trip)<br />
* Lima, Peru &rarr; Quito, Ecuador: (6-Jul-1990/20-Jul-1990) (round-trip)<br />
<br />
==Flights==<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2023 [RT]<br />
* Seattle, WA (SEA) ✈ New York City, NY (JFK): October 2023 [RT] {~5-6 hours x 2}<br />
* Seattle, WA (SEA) ✈ Phoenix, AZ (PHX): March 2023 [RT]<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): February 2023 [RT]<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2022 [RT]<br />
* Seattle, WA (SEA) ✈ Milwaukee, WI (MKE): August 2022 [RT]<br />
* Kyiv, Ukraine (KBP) ✈ Frankfurt, Germany (FRA) ✈ Seattle, WA (SEA): December 2021<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD) ✈ Frankfurt, Germany (FRA) ✈ Kyiv, Ukraine (KBP): December 2021<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2021 [RT]<br />
* Memphis, TN (MEM) ✈ Atlanta, GA (ATL) ✈ Seattle, WA (SEA): June 2021<br />
* Seattle, WA (SEA) ✈ Salt Lake City, UT (SLC) ✈ Memphis, TN (MEM): June 2021<br />
* Seattle, WA (SEA) ✈ Milwaukee, WI (MKE): May 2021 [RT]<br />
* Tallinn, Estonia (TLL) ✈ Stockholm, Sweden (ARN) ✈ Newark, New Jersey (EWR) ✈ Seattle, WA (SEA): January 2020<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD) ✈ København, DK (CPH) ✈ Helsinki, Finland (HEL) ✈ Tallinn, Estonia (TLL): December 2019<br />
* Seattle, WA (SEA) ✈ San Francisco, CA (SFO): October 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Miami, FL (MIA): September 2019 [RT]<br />
* Seattle, WA (SEA) ✈ San Francisco, CA (SFO): September 2019 [RT]<br />
* Seattle, WA (SEA) ✈ San Jose, CA (SJC): August 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Denver, CO (DEN): May 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Charlotte, NC (CLT): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Santa Ana, CA (SNA): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ San Jose, CA (SJC): September 2018 [RT]<br />
* Budapest, Hungary (BUD) ✈ Brussels, Belgium (BRU) ✈ Newark, New Jersey (EWR) ✈ Seattle, WA (SEA): July 2018<br />
* Seattle, WA (SEA) ✈ Toronto, Canada (YYZ) ✈ Budapest, Hungary (BUD): June 2018<br />
* Seattle, WA (SEA) ✈ Reno, NV (RNO): May 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Reykjavík, Iceland (RKV): December 2017 [RT]<br />
* Seattle, WA (SEA) ✈ Kona, Hawaii (KOA): September 2017 [RT]<br />
* Seattle, WA (SEA) ✈ Salt Lake City, UT (SLC): August 2017 [RT]<br />
* Amsterdam, NL (AMS) ✈ Seattle, WA (SEA): November 2016<br />
* Lisbon, Portugal ✈ Amsterdam, NL (AMS): November 2016<br />
* Paris, FR (CDG) ✈ Lisbon, Portugal: November 2016<br />
* Seattle, WA (SEA) ✈ Paris, FR (CDG): November 2016<br />
* Seattle, WA (SEA) ✈ Portland, OR (PDX): November 2016 [RT]<br />
* Seattle, WA (SEA) ✈ Las Vegas, NV (LAS): June 2016 [RT]<br />
* Houston, TX (IAH) ✈ Seattle, WA (SEA): September 2015 [RT]<br />
* Houston, TX (IAH) ✈ San Francisco, CA (SFO): August 2015 [RT]<br />
* Houston, TX (IAH) ✈ Madison, WI (MSN): March 2015 [RT]<br />
* Houston, TX (IAH) ✈ Amsterdam, NL (AMS): March 2015 [RT]<br />
* Seattle, WA (SEA) ✈ Milwaukee (MKE): June 2011<br />
* Seattle, WA (SEA) ✈ Phoenix, AZ (PHX) ✈ Chicago, IL (ORD): October 2010 [RT]<br />
* Seattle, WA (SEA) ✈ Los Angeles, CA (LAX): December 2007 [RT]<br />
* København, DK (CPH) ✈ Seattle, WA (SEA): June 2006<br />
* Heathrow, UK ✈ København, DK (CPH): June 2006<br />
* Nice, FR ✈ Heathrow, UK: June 2006<br />
* København, DK (CPH) ✈ Nice, FR (NCE): February 2006<br />
* Washington Dulles ✈ København, DK: August 2005<br />
* Pittsburgh, PA (PIT) ✈ Washington Dulles: August 2005<br />
* Portland, OR (PDX) ✈ Pittsburgh, PA (PIT): Summer 2004 [RT]<br />
* Eugene, OR ✈ Houston, TX (IAH): February 2002 [RT]<br />
* Portland, OR (PDX) ✈ Boston, MA: December 2002 [RT]<br />
* Seattle, WA (SEA) ✈ Portland, OR (PDX): January 2000<br />
* Amsterdam, NL (AMS) ✈ Seattle, WA (SEA): January 2000<br />
* Dublin, Ireland ✈ Amsterdam, NL (AMS): January 2000<br />
* Amsterdam (AMS) ✈ Dublin, Ireland: December 1999<br />
* Seattle, WA (SEA) ✈ Amsterdam, NL (AMS): December 1999<br />
* Portland, OR (PDX) ✈ Seattle, WA (SEA): December 1999<br />
* Chicago (ORD) ✈ Los Angeles (LAX): December 1997<br />
* Green Bay, WI (GRB) ✈ Chicago (ORD): December 1997<br />
* Chicago (ORD) ✈ Green Bay, WI (GRB): December 1997<br />
* Rome, Italy (FCO) ✈ Chicago, IL (ORD): December 1997<br />
* Trieste, Italy (TRS) ✈ Rome, Italy (FCO): December 1997<br />
* Houston, TX (IAH) ✈ Budapest, Hungary (BUD): July 1996<br />
* Narita, Japan ✈ Los Angeles (LAX) ✈ Houston, TX: June 1996<br />
* Narita, Japan ✈ Los Angeles (LAX) ✈ Houston, TX: March 1996 [RT]<br />
* Narita, Japan ✈ Taipei, Taiwan: December 1995 [RT]<br />
* Los Angeles, CA (LAX) ✈ Narita, Japan: October 1995<br />
* Houston, TX (IAH) ✈ Los Angeles (LAX): October 1995<br />
* Budapest, Hungary (BUD) ✈ Houston, TX (IAH): September 1995<br />
* Budapest, Hungary (BUD) ✈ Houston, TX (IAH): May 1995 [RT]<br />
* Paris, FR (CDG) ✈ Vienna, Austria: September 1993<br />
* Quito, Ecuador ✈ Caracas, Venezuela (CCS) ✈ Paris, France: 1993<br />
* Lima, Peru ✈ Tumbes, Peru: December 1992<br />
* Boston, MA ✈ Miami, FL ✈ Lima, Peru: <br />
* Amsterdam, NL (AMS) ✈ Chicago, IL (ORD): <br />
* Boston, MA ✈ Amsterdam, NL (AMS):<br />
<br />
== Individual Places ==<br />
=== Ireland ===<br />
* Dublin<br />
** '''Dublin''' (Baile &Aacute;tha Cliath)<br />
* Kildare<br />
** Naas<br />
* Laois<br />
* Carlow<br />
** Carlow (Ceatharlach)<br />
** Royal Oak<br />
* Kilkenny<br />
** '''Kilkenny''' (Cill Chainnigh)<br />
** Callan<br />
* Tipperary<br />
** Glenbower<br />
** Clonmel (Cluain Meala)<br />
** Cahir<br />
** Burncourt<br />
* Cork<br />
** Fermoy<br />
** '''Cork''' (Corcaigh)<br />
** Fota<br />
** Cobh (An C&oacute;bh)<br />
** '''Blarney'''<br />
** Macroom<br />
** Ballyvourney<br />
* Kerry<br />
** ''Derrynasaggart Mts''<br />
** Poulgorm Br<br />
** '''Killarney''' (Cill Airne)<br />
** Farranfore<br />
* Limerick<br />
** Abbeyfeale<br />
** ''Mullaghareirk Mts''<br />
** Newcastle West<br />
** Croagh<br />
** '''Limerick''' (Luimneach)<br />
* Clare<br />
** Bunratty<br />
** Ennis (Inis)<br />
** Ennistymon<br />
** Liscannor<br />
** ''Cliffs of Moher''<br />
** Doolin<br />
** Lisdoonvarna<br />
** Ballyvaughan<br />
** Bealaclugga<br />
** Burren<br />
* Galway<br />
** Kinvarra<br />
** Ballinderreen<br />
** Oranmore<br />
** '''Galway''' (Gaillimh)<br />
** Claregalway<br />
** Tuam<br />
* Mayo<br />
** Claremorris<br />
** Cloonfallagh<br />
** Charlestown<br />
* Sligo<br />
** Curry<br />
** Tubbercurry<br />
** Collooney<br />
** '''Sligo''' (Sligeach)<br />
** ''Dartry Mts''<br />
* Leitrim<br />
* Donegal<br />
** Bundoran<br />
** Ballyshannon<br />
** Donegal (D&uacute;n na nGall)<br />
** Ballybofey<br />
** Clady<br />
* Tyrone<br />
** '''Strabane''' (Northern Ireland)<br />
* Londonderry<br />
** Derry (Londonderry)<br />
** Eglinton<br />
** Ballykelly<br />
** Limavady<br />
** Coleraine<br />
* Antrim<br />
** Derrykelghan<br />
** Moss-side<br />
** Ballycastle<br />
** ''Antrim Hills''<br />
** Ballintoy<br />
** ''Carrick-a-Rede Rope Bridge''<br />
** ''Giants Causeway''<br />
** Craignamaddy<br />
** Ballymoney<br />
** Ballymena<br />
** Antrim<br />
** ''Lough Neagh'' (lake)<br />
** Dunadry<br />
** Newtownabbey<br />
** '''Belfast'''<br />
* Down<br />
** Lisburn<br />
** Banbridge<br />
* Armagh<br />
** Newry<br />
* Louth<br />
** Dundalk (Dun Dealgan)<br />
** Dunleen<br />
** Drogheda (Droichead Atha)<br />
* Meath<br />
** Julianstown<br />
* Dublin<br />
** Balbriggan<br />
** Swords<br />
<br />
[[Category:World Travels]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Yq&diff=8281Yq2023-09-28T19:43:22Z<p>Christoph: Created page with "'''yq''' is a lightweight and portable command-line YAML, JSON and XML processor. yq uses jq like syntax but works with YAML files as well as JSON, XML, properties, csv an..."</p>
<hr />
<div>'''yq''' is a lightweight and portable command-line YAML, JSON, and XML processor. yq uses [[jq]]-like syntax but works with YAML files as well as JSON, XML, properties, CSV, and TSV.<br />
<br />
==Examples==<br />
<br />
<pre><br />
$ yq -i '.spec.replicas=1' deployment/nginx/deploy.yml                                      # -i edits the file in place: set the replica count to 1<br />
$ yq -i '.spec.template.spec.containers[0].ports[0].containerPort=8080' deployment/nginx/deploy.yml   # change the first container's first port<br />
$ yq -i 'del(.resources[2])' deployment/nginx/kustomization.yml                             # delete the third entry from the resources list<br />
</pre><br />
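<br />
A couple of read-only examples as well (reusing the same hypothetical file paths as above; <code>-o=json</code> selects the output format in yq v4):<br />
<pre><br />
$ yq '.spec.replicas' deployment/nginx/deploy.yml          # read a single value<br />
$ yq -o=json '.' deployment/nginx/deploy.yml               # convert YAML to JSON<br />
</pre><br />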
<br />
==External links==<br />
* [https://github.com/mikefarah/yq Official website]<br />
<br />
[[Category:Linux Command Line Tools]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Category:Books&diff=8280Category:Books2023-08-27T18:08:40Z<p>Christoph: /* Titles (completed) */</p>
<hr />
<div>My love of books runs deep. I try to read for at least an hour every day (books unrelated to my studies). This category will contain a list of the books I have read or [[Summer Reading List|am reading]].<br />
<br />
==Titles (completed)==<br />
''Note: This is a list of books I have read in their entirety. It is nowhere near complete and is in no particular order.''<br />
<br />
#'''''From Dawn to Decadence: 1500 to the Present: 500 Years of Western Cultural Life''''' &mdash; by Jacques Barzun<br />
#'''''The Invention of Science: The Scientific Revolution from 1500 to 1750''''' &mdash; by David Wootton<br />
#'''''Predictably Irrational: The Hidden Forces That Shape Our Decisions''''' &mdash; by Dan Ariely (2008)<br />
#'''''The Tyranny of Experts: Economists, Dictators, and the Forgotten Rights of the Poor''''' &mdash; by William Easterly<br />
#'''''The Origins of Political Order: From Prehuman Times to the French Revolution''''' &mdash; by Francis Fukuyama<br />
#'''''Political Order and Political Decay: From the Industrial Revolution to the Globalization of Democracy''''' &mdash; by Francis Fukuyama<br />
#'''''Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World''''' &mdash; by Bruce Schneier<br />
#'''''Superintelligence: Paths, Dangers, Strategies''''' &mdash; by Nick Bostrom<br />
#'''''Smashing Physics''''' &mdash; by Jon Butterworth<br />
#'''''The History of the Ancient World: From the Earliest Accounts to the Fall of Rome''''' &mdash; by Susan Wise Bauer<br />
#'''''The History of the Medieval World: From the Conversion of Constantine to the First Crusade''''' &mdash; by Susan Wise Bauer<br />
#'''''The History of the Renaissance World: From the Rediscovery of Aristotle to the Conquest of Constantinople''''' &mdash; by Susan Wise Bauer<br />
#'''''The Well Educated Mind: A Guide to the Classical Education You Never Had''''' &mdash; by Susan Wise Bauer<br />
#'''''The Story of Western Science: From the Writings of Aristotle to the Big Bang Theory''''' &mdash; by Susan Wise Bauer (2015)<br />
#'''''Countdown to Zero Day''''' &mdash; by Kim Zetter<br />
#'''''The Revenge of Geography''''' &mdash; by Robert D. Kaplan<br />
#'''''The Master of Disguise''''' &mdash; by Antonio J. Mendez<br />
#'''''To Explain the World: The Discovery of Modern Science''''' &mdash; by Steven Weinberg (2015)<br />
#'''''The Fall of the Roman Empire''''' &mdash; by Peter Heather<br />
#'''''The Shadow Factory''''' &mdash; by James Bamford<br />
#'''''Operation Shakespeare''''' &mdash; by John Shiffman<br />
#'''''No Place to Hide''''' &mdash; by Glenn Greenwald<br />
#'''''Neanderthal Man: In Search of Lost Genomes''''' &mdash; by Svante Pääbo (2014)<br />
#'''''Constantine the Emperor''''' &mdash; by David Potter<br />
#'''''A Troublesome Inheritance''''' &mdash; by Nicholas Wade<br />
#'''''The Selfish Gene''''' &mdash; by Richard Dawkins<br />
#'''''The 4-Hour Workweek: Escape 9-5, Live Anywhere, and Join the New Rich''''' &mdash; by [http://www.fourhourworkweek.com/blog/about/ Timothy Ferriss] (2007)<br />
#'''''Hackers: Heroes of the Computer Revolution''''' &mdash; by Steven Levy<br />
#'''''Wealth, Poverty, and Politics: An International Perspective''''' &mdash; Thomas Sowell<br />
#'''''The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win''''' &mdash; by Gene Kim, Kevin Behr, George Spafford<br />
#'''''Paper: Paging Through History''''' &mdash; by Mark Kurlansky<br />
#'''''Salt: A World History''''' &mdash; by Mark Kurlansky<br />
#'''''Guns, Germs, and Steel: The Fates of Human Societies''''' &mdash; by Jared Diamond (1997)<br />
#'''''Collapse: How Societies Choose to Fail or Succeed''''' &mdash; by Jared Diamond (2005)<br />
#'''''The Better Angels of Our Nature: Why Violence Has Declined''''' &mdash; by Steven Pinker<br />
#'''''How to Win Friends & Influence People''''' &mdash; by Dale Carnegie (1936)<br />
#'''''[[The True Believer: Thoughts on the Nature of Mass Movements]]''''' &mdash; Eric Hoffer (1951)<br />
#'''''An Economic History of the World since 1400''''' &mdash; by Professor Donald J. Harreld<br />
#'''''The End of the Cold War 1985-1991''''' &mdash; by Robert Service<br />
#'''''Iron Kingdom: The Rise and Downfall of Prussia, 1600-1947''''' &mdash; by Christopher Clark<br />
#'''''[https://www.goodreads.com/book/show/12158480-why-nations-fail Why Nations Fail: The Origins of Power, Prosperity, and Poverty]''''' &mdash; by Daron Acemoğlu and James A. Robinson (2012)<br />
#'''''The Six Wives of Henry VIII''''' &mdash; by Alison Weir (1991)<br />
#'''''The Demon-Haunted World: Science as a Candle in the Dark''''' &mdash; by Carl Sagan (1996)<br />
#'''''Dark Territory: The Secret History of Cyber War''''' &mdash; by Fred Kaplan (2016)<br />
#'''''A Brief History of Britain 1066-1485''''' &mdash; by Nicholas Vincent (2012)<br />
#'''''The History of Science: 1700-1900''''' &mdash; by Professor Frederick Gregory (2003)<br />
#'''''Heart of Europe: A History of the Holy Roman Empire''''' &mdash; by Peter H. Wilson (2016)<br />
#'''''[[The Story of Civilization]] - Volume 2: The Life of Greece''''' &mdash; by Will Durant (1939)<br />
#'''''The Story of Civilization - Volume 3: Caesar and Christ''''' &mdash; by Will Durant (1944)<br />
#'''''The Story of Civilization - Volume 4: The Age of Faith''''' &mdash; by Will Durant (1950)<br />
#'''''Red Sparrow''''' &mdash; by Jason Matthews (2013)<br />
#'''''Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time''''' &mdash; by Dava Sobel (1995)<br />
#'''''The Medici: Power, Money, and Ambition in the Italian Renaissance''''' &mdash; by Paul Strathern (2016)<br />
#'''''The Venetians: A New History: From Marco Polo to Casanova''''' &mdash; by Paul Strathern (2013)<br />
#'''''The Rise of Athens: The Story of the World's Greatest Civilization''''' &mdash; by Anthony Everitt (2016)<br />
#'''''Red Mars''''' &mdash; by Kim Stanley Robinson (1993)<br />
#'''''The Clockwork Universe: Isaac Newton, The Royal Society, and the Birth of the Modern World''''' &mdash; by Edward Dolnick (2011)<br />
#'''''The Skeptics' Guide to the Universe: How to Know What's Really Real in a World Increasingly Full of Fake''''' &mdash; by Steven Novella (2018)<br />
#'''''New Thinking: From Einstein to Artificial Intelligence, the Science and Technology That Transformed Our World''''' &mdash; by Dagogo Altraide (2019)<br />
#'''''Flashpoints: The Emerging Crisis in Europe''''' &mdash; by George Friedman (2015)<br />
#'''''The War on Science: Who's Waging It, Why It Matters, What We Can Do About It''''' &mdash; by Shawn Lawrence Otto (2016)<br />
#'''''Permanent Record''''' &mdash; by Edward Snowden (2019)<br />
#'''''Mythos: The Greek Myths Reimagined''''' &mdash; by Stephen Fry (2019)<br />
#'''''Heroes: The Greek Myths Reimagined''''' &mdash; by Stephen Fry (2020)<br />
#'''''Troy: The Greek Myths Reimagined''''' &mdash; by Stephen Fry (2021)<br />
#'''''I Contain Multitudes: The Microbes Within Us and a Grander View of Life''''' &mdash; by Ed Yong (2016)<br />
#'''''How to Read a Book''''' &mdash; by Mortimer J. Adler and Charles Van Doren (1940)<br />
#'''''The Order: A Novel''''' &mdash; by Daniel Silva (2020)<br />
#'''''How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need''''' &mdash; by Bill Gates (2020)<br />
#'''''The Horse, the Wheel, and Language: How Bronze-Age Riders from the Eurasian Steppes Shaped the Modern World''''' &mdash; by David W. Anthony (2007)<br />
#'''''The Map of Knowledge: A Thousand-Year History of How Classical Ideas Were Lost and Found''''' &mdash; by Violet Moller (2019)<br />
#'''''Sapiens: A Brief History of Humankind''''' &mdash; by Yuval Noah Harari (2015)<br />
#'''''The Ascent of Money: A Financial History of the World''''' &mdash; by Niall Ferguson (2008)<br />
#'''''Civilization: The West and the Rest''''' &mdash; by Niall Ferguson (2011)<br />
#'''''Empire: How Britain Made the Modern World''''' &mdash; by Niall Ferguson (2017)<br />
#'''''The Square and the Tower: Networks and Power, from the Freemasons to Facebook''''' &mdash; by Niall Ferguson (2018)<br />
#'''''The House of Rothschild, Volume 1: Money's Prophets: 1798-1848''''' &mdash; by Niall Ferguson (2019)<br />
#'''''Doom: The Politics of Catastrophe''''' &mdash; by Niall Ferguson (2021)<br />
#'''''The Accidental Superpower: The Next Generation of American Preeminence and the Coming Global Disorder''''' &mdash; by Peter Zeihan (2014)<br />
#'''''The Strange Death of Europe: Immigration, Identity, Islam''''' &mdash; by Douglas Murray (2017)<br />
#'''''The War on the West''''' &mdash; by Douglas Murray (2022)<br />
#'''''12 Rules for Life: An Antidote to Chaos''''' &mdash; by Jordan B. Peterson (2018)<br />
#'''''The Historian''''' &mdash; by Elizabeth Kostova (2009)<br />
#'''''The Battle of Bretton Woods: John Maynard Keynes, Harry Dexter White, and the Making of a New World Order''''' &mdash; by Benn Steil (2013)<br />
#'''''The Gates of Europe: A History of Ukraine''''' &mdash; by Serhii Plokhy (2015)<br />
#'''''Children of Ash and Elm: A History of the Vikings''''' &mdash; by Neil Price (2020)<br />
<br />
==Titles (textbooks)==<br />
''Note: These are some of the textbooks that I not only read in their entirety whilst in university, but also studied thoroughly. This is very much an incomplete list.''<br />
<br />
#'''''X-ray Structure Determination''''' &mdash; by Stout and Jensen<br />
#'''''Inferring Phylogenies''''' &mdash; by Joseph Felsenstein, Sinauer Associates, Inc. (2003)<br />
#'''''A Biologist's Guide to Analysis of DNA Microarray Data'''''<br />
#'''''Molecular Cell Biology''''' &mdash; by Scott MP, Matsudaira P, Lodish H, Darnell J, Zipursky L, Kaiser CA, Berk A, and Krieger M. W. H. Freeman, 5th Edition (2003)<br />
#'''''Guide to Analysis of DNA Microarray Data''''' &mdash; by Knudsen S, 2nd Edition (2004)<br />
#'''''General Chemistry''''' &mdash; by Darrell D. Ebbing and Steven D. Gammon, Houghton Mifflin Company, Boston, 6th Edition (1999)<br />
#'''''Organic Chemistry''''' &mdash; by Paula Yurkanis Bruice, Prentice Hall, New Jersey, 3rd Edition (2001)<br />
#'''''Principles and Techniques for an Integrated Chemistry Laboratory''''' &mdash; by David A. Aikens, ''et al.'', Waveland Press, Inc., Prospect Heights (1984)<br />
#'''''Physical Chemistry''''' &mdash; by Peter Atkins and Julio de Paula, W.H. Freeman and Company, New York, 7th Edition (2002)<br />
#'''''Biochemistry''''' &mdash; by Christopher K. Mathews, K. E. van Holde, and Kevin G. Ahern, Addison Wesley Longman, San Francisco, 3rd Edition (2000)<br />
#'''''Biology''''' &mdash; by Neil A. Campbell, The Benjamin/Cummings Publishing Company, Inc., Redwood City, 5th Edition (1999)<br />
#'''''Essential Cell Biology''''' &mdash; by Bruce Alberts, ''et al.'', Garland Publishing, Inc. New York (1998)<br />
#'''''Genetics: From Genes to Genomes''''' &mdash; by Leland H. Hartwell, ''et al.'', McGraw-Hill Companies, Inc. Boston (2000)<br />
#'''''Evolution: An Introduction''''' &mdash; by Stephen C. Stearns and Rolf F. Hoekstra, Oxford University Press, Oxford (2000)<br />
#'''''Physics for Scientists and Engineers''''' &mdash; by Raymond A. Serway and Robert J. Beichner, Saunders College Publishing, Philadelphia, 5th Edition (2000)<br />
#'''''Physical Biochemistry''''' &mdash; by Kensal E. van Holde, W. Curtis Johnson, and P. Shing Ho, Prentice Hall, New Jersey (1998)<br />
#'''''Object-Oriented Software Development Using Java''''' &mdash; by Xiaoping Jia, Addison-Wesley, 2nd Edition<br />
#'''''Calculus''''' &mdash; by James Stewart<br />
#'''''Calculus: Early Transcendentals''''' &mdash; by James Stewart<br />
#'''''Single Variable Calculus: Early Transcendentals''''' &mdash; by James Stewart<br />
<br />
==Titles (uncategorized)==<br />
''Note: These are some of my favourite books that I have read. I have read others, but these stood out to me. This does not mean, in any way, that I necessarily agree with everything these books have to say; they just interested me.''<br />
#'''''The History of the Decline and Fall of the Roman Empire''''' &mdash; by Edward Gibbon (1776-1788) [http://www.gutenberg.org/browse/authors/g#a375][http://en.wikipedia.org/wiki/Outline_of_The_History_of_the_Decline_and_Fall_of_the_Roman_Empire]<br />
#'''''The House of Intellect''''' &mdash; by Jacques Barzun<br />
#'''''[http://librivox.org/thus-spake-zarathustra-by-friedrich-nietzsche/ Also sprach Zarathustra]''''' ("Thus Spoke Zarathustra") &mdash; by Friedrich Nietzsche (1883-5)<br />
#'''''Jenseits von Gut und Böse''''' ("Beyond Good and Evil") &mdash; by Friedrich Nietzsche (1886)<br />
#'''''Zur Genealogie der Moral''''' ("On the Genealogy of Morals") &mdash; by Friedrich Nietzsche (1887)<br />
#'''''Götzen-Dämmerung''''' ("Twilight of the Idols") &mdash; by Friedrich Nietzsche (1888)<br />
#'''''[http://librivox.org/the-antichrist-by-nietzsche/ Der Antichrist]''''' ("The Antichrist") &mdash; by Friedrich Nietzsche (1888)<br />
#'''''Ecce Homo''''' &mdash; by Friedrich Nietzsche (1888)<br />
#'''''Vom Nutzen und Nachtheil der Historie für das Leben''''' ("On the Use and Abuse of History for Life") &mdash; by Friedrich Nietzsche (1874)<br />
#'''''Die Traumdeutung''''' ("The Interpretation of Dreams") &mdash; by Sigmund Freud (1899)<br />
#'''''Das Ich und das Es''''' ("The Ego and the Id") &mdash; by Sigmund Freud (1923)<br />
#'''''Die Zukunft einer Illusion''''' ("The Future of an Illusion") &mdash; by Sigmund Freud (1927) <br />
#'''''Das Unbehagen in der Kultur''''' ("Civilization and Its Discontents") &mdash; by Sigmund Freud (1929)<br />
#'''''[[:wikipedia:A History of the English-Speaking Peoples|A History of the English-Speaking Peoples]]''''' &mdash; by Winston Churchill (1956–58)<br />
#'''''The Notebooks of Don Rigoberto''''' &mdash; by Mario Vargas Llosa<br />
#'''''Die Waffen nieder!''''' ("Lay Down Your Arms!") &mdash; by Baroness Bertha von Suttner (1889)<br />
#'''''Europe's Optical Illusion''''' (also published as "The Great Illusion") &mdash; by Sir Norman Angell (1909)<br />
#'''''Night''''' &mdash; by Elie Wiesel (1960)<br />
#'''''The End of Faith: Religion, Terror, and the Future of Reason''''' &mdash; by Sam Harris<br />
#'''''The Lexus and the Olive Tree: Understanding Globalization''''' &mdash; by Thomas L. Friedman<br />
#'''''The World Is Flat: A Brief History of the Twenty-first Century''''' &mdash; by Thomas L. Friedman<br />
#'''''The Case For Goliath: How America Acts As The World's Government in the Twenty-first Century''''' &mdash; by Michael Mandelbaum<br />
#'''''Caesar's Commentaries: On the Gallic War And on the Civil War''''' &mdash; by Julius Caesar<br />
#'''''Cem Escovadas Antes de Ir para Cama''''' ("One Hundred Strokes of the Brush before Bed") &mdash; by Melissa Panarello<br />
#'''''Coryat's Crudities: Hastily gobled up in Five Moneth's Travels''''' &mdash; by Thomas Coryat (1611)<br />
#'''''Italian Hours''''' &mdash; by Henry James (1909)<br />
#'''''Italienische Reise''''' ("Italian Journey") &mdash; by Johann Wolfgang von Goethe (1816/1817)<br />
#'''''Diarios de motocicleta''''' ("The Motorcycle Diaries") &mdash; by Che Guevara (1951)<br />
#'''''The Prince of Tides''''' &mdash; by Pat Conroy (1986)<br />
#'''''Il Nome Della Rosa''''' ("The Name of the Rose") &mdash; by Umberto Eco (1980)<br />
#'''''Il Pendolo di Foucault''''' ("Foucault's Pendulum") &mdash; by Umberto Eco (1988)<br />
#'''''The Book of the Courtier''''' ("Il Cortegiano") &mdash; by Baldassare Castiglione (1528) [http://en.wikipedia.org/wiki/Sprezzatura]<br />
#'''''One Hundred Years of Solitude''''' &mdash; by Gabriel García Márquez<br />
#'''''The Unbearable Lightness of Being: A Novel''''' &mdash; by Milan Kundera<br />
#'''''The Book of Laughter and Forgetting''''' &mdash; by Milan Kundera<br />
#'''''Masters of Rome''''' (series) &mdash; by Colleen McCullough<br />
#'''''The Wishing Game''''' &mdash; by Patrick Redmond<br />
#'''''The Measure Of All Things: The Seven-Year Odyssey and Hidden Error That Transformed the World''''' &mdash; by Ken Alder (2002)<br />
#'''''De la démocratie en Amérique''''' ("On Democracy in America") &mdash; by Alexis de Tocqueville (1835)<br />
#'''''The Anatomy of Revolution''''' &mdash; by Crane Brinton (1938)<br />
#'''''God and Gold: Britain, America, and the Making of the Modern World''''' &mdash; by Walter Russell Mead (2007)<br />
#'''''Black Mass: Apocalyptic Religion and the Death of Utopia''''' &mdash; by John Gray (2007)<br />
#'''''The Grand Chessboard: American Primacy and Its Geostrategic Imperatives''''' &mdash; by Zbigniew Brzezinski (1998)<br />
#'''''Kim''''' &mdash; by Rudyard Kipling (1901)<br />
#'''''The Lotus and the Wind''''' &mdash; by John Masters<br />
<br />
==Authors (uncategorized)==<br />
*[[wikipedia:Aldous Huxley|Aldous Huxley]] &mdash; [[Wikiquote:Aldous Huxley]]<br />
*[[wikipedia:Edgar Allan Poe|Edgar Allan Poe]] &mdash; [[Wikiquote:Edgar Allan Poe]]<br />
*[[wikipedia:Oscar Wilde|Oscar Wilde]] &mdash; [[Wikiquote:Oscar Wilde]]<br />
*[[wikipedia:George Orwell|George Orwell]] &mdash; [[Wikiquote:George Orwell]]<br />
*[[wikipedia:William Shakespeare|William Shakespeare]] &mdash; [[Wikiquote:William Shakespeare]]<br />
*[[wikipedia:Thomas Jefferson|Thomas Jefferson]] &mdash; [[Wikiquote:Thomas Jefferson]]<br />
*[[wikipedia:Mark Antony|Mark Antony]] &mdash; [[Wikiquote:Mark Antony]]<br />
*[[wikipedia:Jane Austen|Jane Austen]] &mdash; [[Wikiquote:Jane Austen]] ([http://en.wikipedia.org/wiki/Free_indirect_speech])<br />
*[[wikipedia:Albert Einstein|Albert Einstein]] &mdash; [[Wikiquote:Albert Einstein]]<br />
*[[Friedrich Nietzsche]] &mdash; [[Wikiquote:Friedrich Nietzsche]]<br />
*[[wikipedia:Sigmund Freud|Sigmund Freud]] &mdash; [[Wikiquote:Sigmund Freud]]<br />
*[[wikipedia:Plato|Plato]] &mdash; [[Wikiquote:Plato]]<br />
*[[wikipedia:Aristotle|Aristotle]] &mdash; [[Wikiquote:Aristotle]]<br />
*[[wikipedia:Baruch Spinoza|Baruch Spinoza]] (Benedictus de Spinoza; 1632–1677) &mdash; [[Wikiquote:Baruch Spinoza]]<br />
*[[wikipedia:Georg Wilhelm Friedrich Hegel|Georg Wilhelm Friedrich Hegel]] &mdash; [[Wikiquote:Georg Wilhelm Friedrich Hegel]]<br />
*[[wikipedia:Niccolò Machiavelli|Niccolò Machiavelli]] &mdash; [[Wikiquote:Niccolò Machiavelli]]<br />
*[[wikipedia:Immanuel Kant|Immanuel Kant]] &mdash; [[Wikiquote:Immanuel Kant]]<br />
*[[wikipedia:Lord Byron|Lord Byron]] (George Gordon Byron, 6th Baron Byron) &mdash; [[Wikiquote:Lord Byron]]<br />
*[[wikipedia:Mary Shelley|Mary Shelley]] &mdash; [[Wikiquote:Mary Shelley]]<br />
*[[wikipedia:Percy Bysshe Shelley|Percy Bysshe Shelley]] &mdash; [[Wikiquote:Percy Bysshe Shelley]]<br />
*[[wikipedia:Christopher Marlowe|Christopher Marlowe]] (1564–1593): English dramatist and poet. &mdash; [[Wikiquote:Christopher Marlowe]]<br />
*[[wikipedia:Francis Bacon|Francis Bacon]] &mdash; [[Wikiquote:Francis Bacon]]<br />
*[[wikipedia:Eric Hoffer|Eric Hoffer]] &mdash; [[Wikiquote:Eric Hoffer]]<br />
*[[wikipedia:Milton Friedman|Milton Friedman]] &mdash; [[Wikiquote:Milton Friedman]]<br />
*[[wikipedia:Roger Bacon|Roger Bacon]] (c. 1214–1294) &mdash; [[Wikiquote:Roger Bacon]]<br />
*[[wikipedia:Charles Baudelaire|Charles Baudelaire]] (1821–1867) &mdash; [[Wikiquote:Charles Baudelaire]]<br />
<br />
=== Authors (I have not read yet) ===<br />
* [[wikipedia:Simone de Beauvoir|Simone de Beauvoir]] (1908–1986): French existentialist philosopher, writer, and social essayist.<br />
* [[wikipedia:Jeremy Bentham|Jeremy Bentham]] (1748–1832): British jurist, eccentric, philosopher and social reformer, founder of utilitarianism. He had [[wikipedia:John Stuart Mill|John Stuart Mill]] as his disciple. (Quoted as saying "The spirit of dogmatic theology poisons anything it touches". ~ [http://www.positiveatheism.org/hist/quotes/quote-b0.htm].)<br />
* [[wikipedia:Albert Camus|Albert Camus]] (1913–1960): French philosopher and novelist, a luminary of existentialism.<br />
* [[wikipedia:Auguste Comte|Auguste Comte]] (1798–1857): French philosopher, considered the father of sociology. (Quoted as saying "The heavens declare the glory of Kepler and Newton". ~ [http://www.positiveatheism.org/hist/quotes/quote-c3.htm].)<br />
* [[wikipedia:André Comte-Sponville|André Comte-Sponville]] (1952–): French materialist philosopher.<br />
* [[wikipedia:Baron d'Holbach|Paul Henry Thiry, Baron d'Holbach]] (1723–1789): French man of letters, philosopher, and encyclopedist; member of the philosophical movement of French materialism, he attacked Christianity and religion as counter to the moral advancement of humanity.<br />
* [[wikipedia:Marquis de Condorcet|Marquis de Condorcet]] (1743–1794): French philosopher and mathematician of the Enlightenment.<br />
* [[wikipedia:Daniel Dennett|Daniel Dennett]] (1942–): American philosopher, leading figure in evolutionary biology and cognitive science, well-known for his book ''[[wikipedia:Darwin's Dangerous Idea|Darwin's Dangerous Idea]]''.<br />
* [[wikipedia:Denis Diderot|Denis Diderot]] (1713–1784): French philosopher, author, editor of the first encyclopedia. Known for the quote "Man will never be free until the last king is strangled with the entrails of the last priest".<br />
* [[wikipedia:Ludwig Andreas Feuerbach|Ludwig Andreas Feuerbach]] (1804–1872): German philosopher, postulated that God is merely a projection by humans of their own best qualities.<br />
* [[wikipedia:Paul Kurtz|Paul Kurtz]] (1926–): American philosopher, skeptic, founder of Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP) and the Council for Secular Humanism.<br />
* [[wikipedia:Karl Popper|Sir Karl Popper]] (1902–1994): Austrian-born British philosopher of science, who claimed that empirical falsifiability should be the criterion for distinguishing scientific theory from non-science.<br />
* [[wikipedia:Richard Rorty|Richard Rorty]] (1931–): American philosopher, whose ideas combine pragmatism with a [[wikipedia:Ludwig Wittgenstein|Wittgensteinian]] ontology that declares that meaning is a social-linguistic product of dialogue. He actually rejects the theist/atheist dichotomy and prefers to call himself "anti-clerical".<br />
* [[wikipedia:Bertrand Russell|Bertrand Russell, 3rd Earl Russell]], (1872–1970): British mathematician, philosopher, logician, political liberal, activist, popularizer of philosophy, and 1950 Nobel Laureate in Literature. On the issue of atheism/agnosticism, he wrote the essay "[[wikipedia:Why I Am Not a Christian|Why I Am Not a Christian]]".<br />
* [[wikipedia:Jean-Paul Sartre|Jean-Paul Sartre]] (1905–1980): French existentialist philosopher, dramatist, novelist and critic.<br />
* [[wikipedia:Peter Singer|Peter Singer]] (1946–): Australian philosopher and teacher, working on practical ethics from a utilitarian perspective, controversial for his opinions on abortion and euthanasia.<br />
* [[wikipedia:James Lovelock|James Lovelock]] (1919–) &mdash; [[Wikiquote:James Lovelock]]<br />
<br />
==External links==<br />
*[http://www.gutenberg.org/browse/scores/top Top 100 - Project Gutenberg]<br />
*[http://www.randomhouse.com/modernlibrary/100talkingpoints.html The Modern Library - 100 Best - Talking Points]<br />
*[http://www.randomhouse.com/modernlibrary/100bestnonfiction.html The Modern Library - 100 Best - Nonfiction]<br />
*[http://www.randomhouse.com/modernlibrary/100bestnovels.html The Modern Library - 100 Best - Novels]<br />
*[http://www.nytimes.com/pages/books/bestseller/ NY Times Best-Seller Lists]<br />
*[http://www.bookmooch.com/ BookMooch] &mdash; a free book trade and exchange community<br />
*[http://www.bookcrossing.com/ BookCrossing] &mdash; a free book club<br />
*[http://www.nndb.com/ Notable Names Database] (NNDB) &mdash; an online database of biographical details of notable people.<br />
*[http://wikisummaries.org/Main_Page WikiSummaries] &mdash; provides free book summaries<br />
*[http://www.fullbooks.com/ fullbooks.com]<br />
*[http://www.themodernword.com/eco/eco_writings.html Umberto Eco: His Own Writings]<br />
*[http://www.ulib.org/ UDL: Universal Digital Library] &mdash; has over 1.5 million books digitised.<br />
*[[wikipedia:List of historical novels]]<br />
<br />
{{stub}}</div>Christophhttp://wiki.christophchamp.com/index.php?title=Kubernetes&diff=8279Kubernetes2023-08-17T16:46:56Z<p>Christoph: /* Release history */</p>
<hr />
<div>'''Kubernetes''' (also known by its numeronym '''k8s''') is an open-source container cluster manager. Kubernetes' primary goal is to provide a platform for automating deployment, scaling, and operations of application containers across a cluster of hosts. Kubernetes was released by Google in July 2015.<br />
<br />
* Get the latest stable release of k8s with:<br />
$ curl -sSL <nowiki>https://dl.k8s.io/release/stable.txt</nowiki><br />
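<br />
* The returned version string can then be used to download a matching <code>kubectl</code> binary; for example (a sketch for Linux/amd64, following the upstream download URL scheme):<br />
 $ curl -LO "<nowiki>https://dl.k8s.io/release/$(curl -sSL https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl</nowiki>"<br />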
<br />
==Release history==<br />
<br />
NOTE: There is no such thing as Kubernetes Long-Term Support (LTS). There is a new "minor" release ''roughly'' every 3 months (note: this changed to ''roughly'' every 4 months in 2020).<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="3" bgcolor="#EFEFEF" | '''Kubernetes release history'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Release<br />
!Date<br />
!Cadence (days)<br />
|- align="left"<br />
|1.0 || 2015-07-10 ||align="right"|<br />
|--bgcolor="#eeeeee"<br />
|1.1 || 2015-11-09 ||align="right"| 122<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.2.md 1.2] || 2016-03-16 ||align="right"| 128<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.3.md 1.3] || 2016-07-01 ||align="right"| 107<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.4.md 1.4] || 2016-09-26 ||align="right"| 87<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.5.md 1.5] || 2016-12-12 ||align="right"| 77<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.6.md 1.6] || 2017-03-28 ||align="right"| 106<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.7.md 1.7] || 2017-06-30 ||align="right"| 94<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.8.md 1.8] || 2017-09-28 ||align="right"| 90<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.9.md 1.9] || 2017-12-15 ||align="right"| 78<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.10.md 1.10] || 2018-03-26 ||align="right"| 101<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.11.md 1.11] || 2018-06-27 ||align="right"| 93<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.12.md 1.12] || 2018-09-27 ||align="right"| 92<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.13.md 1.13] || 2018-12-03 ||align="right"| 67<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.14.md 1.14] || 2019-03-25 ||align="right"| 112<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md 1.15] || 2019-06-17 ||align="right"| 84<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md 1.16] || 2019-09-18 ||align="right"| 93<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md 1.17] || 2019-12-09 ||align="right"| 82<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md 1.18] || 2020-03-25 ||align="right"| 107<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md 1.19] || 2020-08-26 ||align="right"| 154<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md 1.20] || 2020-12-08 ||align="right"| 104<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md 1.21] || 2021-04-08 ||align="right"| 121<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md 1.22] || 2021-08-04 ||align="right"| 118<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md 1.23] || 2021-12-07 ||align="right"| 125<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md 1.24] || 2022-05-03 ||align="right"| 147<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md 1.25] || 2022-08-23 ||align="right"| 112<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md 1.26] || 2023-01-18 ||align="right"| 148<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md 1.27] || 2023-04-11 ||align="right"| 83<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md 1.28] || 2023-08-15 ||align="right"| 126<br />
|}<br />
</div><br />
<br clear="all"/><br />
See: [https://gravitational.com/blog/kubernetes-release-cycle The full-time job of keeping up with Kubernetes]<br />
<br />
==Providers and installers==<br />
<br />
* Vanilla Kubernetes<br />
* AWS:<br />
** Managed: EKS<br />
** Kops<br />
** Kube-AWS<br />
** Kismatic<br />
** Kubicorn<br />
** Stack Point Cloud<br />
* Google:<br />
** Managed: GKE<br />
** [[Kubernetes/the-hard-way|Kubernetes the Hard Way]]<br />
** Stack Point Cloud<br />
** Typhoon<br />
* Azure AKS<br />
* Ubuntu UKS<br />
* VMware PKS<br />
* [[Rancher|Rancher RKE]]<br />
* CoreOS Tectonic<br />
<br />
==Design overview==<br />
Kubernetes is built through the definition of a set of components (building blocks or "primitives") which, when used collectively, provide a method for the deployment, maintenance, and scalability of container-based application clusters.<br />
<br />
These "primitives" are designed to be ''loosely coupled'' (i.e., where little to no knowledge of the other component definitions is needed to use) as well as easily extensible through an API. Both the internal components of Kubernetes as well as the extensions and containers make use of this API.<br />
<br />
==Components==<br />
The building blocks of Kubernetes are the following (note that these are also referred to as Kubernetes "Objects" or "API Primitives"):<br />
<br />
;Cluster : A cluster is a set of machines (physical or virtual) on which your applications are managed and run. All machines are managed as a cluster (or set of clusters, depending on the topology used).<br />
;Nodes (minions) : You can think of these as "container clients". These are the individual hosts (physical or virtual) that have Docker installed and that host the various containers within your managed cluster.<br />
: Each node runs the kubelet agent as well as the Kubernetes Proxy (kube-proxy). Cluster state and messaging are handled by etcd (a key-value store used by Kubernetes for exchanging messages and reporting on cluster status), which runs on the master.<br />
;Pods : A pod consists of one or more containers. Those containers are guaranteed (by the cluster controller) to be located on the same host machine (aka "co-located") in order to facilitate sharing of resources. For example, it makes sense to have database processes and data containers as close as possible; ideally, they should be in the same pod.<br />
: Pods "work together", as in a multi-tiered application configuration. Each set of pods that define and implement a service (e.g., MySQL or Apache) are defined by the label selector (see below).<br />
: Pods are assigned unique IPs within each cluster. These allow an application to use ports without having to worry about conflicting port utilization.<br />
: Pods can contain definitions of disk volumes or shares, and then provide access from those to all the members (containers) within the pod.<br />
: Finally, pod management is done through the API or delegated to a controller.<br />
;Labels : Clients can attach key-value pairs to any object in the system (e.g., Pods or Nodes). These key-value pairs become the labels that identify the objects during configuration and management, and they can be used to filter, organize, and perform mass operations on a set of resources.<br />
;Selectors : Label Selectors represent queries that are made against those labels; they resolve to the corresponding matching objects. A Selector expression matches labels to filter certain resources. For example, you may want to search for all pods that belong to a certain service, or find all containers that have their tier Label set to "database". Labels and Selectors are inherently two sides of the same coin: you use Labels to classify resources and Selectors to find them and act on them.<br />
: These two items are the primary way that grouping is done in Kubernetes, and they determine which components a given operation applies to.<br />
;Controllers : These are used in the management of your cluster. Controllers are the mechanism by which your desired configuration state is enforced.<br />
: Controllers manage a set of pods and, depending on the desired configuration state, may engage other controllers to handle replication and scaling (via the Replication Controller) of a given number of containers and pods across the cluster. A controller is also responsible for replacing any container in a pod that fails (based on the desired state of the cluster).<br />
: Replication Controllers (RC) are a subset of Controllers and are an abstraction used to manage pod lifecycles. One of the key uses of RCs is to maintain a certain number of running Pods (e.g., for scaling or ensuring that at least one Pod is running at all times, etc.). It is considered a "best practice" to use RCs to define Pod lifecycles, rather than creating Pods directly.<br />
: Other controllers that can be engaged include a ''DaemonSet Controller'' (enforces a 1-to-1 ratio of pods to Worker Nodes) and a ''Job Controller'' (that runs pods to "completion", such as in batch jobs).<br />
: The set of pods a given controller manages is determined by the label selectors that are part of its definition.<br />
;Replica Sets: These define how many replicas of each Pod will be running. They also monitor and ensure the required number of Pods are running, replacing Pods that die. Replica Sets can act as replacements for Replication Controllers.<br />
;Services : A Service is an abstraction on top of Pods, which provides a single IP address and DNS name by which the Pods can be accessed. This load balancing configuration is much easier to manage and helps scale Pods seamlessly.<br />
: Kubernetes then provides service discovery and routing via the Service's stable (virtual) IP, load balancing connections (round-robin based) to that Service among the pods that match the indicated label selector.<br />
: By default, a Service is only exposed inside the cluster, but it can also be exposed outside the cluster, as needed.<br />
;Volumes : A Volume is a directory with data, which is accessible to a container. The Volume's lifetime matches that of the Pod that encloses it (see the sketch after this list).<br />
;Name : A name by which a resource is identified.<br />
;Namespace : A Namespace provides additional qualification to a resource name. This is especially helpful when multiple teams/projects are using the same cluster and there is a potential for name collision. You can think of a Namespace as a virtual wall between multiple clusters.<br />
;Annotations : An Annotation is a Label, but with much larger data capacity. Typically, this data is not readable by humans and is not easy to filter through. Annotation is useful only for storing data that may not be searched, but is required by the resource (e.g., storing strong keys, etc.).<br />
;Control Plane<br />
;API<br />
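<br />
The following is a minimal sketch (referenced from the ''Volumes'' entry above; all names are illustrative) of a Pod in which two containers share a single <code>emptyDir</code> Volume; the Volume, and the data in it, lasts only as long as the Pod does:<br />
<pre><br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: volume-demo<br />
spec:<br />
  containers:<br />
  - name: writer<br />
    image: busybox<br />
    # Write a file into the shared Volume, then stay alive<br />
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]<br />
    volumeMounts:<br />
    - name: shared-data<br />
      mountPath: /data<br />
  - name: reader<br />
    image: busybox<br />
    # The same Volume (and /data/msg) is visible here<br />
    command: ["sh", "-c", "sleep 3600"]<br />
    volumeMounts:<br />
    - name: shared-data<br />
      mountPath: /data<br />
  volumes:<br />
  - name: shared-data<br />
    emptyDir: {}<br />
</pre><br />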
<br />
===Pods===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/ Pod]'' is the smallest and simplest Kubernetes object. It is the unit of deployment in Kubernetes, which represents a single instance of the application. A Pod is a logical collection of one or more containers, which:<br />
<br />
* are scheduled together on the same host;<br />
* share the same network namespace; and<br />
* mount the same external storage (Volumes).<br />
<br />
Pods are ephemeral in nature, and they do not have the capability to self-heal. That is why we use them with controllers, which can handle a Pod's replication, fault tolerance, self-healing, etc. Examples of controllers are ''Deployments'', ''ReplicaSets'', ''ReplicationControllers'', etc. We attach the Pod's specification to other objects using Pod Templates (see below).<br />
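<br />
As an illustration of the shared network namespace (a sketch; the names are illustrative and not part of the cluster built below), the sidecar container in the following Pod can reach the nginx container on <code>localhost</code>:<br />
<pre><br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: shared-netns-demo<br />
spec:<br />
  containers:<br />
  - name: web<br />
    image: nginx:1.7.9<br />
    ports:<br />
    - containerPort: 80<br />
  - name: sidecar<br />
    image: busybox<br />
    # Same network namespace: nginx is reachable on localhost<br />
    command: ["sh", "-c", "sleep 10 && wget -qO- localhost:80 && sleep 3600"]<br />
</pre><br />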
<br />
===Labels===<br />
Labels are key-value pairs that can be attached to any Kubernetes object (e.g. ''Pods''). Labels are used to organize and select a subset of objects, based on the requirements in place. Many objects can have the same label(s). Labels do not provide uniqueness to objects. <br />
<br />
===Label Selectors===<br />
With Label Selectors, we can select a subset of objects. Kubernetes supports two types of Selectors:<br />
<br />
;Equality-Based Selectors : Equality-Based Selectors allow filtering of objects based on label keys and values. With this type of Selector, we can use the <code>=</code>, <code>==</code>, or <code>!=</code> operators. For example, with <code>env==dev</code>, we are selecting the objects where the "<code>env</code>" label is set to "<code>dev</code>".<br />
;Set-Based Selectors : Set-Based Selectors allow filtering of objects based on a set of values. With this type of Selector, we can use the <code>in</code>, <code>notin</code>, and <code>exists</code> operators. For example, with <code>env in (dev,qa)</code>, we are selecting objects where the "<code>env</code>" label is set to "<code>dev</code>" or "<code>qa</code>".<br />
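<br />
Both Selector types can be used directly at the command line; for example (the <code>env</code> label values are illustrative):<br />
 $ kubectl get pods -l env==dev           # equality-based<br />
 $ kubectl get pods -l 'env in (dev,qa)'  # set-based<br />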
<br />
===Replication Controllers===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/ ReplicationController]'' (rc) is a controller that is part of the Master Node's Controller Manager. It makes sure the specified number of replicas for a Pod is running at any given point in time. If there are more Pods than the desired count, the ReplicationController kills the extra Pods, and, if there are fewer Pods, it creates more Pods to match the desired count. Generally, we do not deploy a Pod independently, as it would not be able to re-start itself if something goes wrong. We always use controllers like ReplicationController to create and manage Pods.<br />
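<br />
As a quick sketch of working with the desired count (the controller name is illustrative; a full ReplicationController manifest is created later in this article):<br />
 $ kubectl get rc nginx-www                 # show current/desired replica counts<br />
 $ kubectl scale rc nginx-www --replicas=5  # change the desired count<br />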
<br />
===Replica Sets===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/ ReplicaSet]'' (rs) is the next-generation ReplicationController. ReplicaSets support both equality- and set-based Selectors, whereas ReplicationControllers only support equality-based Selectors. As of January 2018, this is the only difference.<br />
<br />
As an example, say you create a ReplicaSet with "desired replicas = 3" (so that, initially, "<code>current==desired</code>"). Any time "<code>current!=desired</code>" (e.g., one of the Pods dies), the ReplicaSet detects that the current state no longer matches the desired state and creates one more Pod, thus ensuring that the current state once again matches the desired state.<br />
<br />
ReplicaSets can be used independently, but they are mostly used by Deployments to orchestrate the Pod creation, deletion, and updates. A Deployment automatically creates the ReplicaSets, and we do not have to worry about managing them.<br />
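<br />
A minimal ReplicaSet sketch (illustrative names) using the set-based <code>matchExpressions</code> selector form that ReplicationControllers do not support. Note that the API group/version depends on the cluster: <code>apps/v1</code> on current clusters, <code>extensions/v1beta1</code> in the era of the cluster built below:<br />
<pre><br />
---<br />
apiVersion: apps/v1<br />
kind: ReplicaSet<br />
metadata:<br />
  name: nginx-rs<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    # Set-based selector; equivalent to "app in (nginx)"<br />
    matchExpressions:<br />
    - {key: app, operator: In, values: [nginx]}<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
</pre><br />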
<br />
===Deployments===<br />
''[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Deployment]'' objects provide declarative updates to Pods and ReplicaSets. The DeploymentController is part of the Master Node's Controller Manager, and it makes sure that the current state always matches the desired state.<br />
<br />
As an example, let's say we have a Deployment which creates a "ReplicaSet A". ReplicaSet A then creates 3 Pods. In each Pod, one of the containers uses the <code>nginx:1.7.9</code> image.<br />
<br />
Now, in the Deployment, we change the Pod's template and we update the image for the Nginx container from <code>nginx:1.7.9</code> to <code>nginx:1.9.1</code>. As we have modified the Pod's template, a new "ReplicaSet B" gets created. This process is referred to as a "Deployment rollout". (A rollout is only triggered when we update the Pod's template for a deployment. Operations like scaling the deployment do not trigger a rollout.) Once ReplicaSet B is ready, the Deployment starts pointing to it.<br />
<br />
On top of ReplicaSets, Deployments provide features like Deployment recording, with which, if something goes wrong, we can roll back to a previously known state.<br />
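<br />
The rollout and rollback operations described above are driven with <code>kubectl rollout</code>; for example (the deployment name is illustrative):<br />
 $ kubectl rollout status deployment/nginx-deployment    # watch a rollout complete<br />
 $ kubectl rollout history deployment/nginx-deployment   # list recorded revisions<br />
 $ kubectl rollout undo deployment/nginx-deployment      # roll back to the previous revision<br />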
<br />
===Namespaces===<br />
If we have numerous users whom we would like to organize into teams/projects, we can partition the Kubernetes cluster into sub-clusters using ''[https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ Namespaces]''. The names of the resources/objects created inside a Namespace are unique, but not across Namespaces.<br />
<br />
To list all the Namespaces, we can run the following command:<br />
$ kubectl get namespaces<br />
 NAME          STATUS    AGE<br />
 default       Active    2h<br />
 kube-public   Active    2h<br />
 kube-system   Active    2h<br />
<br />
Generally, Kubernetes creates two default namespaces: <code>kube-system</code> and <code>default</code>. The <code>kube-system</code> namespace contains the objects created by the Kubernetes system. The <code>default</code> namespace contains the objects which do not belong to any other namespace. By default, we connect to the <code>default</code> Namespace. <code>kube-public</code> is a special namespace, which is readable by all users and used for special purposes, like bootstrapping a cluster. <br />
<br />
Using ''[https://kubernetes.io/docs/concepts/policy/resource-quotas/ Resource Quotas]'', we can divide the cluster resources within Namespaces.<br />
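<br />
A sketch of creating a Namespace and scoping a Resource Quota to it (the names and the limit are illustrative):<br />
 $ kubectl create namespace team-a<br />
<pre><br />
---<br />
apiVersion: v1<br />
kind: ResourceQuota<br />
metadata:<br />
  name: team-a-quota<br />
  namespace: team-a<br />
spec:<br />
  hard:<br />
    # At most 10 Pods may exist in the team-a Namespace<br />
    pods: "10"<br />
</pre><br />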
<br />
===Component services===<br />
The component services running on a standard master/worker node(s) Kubernetes setup are as follows:<br />
* Kubernetes Master node(s)<br />
*; kube-apiserver : Exposes Kubernetes APIs<br />
*; kube-controller-manager : Runs controllers to handle nodes, endpoints, etc.<br />
*; kube-scheduler : Watches for new pods and assigns them nodes<br />
*; etcd : Distributed key-value store<br />
*; DNS : [optional] DNS for Kubernetes services<br />
* Worker node(s)<br />
*; kubelet : Manages pods on a node, volumes, secrets, creating new containers, health checks, etc.<br />
*; kube-proxy : Maintains network rules, port forwarding, etc.<br />
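<br />
On clusters of the vintage built below, a quick way to verify the health of the master components is the following (note: <code>componentstatuses</code> is deprecated in newer releases):<br />
 $ kubectl get componentstatuses<br />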
<br />
==Setup a Kubernetes cluster==<br />
<br />
<div style="margin: 10px; padding: 5px; border: 2px solid red;">'''IMPORTANT''': The following is how to setup Kubernetes 1.2 that is, as of January 2018, a very old version. I will update this article with how to setup k8s using a much newer version (v1.9) when I have time.<br />
</div><br />
<br />
In this section, I will show you how to setup a Kubernetes cluster with etcd and Docker. The cluster will consist of 1 master node and 3 worker nodes.<br />
<br />
===Setup VMs===<br />
<br />
For this demo, I will be creating 4 VMs via [[Vagrant]] (with VirtualBox).<br />
<br />
* Create Vagrant demo environment:<br />
$ mkdir -p $HOME/dev/kubernetes && cd $_<br />
<br />
* Create Vagrantfile with the following contents:<br />
<pre><br />
# -*- mode: ruby -*-<br />
# vi: set ft=ruby :<br />
<br />
require 'yaml'<br />
VAGRANTFILE_API_VERSION = "2"<br />
<br />
$common_script = <<COMMON_SCRIPT<br />
# Set verbose<br />
set -v<br />
# Set exit on error<br />
set -e<br />
echo -e "$(date) [INFO] Starting modified Vagrant..."<br />
sudo yum update -y<br />
# Timestamp provision<br />
date > /etc/vagrant_provisioned_at<br />
COMMON_SCRIPT<br />
<br />
unless defined? CONFIG<br />
  configuration_file = File.join(File.dirname(__FILE__), 'vagrant_config.yml')<br />
  CONFIG = YAML.load(File.open(configuration_file, File::RDONLY).read)<br />
end<br />
<br />
CONFIG['box'] = {} unless CONFIG.key?('box')<br />
<br />
def modifyvm_network(node)<br />
  node.vm.provider "virtualbox" do |vbox|<br />
    vbox.customize ["modifyvm", :id, "--nicpromisc1", "allow-all"]<br />
    #vbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]<br />
    vbox.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]<br />
  end<br />
end<br />
<br />
def modifyvm_resources(node, memory, cpus)<br />
  node.vm.provider "virtualbox" do |vbox|<br />
    vbox.customize ["modifyvm", :id, "--memory", memory]<br />
    vbox.customize ["modifyvm", :id, "--cpus", cpus]<br />
  end<br />
end<br />
<br />
## START: Actual Vagrant process<br />
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|<br />
<br />
  config.vm.box = CONFIG['box']['name']<br />
<br />
  # Uncomment the following line if you wish to be able to pass files from<br />
  # your local filesystem directly into the vagrant VM:<br />
  #config.vm.synced_folder "data", "/vagrant"<br />
<br />
  ## VM: k8s master #############################################################<br />
  config.vm.define "master" do |node|<br />
    node.vm.hostname = "k8s.master.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    #node.vm.network "forwarded_port", guest: 80, host: 8080<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['master']<br />
<br />
    # Uncomment the following if you wish to define CPU/memory:<br />
    #node.vm.provider "virtualbox" do |vbox|<br />
    #  vbox.customize ["modifyvm", :id, "--memory", "4096"]<br />
    #  vbox.customize ["modifyvm", :id, "--cpus", "2"]<br />
    #end<br />
    #modifyvm_resources(node, "4096", "2")<br />
  end<br />
  ## VM: k8s minion1 ############################################################<br />
  config.vm.define "minion1" do |node|<br />
    node.vm.hostname = "k8s.minion1.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion1']<br />
  end<br />
  ## VM: k8s minion2 ############################################################<br />
  config.vm.define "minion2" do |node|<br />
    node.vm.hostname = "k8s.minion2.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion2']<br />
  end<br />
  ## VM: k8s minion3 ############################################################<br />
  config.vm.define "minion3" do |node|<br />
    node.vm.hostname = "k8s.minion3.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion3']<br />
  end<br />
  ###############################################################################<br />
<br />
end<br />
</pre><br />
<br />
The above Vagrantfile uses the following configuration file:<br />
$ cat vagrant_config.yml<br />
<pre><br />
---<br />
box:<br />
  name: centos/7<br />
  storage_controller: 'SATA Controller'<br />
debug: false<br />
development: false<br />
network:<br />
  dns1: 8.8.8.8<br />
  dns2: 8.8.4.4<br />
  internal:<br />
    network: 192.168.200.0/24<br />
  external:<br />
    start: 192.168.100.100<br />
    end: 192.168.100.200<br />
    network: 192.168.100.0/24<br />
    bridge: wlan0<br />
    netmask: 255.255.255.0<br />
    broadcast: 192.168.100.255<br />
host_groups:<br />
  master: 192.168.200.100<br />
  minion1: 192.168.200.101<br />
  minion2: 192.168.200.102<br />
  minion3: 192.168.200.103<br />
</pre><br />
<br />
* In the Vagrant Kubernetes directory (i.e., <code>$HOME/dev/kubernetes</code>), run the following command:<br />
$ vagrant up<br />
<br />
===Setup hosts===<br />
''Note: Run the following commands/steps on all hosts (master and minions).''<br />
<br />
* Log into the k8s master host:<br />
$ vagrant ssh master<br />
<br />
* Add the Kubernetes cluster hosts to <code>/etc/hosts</code>:<br />
$ cat << EOF >> /etc/hosts<br />
192.168.200.100 k8s.master.dev<br />
192.168.200.101 k8s.minion1.dev<br />
192.168.200.102 k8s.minion2.dev<br />
192.168.200.103 k8s.minion3.dev<br />
EOF<br />
<br />
* Install, enable, and start NTP:<br />
$ yum install -y ntp<br />
$ systemctl enable ntpd && systemctl start ntpd<br />
$ timedatectl<br />
<br />
* Disable any [[iptables|firewall rules]] (for now; we will add the rules back later):<br />
$ systemctl stop firewalld && systemctl disable firewalld<br />
$ systemctl stop iptables<br />
<br />
* Disable [[SELinux]] (for now; we will turn it on again later):<br />
$ setenforce 0<br />
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/sysconfig/selinux<br />
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config<br />
$ sestatus<br />
<br />
* Add the Docker repo and update yum:<br />
$ cat << EOF > /etc/yum.repos.d/virt7-docker-common-release.repo<br />
[virt7-docker-common-release]<br />
name=virt7-docker-common-release<br />
baseurl=<nowiki>http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/</nowiki><br />
gpgcheck=0<br />
EOF<br />
$ yum update<br />
<br />
* Install Docker, Kubernetes, and etcd:<br />
$ yum install -y --enablerepo=virt7-docker-common-release kubernetes docker etcd<br />
<br />
===Install and configure master controller===<br />
''Note: Run the following commands on only the master host.''<br />
<br />
* Edit <code>/etc/kubernetes/config</code> and add (or make changes to) the following lines:<br />
KUBE_MASTER="--master=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://k8s.master.dev:2379</nowiki>"<br />
<br />
* Edit <code>/etc/etcd/etcd.conf</code> and add (or make changes to) the following lines:<br />
[member]<br />
ETCD_LISTEN_CLIENT_URLS="<nowiki>http://0.0.0.0:2379</nowiki>"<br />
[cluster]<br />
ETCD_ADVERTISE_CLIENT_URLS="<nowiki>http://0.0.0.0:2379</nowiki>"<br />
<br />
* Edit <code>/etc/kubernetes/apiserver</code> and add (or make changes to) the following lines:<br />
<pre><br />
# The address on the local server to listen to.<br />
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"<br />
KUBE_API_ADDRESS="--address=0.0.0.0"<br />
<br />
# The port on the local server to listen on.<br />
KUBE_API_PORT="--port=8080"<br />
<br />
# Port minions listen on<br />
KUBELET_PORT="--kubelet-port=10250"<br />
<br />
# Comma separated list of nodes in the etcd cluster<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://127.0.0.1:2379</nowiki>"<br />
<br />
# Address range to use for services<br />
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"<br />
<br />
# default admission control policies<br />
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"<br />
<br />
# Add your own!<br />
KUBE_API_ARGS=""<br />
</pre><br />
<br />
* Enable and start the following etcd and Kubernetes services:<br />
<br />
$ for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do<br />
systemctl restart $SERVICE<br />
systemctl enable $SERVICE<br />
systemctl status $SERVICE <br />
done<br />
<br />
* Check on the status of the above services (the following command should report 4 running services):<br />
$ systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler | grep "(running)" | wc -l # => 4<br />
<br />
* Check on the status of the Kubernetes API server:<br />
$ kubectl cluster-info<br />
Kubernetes master is running at <nowiki>http://localhost:8080</nowiki><br />
$ curl <nowiki>http://localhost:8080/version</nowiki><br />
#~OR~<br />
$ curl <nowiki>http://k8s.master.dev:8080/version</nowiki><br />
<pre><br />
{<br />
  "major": "1",<br />
  "minor": "2",<br />
  "gitVersion": "v1.2.0",<br />
  "gitCommit": "ec7364b6e3b155e78086018aa644057edbe196e5",<br />
  "gitTreeState": "clean"<br />
}<br />
</pre><br />
<br />
* Get a list of Kubernetes API paths:<br />
$ curl <nowiki>http://k8s.master.dev:8080/paths</nowiki><br />
<pre><br />
{<br />
  "paths": [<br />
    "/api",<br />
    "/api/v1",<br />
    "/apis",<br />
    "/apis/autoscaling",<br />
    "/apis/autoscaling/v1",<br />
    "/apis/batch",<br />
    "/apis/batch/v1",<br />
    "/apis/extensions",<br />
    "/apis/extensions/v1beta1",<br />
    "/healthz",<br />
    "/healthz/ping",<br />
    "/logs/",<br />
    "/metrics",<br />
    "/resetMetrics",<br />
    "/swagger-ui/",<br />
    "/swaggerapi/",<br />
    "/ui/",<br />
    "/version"<br />
  ]<br />
}<br />
</pre><br />
<br />
* List all available paths (key-value stores) known to etcd:<br />
$ etcdctl ls / --recursive<br />
<br />
The master controller in a Kubernetes cluster must have the following services running to function as the master host in the cluster:<br />
* ntpd<br />
* etcd<br />
* kube-controller-manager<br />
* kube-apiserver<br />
* kube-scheduler<br />
<br />
Note: The Docker daemon should not be running on the master host.<br />
<br />
===Install and configure the minions===<br />
''Note: Run the following commands/steps on all minion hosts.''<br />
<br />
* Log into the k8s minion hosts:<br />
$ vagrant ssh minion1 # do the same for minion2 and minion3<br />
<br />
* Edit <code>/etc/kubernetes/config</code> and add (or make changes to) the following lines:<br />
KUBE_MASTER="--master=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://k8s.master.dev:2379</nowiki>"<br />
<br />
* Edit <code>/etc/kubernetes/kubelet</code> and add (or make changes to) the following lines:<br />
<pre><br />
###<br />
# kubernetes kubelet (minion) config<br />
<br />
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)<br />
KUBELET_ADDRESS="--address=0.0.0.0"<br />
<br />
# The port for the info server to serve on<br />
KUBELET_PORT="--port=10250"<br />
<br />
# You may leave this blank to use the actual hostname<br />
KUBELET_HOSTNAME="--hostname-override=k8s.minion1.dev" # ***CHANGE TO CORRECT MINION HOSTNAME***<br />
<br />
# location of the api-server<br />
KUBELET_API_SERVER="--api-servers=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
<br />
# pod infrastructure container<br />
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"<br />
<br />
# Add your own!<br />
KUBELET_ARGS=""<br />
</pre><br />
<br />
* Enable and start the following services:<br />
$ for SERVICE in kube-proxy kubelet docker; do<br />
systemctl restart $SERVICE<br />
systemctl enable $SERVICE<br />
systemctl status $SERVICE<br />
done<br />
<br />
* Test that Docker is running and can start containers:<br />
$ docker info<br />
$ docker pull hello-world<br />
$ docker run hello-world<br />
<br />
Each minion in a Kubernetes cluster must have the following services running to function as a member of the cluster (i.e., a "Ready" node):<br />
* ntpd<br />
* kubelet<br />
* kube-proxy<br />
* docker<br />
<br />
===Kubectl: Exploring our environment===<br />
''Note: Run all of the following commands on the master host.''<br />
<br />
* Get a list of nodes with <code>kubectl</code>:<br />
$ kubectl get nodes<br />
<pre><br />
NAME              STATUS    AGE<br />
k8s.minion1.dev   Ready     20m<br />
k8s.minion2.dev   Ready     12m<br />
k8s.minion3.dev   Ready     12m<br />
</pre><br />
<br />
* Describe nodes with <code>kubectl</code>:<br />
<br />
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'<br />
$ kubectl get nodes -o jsonpath='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' | tr ';' "\n"<br />
<pre><br />
k8s.minion1.dev:OutOfDisk=False<br />
Ready=True<br />
k8s.minion2.dev:OutOfDisk=False<br />
Ready=True<br />
k8s.minion3.dev:OutOfDisk=False<br />
Ready=True<br />
</pre><br />
<br />
* Get the man page for <code>kubectl</code>:<br />
$ man kubectl-get<br />
<br />
==Working with our Kubernetes cluster==<br />
<br />
''Note: The following section will be working from within the Kubernetes cluster we created above.''<br />
<br />
===Create and deploy pod definitions===<br />
<br />
* Turn off nodes 2 and 3:<br />
 minion{2,3}$ systemctl stop kubelet kube-proxy<br />
<br />
master$ kubectl get nodes<br />
<pre><br />
NAME              STATUS     AGE<br />
k8s.minion1.dev   Ready      1h<br />
k8s.minion2.dev   NotReady   37m<br />
k8s.minion3.dev   NotReady   39m<br />
</pre><br />
<br />
* Check for any k8s Pods (there should be none):<br />
master$ kubectl get pods<br />
<br />
* Create a builds directory for our Pods:<br />
master$ mkdir builds && cd $_<br />
<br />
* Create a Pod running Nginx inside a Docker container:<br />
<pre><br />
master$ kubectl create -f - <<EOF<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx:1.7.9<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
* Check on Pod creation status:<br />
master$ kubectl get pods<br />
<pre><br />
NAME      READY     STATUS              RESTARTS   AGE<br />
nginx     0/1       ContainerCreating   0          2s<br />
</pre><br />
master$ kubectl get pods<br />
<pre><br />
NAME      READY     STATUS    RESTARTS   AGE<br />
nginx     1/1       Running   0          3m<br />
</pre><br />
<br />
minion1$ docker ps<br />
<pre><br />
CONTAINER ID   IMAGE         COMMAND                  CREATED         STATUS         PORTS   NAMES<br />
a718c6c0355d   nginx:1.7.9   "nginx -g 'daemon off"   3 minutes ago   Up 3 minutes           k8s_nginx.4580025_nginx_default_699e...<br />
</pre><br />
<br />
master$ kubectl describe pod nginx<br />
<br />
master$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1<br />
busybox$ wget -qO- 172.17.0.2<br />
master$ kubectl delete pod busybox<br />
master$ kubectl delete pod nginx<br />
<br />
* Port forwarding:<br />
master$ kubectl create -f nginx.yml # see above for YAML<br />
master$ kubectl port-forward nginx :80 &<br />
I1020 23:12:29.478742 23394 portforward.go:213] Forwarding from [::1]:40065 -> 80<br />
master$ curl -I localhost:40065<br />
<br />
===Tags, labels, and selectors===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-pod-label.yml<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx<br />
  labels:<br />
    app: nginx<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx:1.7.9<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create -f nginx-pod-label.yml<br />
master$ kubectl get pods -l app=nginx<br />
master$ kubectl describe pods -l app=nginx<br />
<br />
* Add labels or overwrite existing ones:<br />
master$ kubectl label pods nginx new-label=mynginx<br />
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'<br />
new-label=mynginx<br />
master$ kubectl label pods nginx new-label=foo<br />
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'<br />
new-label=foo<br />
<br />
===Deployments===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-dev.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-dev<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-dev<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-dev<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-prod.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-prod<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-prod<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-prod<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create --validate -f nginx-deployment-dev.yml<br />
master$ kubectl create --validate -f nginx-deployment-prod.yml<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME                                     READY     STATUS    RESTARTS   AGE<br />
nginx-deployment-dev-104434401-jiiic     1/1       Running   0          5m<br />
nginx-deployment-prod-3051195443-hj9b1   1/1       Running   0          12m<br />
</pre><br />
<br />
master$ kubectl describe deployments -l app=nginx-deployment-dev<br />
<pre><br />
Name:                   nginx-deployment-dev<br />
Namespace:              default<br />
CreationTimestamp:      Thu, 20 Oct 2016 23:48:46 +0000<br />
Labels:                 app=nginx-deployment-dev<br />
Selector:               app=nginx-deployment-dev<br />
Replicas:               1 updated | 1 total | 1 available | 0 unavailable<br />
StrategyType:           RollingUpdate<br />
MinReadySeconds:        0<br />
RollingUpdateStrategy:  1 max unavailable, 1 max surge<br />
OldReplicaSets:         <none><br />
NewReplicaSet:          nginx-deployment-dev-2568522567 (1/1 replicas created)<br />
...<br />
</pre><br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE<br />
nginx-deployment-prod   1         1         1            1           44s<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-dev-update.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-dev<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-dev<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-dev<br />
        image: nginx:1.8 # ***CHANGED***<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
master$ kubectl apply -f nginx-deployment-dev-update.yml<br />
master$ kubectl get pods -l app=nginx-deployment-dev<br />
<pre><br />
NAME                                   READY     STATUS              RESTARTS   AGE<br />
nginx-deployment-dev-104434401-jiiic   0/1       ContainerCreating   0          27s<br />
</pre><br />
master$ kubectl get pods -l app=nginx-deployment-dev<br />
<pre><br />
NAME                                   READY     STATUS    RESTARTS   AGE<br />
nginx-deployment-dev-104434401-jiiic   1/1       Running   0          6m<br />
</pre><br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment nginx-deployment-dev<br />
master$ kubectl delete deployment nginx-deployment-prod<br />
<br />
===Multi-Pod (container) replication controller===<br />
<br />
* Start the other two nodes (the ones we previously stopped):<br />
minion2$ systemctl start kubelet kube-proxy<br />
minion3$ systemctl start kubelet kube-proxy<br />
master$ kubectl get nodes<br />
<pre><br />
NAME              STATUS    AGE<br />
k8s.minion1.dev   Ready     2h<br />
k8s.minion2.dev   Ready     2h<br />
k8s.minion3.dev   Ready     2h<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-multi-node.yml<br />
---<br />
apiVersion: v1<br />
kind: ReplicationController<br />
metadata:<br />
  name: nginx-www<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    app: nginx<br />
  template:<br />
    metadata:<br />
      name: nginx<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create -f nginx-multi-node.yml<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME              READY     STATUS              RESTARTS   AGE<br />
nginx-www-2evxu   0/1       ContainerCreating   0          10s<br />
nginx-www-416ct   0/1       ContainerCreating   0          10s<br />
nginx-www-ax41w   0/1       ContainerCreating   0          10s<br />
</pre><br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME              READY     STATUS    RESTARTS   AGE<br />
nginx-www-2evxu   1/1       Running   0          1m<br />
nginx-www-416ct   1/1       Running   0          1m<br />
nginx-www-ax41w   1/1       Running   0          1m<br />
</pre><br />
<br />
master$ kubectl describe pods | awk '/^Node/{print $2}'<br />
<pre><br />
k8s.minion2.dev/192.168.200.102<br />
k8s.minion1.dev/192.168.200.101<br />
k8s.minion3.dev/192.168.200.103<br />
</pre><br />
<br />
minion1$ docker ps # 1 nginx container running<br />
minion2$ docker ps # 1 nginx container running<br />
minion3$ docker ps # 1 nginx container running<br />
minion3$ docker ps --format "<nowiki>{{.Image}}</nowiki>"<br />
<pre><br />
nginx<br />
gcr.io/google_containers/pause:2.0<br />
</pre><br />
<br />
master$ kubectl describe replicationcontroller<br />
<pre><br />
Name:         nginx-www<br />
Namespace:    default<br />
Image(s):     nginx<br />
Selector:     app=nginx<br />
Labels:       app=nginx<br />
Replicas:     3 current / 3 desired<br />
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed<br />
...<br />
</pre><br />
<br />
* Attempt to delete one of the three pods:<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME              READY     STATUS    RESTARTS   AGE<br />
nginx-www-2evxu   1/1       Running   0          11m<br />
nginx-www-416ct   1/1       Running   0          11m<br />
nginx-www-ax41w   1/1       Running   0          11m<br />
</pre><br />
master$ kubectl delete pod nginx-www-2evxu<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-3cck4 1/1 Running 0 12s<br />
nginx-www-416ct 1/1 Running 0 11m<br />
nginx-www-ax41w 1/1 Running 0 11m<br />
</pre><br />
<br />
A new pod (<code>nginx-www-3cck4</code>) automatically started up. This is because the desired state, as defined in our YAML file, is for 3 pods to be running at all times. Thus, if one or more of the pods go down, a replacement pod (or pods) automatically starts up to bring the actual state back to the desired state.<br />
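<br />
The desired replica count can also be changed on the fly, without editing the YAML file. A minimal sketch (standard <code>kubectl scale</code>; the count of 5 is an arbitrary example):<br />
master$ kubectl scale replicationcontroller nginx-www --replicas=5<br />
master$ kubectl get pods # should now show 5 "nginx-www-*" pods<br />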
<br />
* To force-delete all pods:<br />
master$ kubectl delete replicationcontroller nginx-www<br />
master$ kubectl get pods # nothing<br />
<br />
===Create and deploy service definitions===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-service.yml<br />
---<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: nginx-service<br />
spec:<br />
  ports:<br />
  - port: 8000<br />
    targetPort: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: nginx<br />
EOF<br />
</pre><br />
<br />
master$ kubectl get services<br />
<pre><br />
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes 10.254.0.1 <none> 443/TCP 3h<br />
</pre><br />
master$ kubectl create -f nginx-service.yml<br />
<br />
master$ kubectl get services<br />
<pre><br />
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes 10.254.0.1 <none> 443/TCP 3h<br />
nginx-service 10.254.110.127 <none> 8000/TCP 10s<br />
</pre><br />
<br />
master$ kubectl run busybox --generator=run-pod/v1 --image=busybox --restart=Never --tty -i<br />
busybox$ wget -qO- 10.254.110.127:8000 # works<br />
<br />
* Cleanup<br />
master$ kubectl delete pod busybox<br />
master$ kubectl delete service nginx-service<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-jh2e9 1/1 Running 0 13m<br />
nginx-www-jir2g 1/1 Running 0 13m<br />
nginx-www-w91uw 1/1 Running 0 13m<br />
</pre><br />
master$ kubectl delete replicationcontroller nginx-www<br />
master$ kubectl get pods # nothing<br />
<br />
===Creating temporary Pods at the CLI===<br />
<br />
* Make sure we have no Pods running:<br />
master$ kubectl get pods<br />
<br />
* Create temporary deployment pod:<br />
master$ kubectl run mysample --image=foobar/apache<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
mysample-1424711890-fhtxb 0/1 ContainerCreating 0 1s<br />
</pre><br />
master$ kubectl get deployment <br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
mysample 1 1 1 0 7s<br />
</pre><br />
<br />
* Create a temporary deployment pod (where we know it will fail):<br />
master$ kubectl run myexample --image=christophchamp/ubuntu_sysadmin<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myexample-3534121234-mpr35 0/1 CrashLoopBackOff 12 39m k8s.minion3.dev<br />
mysample-2812764540-74c5h 1/1 Running 0 41m k8s.minion2.dev<br />
</pre><br />
<br />
* Check on why the "myexample" pod is in status "CrashLoopBackOff":<br />
master$ kubectl describe pods/myexample-3534121234-mpr35<br />
master$ kubectl describe deployments/mysample<br />
master$ kubectl describe pods/mysample-2812764540-74c5h | awk '/^Node/{print $2}'<br />
k8s.minion2.dev/192.168.200.102<br />
<br />
master$ kubectl delete deployment mysample<br />
<br />
* Run multiple replicas of the same pod:<br />
master$ kubectl run myreplicas --image=latest123/apache --replicas=2 --labels=app=myapache,version=1.0.0<br />
master$ kubectl describe deployment myreplicas <br />
<pre><br />
Name: myreplicas<br />
Namespace: default<br />
CreationTimestamp: Fri, 21 Oct 2016 19:10:30 +0000<br />
Labels: app=myapache,version=1.0.0<br />
Selector: app=myapache,version=1.0.0<br />
Replicas: 2 updated | 2 total | 1 available | 1 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 1 max unavailable, 1 max surge<br />
OldReplicaSets: <none><br />
NewReplicaSet: myreplicas-2209834598 (2/2 replicas created)<br />
...<br />
</pre><br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myreplicas-2209834598-5iyer 1/1 Running 0 1m k8s.minion1.dev<br />
myreplicas-2209834598-cslst 1/1 Running 0 1m k8s.minion2.dev<br />
</pre><br />
<br />
master$ kubectl describe pods -l version=1.0.0<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment myreplicas<br />
<br />
===Interacting with Pod containers===<br />
<br />
* Create example Apache pod definition file:<br />
<pre><br />
master$ cat << EOF > apache.yml<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: apache<br />
spec:<br />
  containers:<br />
  - name: apache<br />
    image: latest123/apache<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
master$ kubectl create -f apache.yml<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
apache 1/1 Running 0 12m k8s.minion3.dev<br />
</pre><br />
<br />
* Test pod and make some basic configuration changes:<br />
master$ kubectl exec apache date<br />
master$ kubectl exec apache -i -t -- cat /var/www/html/index.html # default apache HTML<br />
master$ kubectl exec apache -i -t -- /bin/bash<br />
container$ export TERM=xterm<br />
container$ echo "xtof test" > /var/www/html/index.html<br />
minion3$ curl 172.17.0.2<br />
xtof test<br />
container$ exit<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
apache 1/1 Running 0 12m k8s.minion3.dev<br />
</pre><br />
Pod/container is still running even after we exited (as expected).<br />
<br />
* Cleanup:<br />
master$ kubectl delete pod apache<br />
<br />
===Logs===<br />
<br />
* Start our example Apache pod to use for checking Kubernetes logging features:<br />
master$ kubectl create -f apache.yml <br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
apache 1/1 Running 0 9s<br />
</pre><br />
master$ kubectl logs apache<br />
<pre><br />
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message<br />
</pre><br />
master$ kubectl logs --tail=10 apache<br />
master$ kubectl logs --since=24h apache # or 10s, 2m, etc.<br />
master$ kubectl logs -f apache # follow the logs<br />
master$ kubectl logs -f -c apache apache # where -c specifies the container name<br />
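<br />
Two more log-related flags worth knowing (both are standard kubectl flags):<br />
master$ kubectl logs --previous apache # logs from the previous container instance (only if it has restarted)<br />
master$ kubectl logs --timestamps apache # prefix each log line with its timestamp<br />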
<br />
* Cleanup:<br />
master$ kubectl delete pod apache<br />
<br />
===Autoscaling and scaling Pods===<br />
<br />
master$ kubectl run myautoscale --image=latest123/apache --port=80 --labels=app=myautoscale<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 47s k8s.minion3.dev<br />
</pre><br />
<br />
* Create an autoscale definition:<br />
master$ kubectl autoscale deployment myautoscale --min=2 --max=6 --cpu-percent=80<br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myautoscale 2 2 2 2 4m<br />
</pre><br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 3m k8s.minion3.dev<br />
myautoscale-3243017378-r2f3d 1/1 Running 0 4s k8s.minion2.dev<br />
</pre><br />
<br />
* Scale up an already autoscaled deployment:<br />
master$ kubectl scale --current-replicas=2 --replicas=4 deployment/myautoscale<br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myautoscale 4 4 4 4 8m<br />
</pre><br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-2rxhp 1/1 Running 0 8s k8s.minion1.dev<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 7m k8s.minion3.dev<br />
myautoscale-3243017378-ozxs8 1/1 Running 0 8s k8s.minion3.dev<br />
myautoscale-3243017378-r2f3d 1/1 Running 0 4m k8s.minion2.dev<br />
</pre><br />
<br />
* Scale down:<br />
master$ kubectl scale --current-replicas=4 --replicas=2 deployment/myautoscale<br />
<br />
Note: You cannot scale down below the minimum number of pods/containers specified in the original autoscale definition (i.e., <code>--min=2</code> in our example).<br />
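<br />
Behind the scenes, <code>kubectl autoscale</code> created a HorizontalPodAutoscaler (HPA) object. It can be inspected, and must be deleted separately, since deleting the deployment does not remove it:<br />
master$ kubectl get hpa<br />
master$ kubectl describe hpa myautoscale<br />
master$ kubectl delete hpa myautoscale<br />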
<br />
* Cleanup:<br />
master$ kubectl delete deployment myautoscale<br />
<br />
===Failure and recovery===<br />
<br />
master$ kubectl run myrecovery --image=latest123/apache --port=80 --replicas=2 --labels=app=myrecovery<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myrecovery 2 2 2 2 6s<br />
</pre><br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-5xu8f 1/1 Running 0 12s k8s.minion1.dev<br />
myrecovery-563119102-zw6wp 1/1 Running 0 12s k8s.minion2.dev<br />
</pre><br />
<br />
* Now stop Kubernetes- and Docker-related services on one of the minions/nodes (so we have a total of 2 nodes online):<br />
minion1$ systemctl stop docker kubelet kube-proxy<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-qyi04 1/1 Running 0 7m k8s.minion3.dev<br />
myrecovery-563119102-zw6wp 1/1 Running 0 14m k8s.minion2.dev<br />
</pre><br />
The Pod switched from minion1 to minion3.<br />
<br />
* Now stop Kubernetes- and Docker-related services on one of the remaining online minions/nodes (so we have a total of 1 node online):<br />
minion2$ systemctl stop docker kubelet kube-proxy<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-b5tim 1/1 Running 0 2m k8s.minion3.dev<br />
myrecovery-563119102-qyi04 1/1 Running 0 17m k8s.minion3.dev<br />
</pre><br />
Both Pods are now running on minion3, the only available node.<br />
<br />
* Start up Kubernetes- and Docker-related services again on minion1 and delete one of the Pods:<br />
minion1$ systemctl start docker kubelet kube-proxy<br />
master$ kubectl delete pod myrecovery-563119102-b5tim<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-8unzg 1/1 Running 0 1m k8s.minion1.dev<br />
myrecovery-563119102-qyi04 1/1 Running 0 20m k8s.minion3.dev<br />
</pre><br />
Pods are now running on separate nodes.<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployments/myrecovery<br />
<br />
==Minikube==<br />
[https://github.com/kubernetes/minikube Minikube] is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.<br />
<br />
* Install Minikube:<br />
$ curl -Lo minikube <nowiki>https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64</nowiki> \<br />
&& chmod +x minikube && sudo mv minikube /usr/local/bin/<br />
<br />
* Install kubectl<br />
$ curl -Lo kubectl <nowiki>https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl</nowiki> \<br />
&& chmod +x kubectl && sudo mv kubectl /usr/local/bin/<br />
<br />
* Test install<br />
$ minikube start<br />
#~OR~<br />
$ minikube start --memory 4096 # give it 4GB of RAM<br />
$ minikube status<br />
$ minikube dashboard<br />
$ kubectl config view<br />
$ kubectl cluster-info<br />
<br />
NOTE: If you have an old version of minikube installed, you should probably do the following before upgrading to a much newer version:<br />
$ minikube delete --all --purge<br />
<br />
Get the details on the CLI options for kubectl [https://kubernetes.io/docs/reference/kubectl/overview/ here].<br />
<br />
Using the <code>kubectl proxy</code> command, kubectl will authenticate with the API Server on the Master Node and make the dashboard available on <nowiki>http://localhost:8001/ui</nowiki>:<br />
<br />
$ kubectl proxy<br />
Starting to serve on 127.0.0.1:8001<br />
<br />
After running the above command, we can access the dashboard at <code><nowiki>http://127.0.0.1:8001/ui</nowiki></code>.<br />
<br />
Once the kubectl proxy is configured, we can send requests to localhost on the proxy port:<br />
<br />
$ curl <nowiki>http://localhost:8001/</nowiki><br />
$ curl <nowiki>http://localhost:8001/version</nowiki><br />
<pre><br />
{<br />
"major": "1",<br />
"minor": "8",<br />
"gitVersion": "v1.8.0",<br />
"gitCommit": "0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4",<br />
"gitTreeState": "clean",<br />
"buildDate": "2017-11-29T22:43:34Z",<br />
"goVersion": "go1.9.1",<br />
"compiler": "gc",<br />
"platform": "linux/amd64"<br />
}<br />
</pre><br />
<br />
Without kubectl proxy configured, we can get the Bearer Token using kubectl, and then send it with the API request. A Bearer Token is an access token which is generated by the authentication server (the API server on the Master Node) and given back to the client. Using that token, the client can connect back to the Kubernetes API server without providing further authentication details, and then, access resources.<br />
<br />
* Get the k8s token:<br />
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | awk '/^default/{print $1}') | awk '/^token/{print $2}')<br />
<br />
* Get the k8s API server endpoint:<br />
$ APISERVER=$(kubectl config view | awk '/https/{print $2}')<br />
<br />
* Access the API Server:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}<br />
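<br />
The same token works for specific API paths as well. For example (standard API endpoints; the "default" Namespace is assumed):<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}/api/v1/namespaces/default/pods<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}/version<br />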
<br />
===Using Minikube as a local Docker registry===<br />
<br />
Sometimes it is useful to have a local Docker registry for Kubernetes to pull images from. As the Minikube [https://github.com/kubernetes/minikube/blob/0c616a6b42b28a1aab8397f5a9061f8ebbd9f3d9/README.md#reusing-the-docker-daemon README] describes, you can reuse the Docker daemon running within Minikube with <code>eval $(minikube docker-env)</code> to build and pull images from.<br />
<br />
To use an image without uploading it to an external registry (e.g., Docker Hub), you can follow these steps:<br />
* Set the environment variables with <code>eval $(minikube docker-env)</code><br />
* Build the image with the Docker daemon of Minikube (e.g., <code>docker build -t my-image .</code>)<br />
* Set the image in the pod spec like the build tag (e.g., <code>my-image</code>)<br />
* Set the <code>imagePullPolicy</code> to <code>Never</code>, otherwise Kubernetes will try to download the image.<br />
<br />
Important note: You have to run <code>eval $(minikube docker-env)</code> on each terminal you want to use since it only sets the environment variables for the current shell session.<br />
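<br />
A minimal end-to-end sketch of the above steps (assumes a Dockerfile in the current directory; "my-image" is a hypothetical tag):<br />
$ eval $(minikube docker-env) # point the docker CLI at Minikube's Docker daemon<br />
$ docker build -t my-image . # the image is now available inside Minikube<br />
$ kubectl run my-image --image=my-image --image-pull-policy=Never<br />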
<br />
===Working with our Minikube-based Kubernetes cluster===<br />
<br />
;Kubernetes Object Model<br />
<br />
Kubernetes has a very rich object model, with which it represents different persistent entities in the Kubernetes cluster. Those entities describe:<br />
<br />
* What containerized applications we are running and on which node<br />
* Application resource consumption<br />
* Different policies attached to applications, like restart/upgrade policies, fault tolerance, etc.<br />
<br />
With each object, we declare our intent or desired state using the '''spec''' field. The Kubernetes system manages the '''status''' field for objects, in which it records the actual state of the object. At any given point in time, the Kubernetes Control Plane tries to match the object's actual state to the object's desired state.<br />
<br />
Examples of Kubernetes objects are Pods, Deployments, ReplicaSets, etc.<br />
<br />
To create an object, we need to provide the '''spec''' field to the Kubernetes API Server. The '''spec''' field describes the desired state, along with some basic information, like the name. The API request to create the object must have the '''spec''' field, as well as other details, in a JSON format. Most often, we provide an object's definition in a YAML file, which kubectl converts into a JSON payload and sends to the API Server.<br />
<br />
Below is an example of a ''Deployment'' object:<br />
<pre><br />
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment<br />
  labels:<br />
    app: nginx<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />
<br />
With the '''apiVersion''' field in the example above, we mention the API endpoint on the API Server which we want to connect to. Note that you can see what API version to use with the following call to the API server:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}/apis/apps<br />
Use the '''preferredVersion''' for most cases.<br />
<br />
With the '''kind''' field, we mention the object type &mdash; in our case, we have '''Deployment'''. With the '''metadata''' field, we attach the basic information to objects, like the name. Notice that in the above we have two '''spec''' fields ('''spec''' and '''spec.template.spec'''). With '''spec''', we define the desired state of the deployment. In our example, we want to make sure that, at any point in time, at least 3 ''Pods'' are running, which are created using the Pod template defined in '''spec.template'''. In '''spec.template.spec''', we define the desired state of the Pod (here, our Pod would be created using nginx:1.7.9).<br />
<br />
Once the object is created, the Kubernetes system attaches the '''status''' field to the object.<br />
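<br />
For example, after creating the Deployment above, the system-managed '''status''' field can be seen (alongside the user-provided '''spec''') with:<br />
$ kubectl get deployment nginx-deployment -o yaml<br />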
<br />
;Connecting users to Pods<br />
<br />
To access the application, a user/client needs to connect to the Pods. As Pods are ephemeral in nature, resources like the IP addresses allocated to them cannot be static. Pods could die abruptly or be rescheduled based on existing requirements.<br />
<br />
As an example, consider a scenario in which a user/client is connecting to a Pod using its IP address. Unexpectedly, the Pod to which the user/client is connected dies and a new Pod is created by the controller. The new Pod will have a new IP address, which will not be known automatically to the user/client of the earlier Pod. To overcome this situation, Kubernetes provides a higher-level abstraction called ''[https://kubernetes.io/docs/concepts/services-networking/service/ Service]'', which logically groups Pods and a policy to access them. This grouping is achieved via Labels and Selectors (see above).<br />
<br />
So, for our example, we would use Selectors (e.g., "<code>app==frontend</code>" and "<code>app==db</code>") to group our Pods into two logical groups. We can assign a name to the logical grouping, referred to as a "service name". In our example, we have created two Services, <code>frontend-svc</code> and <code>db-svc</code>, and they have the "<code>app==frontend</code>" and the "<code>app==db</code>" Selectors, respectively.<br />
<br />
The following is an example of a Service object:<br />
<pre><br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: frontend-svc<br />
spec:<br />
  selector:<br />
    app: frontend<br />
  ports:<br />
  - protocol: TCP<br />
    port: 80<br />
    targetPort: 5000<br />
</pre><br />
<br />
in which we are creating a <code>frontend-svc</code> Service by selecting all the Pods that have the Label "<code>app</code>" equal to "<code>frontend</code>". By default, each Service also gets an IP address, which is routable only inside the cluster. In our case, we have 172.17.0.4 and 172.17.0.5 IP addresses for our <code>frontend-svc</code> and <code>db-svc</code> Services, respectively. The IP address attached to each Service is also known as the ClusterIP for that Service.<br />
<br />
<pre><br />
+------------------------------------+<br />
| select: app==frontend              |        container (app:frontend; 10.0.1.3)<br />
| service=frontend-svc (172.17.0.4)  |------> container (app:frontend; 10.0.1.4)<br />
+------------------------------------+        container (app:frontend; 10.0.1.5)<br />
                 ^<br />
                /<br />
               /<br />
    user/client<br />
               \<br />
                \<br />
                 v<br />
+------------------------------------+<br />
| select: app==db                    |------> container (app:db; 10.0.1.10)<br />
| service=db-svc (172.17.0.5)        |<br />
+------------------------------------+<br />
</pre><br />
<br />
The user/client now connects to a Service via ''its'' IP address, which forwards the traffic to one of the Pods attached to it. A Service does the load balancing while selecting the Pods for forwarding the data/traffic.<br />
<br />
While forwarding the traffic from the Service, we can select the target port on the Pod. In our example, for <code>frontend-svc</code>, we will receive requests from the user/client on port 80. We will then forward these requests to one of the attached Pods on port 5000. If the target port is not defined explicitly, then traffic will be forwarded to Pods on the port on which the Service receives traffic.<br />
<br />
A tuple of a Pod's IP address and the <code>targetPort</code> is referred to as a ''Service Endpoint''. In our case, <code>frontend-svc</code> has 3 Endpoints: <code>10.0.1.3:5000</code>, <code>10.0.1.4:5000</code>, and <code>10.0.1.5:5000</code>.<br />
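<br />
The Endpoints are tracked in their own API object, which can be listed directly. A sketch, assuming the <code>frontend-svc</code> Service above exists (output is illustrative):<br />
$ kubectl get endpoints frontend-svc<br />
NAME ENDPOINTS AGE<br />
frontend-svc 10.0.1.3:5000,10.0.1.4:5000,10.0.1.5:5000 1m<br />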
<br />
===kube-proxy===<br />
All of the Worker Nodes run a daemon called kube-proxy, which watches the API Server on the Master Node for the addition and removal of Services and Endpoints. For each new Service, on each node, kube-proxy configures iptables rules to capture the traffic for its ClusterIP and forward it to one of the Endpoints. When the Service is removed, kube-proxy removes the iptables rules on all nodes as well.<br />
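<br />
These rules can be inspected on any Worker Node. A sketch, assuming kube-proxy runs in its iptables mode (the KUBE-SERVICES chain is where Service ClusterIPs are matched):<br />
minion1$ sudo iptables -t nat -L KUBE-SERVICES -n<br />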
<br />
===Service discovery===<br />
As Services are the primary mode of communication in Kubernetes, we need a way to discover them at runtime. Kubernetes supports two methods of discovering a Service:<br />
<br />
;Environment Variables : As soon as the Pod starts on any Worker Node, the kubelet daemon running on that node adds a set of environment variables in the Pod for all active Services. For example, if we have an active Service called <code>redis-master</code>, which exposes port 6379, and its ClusterIP is 172.17.0.6, then, on a newly created Pod, we can see the following environment variables:<br />
<br />
REDIS_MASTER_SERVICE_HOST=172.17.0.6<br />
REDIS_MASTER_SERVICE_PORT=6379<br />
REDIS_MASTER_PORT=tcp://172.17.0.6:6379<br />
REDIS_MASTER_PORT_6379_TCP=tcp://172.17.0.6:6379<br />
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp<br />
REDIS_MASTER_PORT_6379_TCP_PORT=6379<br />
REDIS_MASTER_PORT_6379_TCP_ADDR=172.17.0.6<br />
<br />
With this solution, we need to be careful while ordering our Services, as the Pods will not have the environment variables set for Services which are created after the Pods are created.<br />
<br />
;DNS : Kubernetes has an add-on for DNS, which creates a DNS record for each Service in the format <code>my-svc.my-namespace.svc.cluster.local</code>. Services within the same Namespace can reach each other with just their names. For example, if we add a Service <code>redis-master</code> in the <code>my-ns</code> Namespace, then all the Pods in the same Namespace can reach the redis Service just by using its name, <code>redis-master</code>. Pods from other Namespaces can reach the Service by adding the respective Namespace as a suffix, like <code>redis-master.my-ns</code>.<br />
: This is the most common and highly recommended solution. For example, in the diagram in the previous section, an internal DNS would map our Services <code>frontend-svc</code> and <code>db-svc</code> to 172.17.0.4 and 172.17.0.5, respectively.<br />
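<br />
Both discovery mechanisms can be verified from inside a running Pod. A sketch, assuming a Pod named <code>busybox</code> in the same Namespace as the <code>redis-master</code> Service:<br />
master$ kubectl exec busybox -- env | grep ^REDIS_MASTER # environment variables<br />
master$ kubectl exec busybox -- nslookup redis-master # DNS record<br />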
<br />
===Service Type===<br />
While defining a Service, we can also choose its access scope. We can decide whether the Service:<br />
<br />
* is only accessible within the cluster;<br />
* is accessible from within the cluster and the external world; or<br />
* maps to an external entity which resides outside the cluster.<br />
<br />
Access scope is decided by the ''ServiceType'', which is specified when creating the Service.<br />
<br />
;ClusterIP : The default ''ServiceType''. A Service gets a virtual IP address, known as its ClusterIP. This IP address is used for communicating with the Service and is accessible only within the cluster.<br />
<br />
;NodePort : With this ''ServiceType'', in addition to creating a ClusterIP, a port from the range '''30000-32767''' is mapped to the respective service from all the Worker Nodes. For example, if the mapped NodePort is 32233 for the service <code>frontend-svc</code>, then, if we connect to any Worker Node on port 32233, the node would redirect all the traffic to the assigned ClusterIP (172.17.0.4).<br />
: By default, while exposing a NodePort, a random port from the '''30000-32767''' range is automatically selected by the Kubernetes Master. If we do not want a dynamically assigned NodePort, we can specify a port number from that range while creating the Service.<br />
: The NodePort ServiceType is useful when we want to make our services accessible from the external world. The end-user connects to the Worker Nodes on the specified port, which forwards the traffic to the applications running inside the cluster. To access the application from the external world, administrators can configure a reverse proxy outside the Kubernetes cluster and map the specific endpoint to the respective port on the Worker Nodes.<br />
<br />
;LoadBalancer: With this ''ServiceType'', we have the following:<br />
:* NodePort and ClusterIP Services are automatically created, and the external load balancer will route to them;<br />
:* The Services are exposed at a static port on each Worker Node; and<br />
:* The Service is exposed externally using the underlying Cloud provider's load balancer feature.<br />
: The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of Load Balancers and has the respective support in Kubernetes, as is the case with the Google Cloud Platform and AWS.<br />
<br />
;ExternalIP : A Service can be mapped to an ExternalIP address if that address can route to one or more of the Worker Nodes. Traffic that ingresses into the cluster with the ExternalIP (as destination IP) on the Service port gets routed to one of the Service Endpoints. (Note that ExternalIPs are not managed by Kubernetes. The cluster administrator(s) must have configured the routing to map the ExternalIP address to one of the nodes.)<br />
<br />
;ExternalName : a special ''ServiceType'', which has no Selectors and does not define any endpoints. When accessed within the cluster, it returns a CNAME record of an externally configured service.<br />
: The primary use case of this ServiceType is to make externally configured services like <code>my-database.example.com</code> available inside the cluster, using just the name, like <code>my-database</code>, to other services inside the same Namespace (a minimal sketch follows below).<br />
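<br />
A minimal sketch of such an ExternalName Service (the names are taken from the example above):<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: my-database<br />
spec:<br />
  type: ExternalName<br />
  externalName: my-database.example.com<br />
</pre><br />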
<br />
===Deploying an application===<br />
<br />
<pre><br />
$ kubectl create -f - <<EOF<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: webserver<br />
spec:<br />
  replicas: 3<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: webserver<br />
    spec:<br />
      containers:<br />
      - name: webserver<br />
        image: nginx:alpine<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
<pre><br />
$ kubectl create -f - <<EOF<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: web-service<br />
  labels:<br />
    run: web-service<br />
spec:<br />
  type: NodePort<br />
  ports:<br />
  - port: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: webserver<br />
EOF<br />
</pre><br />
<br />
$ kubectl get service<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h<br />
web-service NodePort 10.104.107.132 <none> 80:32610/TCP 7m<br />
<br />
Note that "<code>32610</code>" port.<br />
<br />
* Get the IP address of your Minikube k8s cluster<br />
$ minikube ip<br />
192.168.99.100<br />
#~OR~<br />
$ minikube service web-service --url<br />
<nowiki>http://192.168.99.100:32610</nowiki><br />
<br />
* Now, check that your web service is serving up a default Nginx website:<br />
$ curl -I <nowiki>http://192.168.99.100:32610</nowiki><br />
HTTP/1.1 200 OK<br />
Server: nginx/1.13.8<br />
Date: Thu, 11 Jan 2018 00:27:51 GMT<br />
Content-Type: text/html<br />
Content-Length: 612<br />
Last-Modified: Wed, 10 Jan 2018 04:10:03 GMT<br />
Connection: keep-alive<br />
ETag: "5a55921b-264"<br />
Accept-Ranges: bytes<br />
<br />
Looks good!<br />
<br />
Finally, destroy the webserver deployment:<br />
$ kubectl delete deployments webserver<br />
<br />
===Using Ingress with Minikube===<br />
<br />
* First check that the Ingress add-on is enabled:<br />
$ minikube addons list | grep ingress<br />
- ingress: disabled<br />
<br />
If it is not, enable it with:<br />
$ minikube addons enable ingress<br />
$ minikube addons list | grep ingress<br />
- ingress: enabled<br />
<br />
* Create an Echo Server Deployment:<br />
<pre><br />
$ cat << EOF >deploy-echoserver.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  labels:<br />
    run: echoserver<br />
  name: echoserver<br />
  namespace: default<br />
spec:<br />
  replicas: 1<br />
  selector:<br />
    matchLabels:<br />
      run: echoserver<br />
  template:<br />
    metadata:<br />
      labels:<br />
        run: echoserver<br />
    spec:<br />
      containers:<br />
      - image: gcr.io/google_containers/echoserver:1.4<br />
        imagePullPolicy: IfNotPresent<br />
        name: echoserver<br />
        ports:<br />
        - containerPort: 8080<br />
          protocol: TCP<br />
      dnsPolicy: ClusterFirst<br />
      restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-echoserver.yml<br />
<br />
* Create the Cheddar cheese Deployment:<br />
<pre><br />
$ cat << EOF >deploy-cheddar-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  labels:<br />
    run: cheddar-cheese<br />
  name: cheddar-cheese<br />
  namespace: default<br />
spec:<br />
  replicas: 1<br />
  selector:<br />
    matchLabels:<br />
      run: cheddar-cheese<br />
  template:<br />
    metadata:<br />
      labels:<br />
        run: cheddar-cheese<br />
    spec:<br />
      containers:<br />
      - image: errm/cheese:cheddar<br />
        imagePullPolicy: IfNotPresent<br />
        name: cheddar-cheese<br />
        ports:<br />
        - containerPort: 80<br />
          protocol: TCP<br />
      dnsPolicy: ClusterFirst<br />
      restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-cheddar-cheese.yml<br />
<br />
* Create the Stilton cheese Deployment:<br />
<pre><br />
$ cat << EOF >deploy-stilton-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  labels:<br />
    run: stilton-cheese<br />
  name: stilton-cheese<br />
  namespace: default<br />
spec:<br />
  replicas: 1<br />
  selector:<br />
    matchLabels:<br />
      run: stilton-cheese<br />
  template:<br />
    metadata:<br />
      labels:<br />
        run: stilton-cheese<br />
    spec:<br />
      containers:<br />
      - image: errm/cheese:stilton<br />
        imagePullPolicy: IfNotPresent<br />
        name: stilton-cheese<br />
        ports:<br />
        - containerPort: 80<br />
          protocol: TCP<br />
      dnsPolicy: ClusterFirst<br />
      restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-stilton-cheese.yml<br />
<br />
* Create the Echo Server Service:<br />
<pre><br />
$ cat << EOF >svc-echoserver.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  labels:<br />
    run: echoserver<br />
  name: echoserver<br />
  namespace: default<br />
spec:<br />
  externalTrafficPolicy: Cluster<br />
  ports:<br />
  - nodePort: 31116<br />
    port: 8080<br />
    protocol: TCP<br />
    targetPort: 8080<br />
  selector:<br />
    run: echoserver<br />
  sessionAffinity: None<br />
  type: NodePort<br />
status:<br />
  loadBalancer: {}<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-echoserver.yml<br />
<br />
* Create the Cheddar cheese Service:<br />
<pre><br />
$ cat << EOF >svc-cheddar-cheese.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  labels:<br />
    run: cheddar-cheese<br />
  name: cheddar-cheese<br />
  namespace: default<br />
spec:<br />
  externalTrafficPolicy: Cluster<br />
  ports:<br />
  - nodePort: 32467<br />
    port: 80<br />
    protocol: TCP<br />
    targetPort: 80<br />
  selector:<br />
    run: cheddar-cheese<br />
  sessionAffinity: None<br />
  type: NodePort<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-cheddar-cheese.yml<br />
<br />
* Create the Stilton cheese Service:<br />
<pre><br />
$ cat << EOF >svc-stilton-cheese.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  labels:<br />
    run: stilton-cheese<br />
  name: stilton-cheese<br />
  namespace: default<br />
spec:<br />
  externalTrafficPolicy: Cluster<br />
  ports:<br />
  - nodePort: 30197<br />
    port: 80<br />
    protocol: TCP<br />
    targetPort: 80<br />
  selector:<br />
    run: stilton-cheese<br />
  sessionAffinity: None<br />
  type: NodePort<br />
status:<br />
  loadBalancer: {}<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-stilton-cheese.yml<br />
<br />
* Create the Ingress for the above Services:<br />
<pre><br />
$ cat << EOF >ingress-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
  name: ingress-cheese<br />
  annotations:<br />
    nginx.ingress.kubernetes.io/rewrite-target: /<br />
spec:<br />
  backend:<br />
    serviceName: default-http-backend<br />
    servicePort: 80<br />
  rules:<br />
  - host: myminikube.info<br />
    http:<br />
      paths:<br />
      - path: /<br />
        backend:<br />
          serviceName: echoserver<br />
          servicePort: 8080<br />
  - host: cheeses.all<br />
    http:<br />
      paths:<br />
      - path: /stilton<br />
        backend:<br />
          serviceName: stilton-cheese<br />
          servicePort: 80<br />
      - path: /cheddar<br />
        backend:<br />
          serviceName: cheddar-cheese<br />
          servicePort: 80<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f ingress-cheese.yml<br />
<br />
* Check that everything is up:<br />
<pre><br />
$ kubectl get all<br />
NAME READY STATUS RESTARTS AGE<br />
pod/cheddar-cheese-d6d6587c7-4bgcz 1/1 Running 0 12m<br />
pod/echoserver-55f97d5bff-pdv65 1/1 Running 0 12m<br />
pod/stilton-cheese-6d64cbc79-g7h4w 1/1 Running 0 12m<br />
<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
service/cheddar-cheese NodePort 10.109.238.92 <none> 80:32467/TCP 12m<br />
service/echoserver NodePort 10.98.60.194 <none> 8080:31116/TCP 12m<br />
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h<br />
service/stilton-cheese NodePort 10.108.175.207 <none> 80:30197/TCP 12m<br />
<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
deployment.apps/cheddar-cheese 1 1 1 1 12m<br />
deployment.apps/echoserver 1 1 1 1 12m<br />
deployment.apps/stilton-cheese 1 1 1 1 12m<br />
<br />
NAME DESIRED CURRENT READY AGE<br />
replicaset.apps/cheddar-cheese-d6d6587c7 1 1 1 12m<br />
replicaset.apps/echoserver-55f97d5bff 1 1 1 12m<br />
replicaset.apps/stilton-cheese-6d64cbc79 1 1 1 12m<br />
<br />
$ kubectl get ing<br />
NAME HOSTS ADDRESS PORTS AGE<br />
ingress-cheese myminikube.info,cheeses.all 10.0.2.15 80 12m<br />
</pre><br />
<br />
* Add your host aliases:<br />
$ echo "$(minikube ip) myminikube.info cheeses.all" | sudo tee -a /etc/hosts<br />
<br />
* Now, either using your browser or [[curl]], check that you can reach all of the endpoints defined in the Ingress:<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null cheeses.all/cheddar/ # Should return '200'<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null cheeses.all/stilton/ # Should return '200'<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null myminikube.info # Should return '200'<br />
<br />
* You can also see the Nginx logs for the above requests with:<br />
$ kubectl --namespace kube-system logs \<br />
--selector app.kubernetes.io/name=nginx-ingress-controller<br />
<br />
* You can also view the Nginx configuration file (and the settings created by the above Ingress) with:<br />
$ NGINX_POD=$(kubectl --namespace kube-system get pods \<br />
--selector app.kubernetes.io/name=nginx-ingress-controller \<br />
--output jsonpath='{.items[0].metadata.name}')<br />
$ kubectl --namespace kube-system exec -it ${NGINX_POD} -- cat /etc/nginx/nginx.conf<br />
<br />
* Get the version of the Nginx Ingress controller installed:<br />
<pre><br />
$ kubectl --namespace kube-system exec -it ${NGINX_POD} -- /nginx-ingress-controller --version<br />
-------------------------------------------------------------------------------<br />
NGINX Ingress controller<br />
Release: 0.19.0<br />
Build: git-05025d6<br />
Repository: https://github.com/kubernetes/ingress-nginx.git<br />
-------------------------------------------------------------------------------<br />
</pre><br />
<br />
==Kubectl==<br />
<br />
<code>kubectl</code> controls the Kubernetes cluster manager.<br />
<br />
* View your current configuration:<br />
$ kubectl config view<br />
<br />
* Switch between clusters:<br />
$ kubectl config use-context <context_name><br />
<br />
* Remove a cluster:<br />
$ kubectl config unset contexts.<context_name><br />
$ kubectl config unset users.<user_name><br />
$ kubectl config unset clusters.<cluster_name><br />
<br />
* Sort Pods by age:<br />
$ kubectl get pods --sort-by=.status.startTime<br />
$ kubectl get pods --all-namespaces --sort-by=.metadata.creationTimestamp<br />
<br />
* Backup all primitives deployed in a given k8s cluster:<br />
<pre><br />
$ kubectl api-resources --verbs=list --namespaced -o name \<br />
| xargs -n1 -I{} bash -c "kubectl get {} --all-namespaces -oyaml && echo ---" \<br />
> k8s_backup.yaml<br />
</pre><br />
<br />
===kubectl explain===<br />
<br />
;List the fields for supported resources.<br />
<br />
* Get the documentation of a resource (aka "kind") and its fields:<br />
<pre><br />
$ kubectl explain deployment<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
DESCRIPTION:<br />
Deployment enables declarative updates for Pods and ReplicaSets.<br />
<br />
FIELDS:<br />
apiVersion <string><br />
APIVersion defines the versioned schema of this representation of an<br />
object. Servers should convert recognized schemas to the latest internal<br />
value, and may reject unrecognized values. More info:<br />
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources<br />
<br />
kind <string><br />
Kind is a string value representing the REST resource this object<br />
represents. Servers may infer this from the endpoint the client submits<br />
requests to. Cannot be updated. In CamelCase. More info:<br />
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds<br />
<br />
metadata <Object><br />
Standard object metadata.<br />
<br />
spec <Object><br />
Specification of the desired behavior of the Deployment.<br />
<br />
status <Object><br />
Most recently observed status of the Deployment<br />
</pre><br />
<br />
* Get a list of all the resource types and their latest supported version:<br />
<pre><br />
$ for kind in $(kubectl api-resources | tail -n +2 | awk '{print $1}'); do<br />
kubectl explain ${kind};<br />
done | grep -E "^KIND:|^VERSION:"<br />
<br />
KIND: Binding<br />
VERSION: v1<br />
KIND: ComponentStatus<br />
VERSION: v1<br />
KIND: ConfigMap<br />
VERSION: v1<br />
...<br />
</pre><br />
<br />
* Get a list of ''all'' allowable fields for a given primitive:<br />
<pre><br />
$ kubectl explain deployment --recursive | head<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
DESCRIPTION:<br />
Deployment enables declarative updates for Pods and ReplicaSets.<br />
<br />
FIELDS:<br />
apiVersion <string><br />
kind <string><br />
metadata <Object><br />
</pre><br />
<br />
* Get documentation ("man page"-style) for a given field in a given primitive:<br />
<pre><br />
$ kubectl explain deployment.status.availableReplicas<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
FIELD: availableReplicas <integer><br />
<br />
DESCRIPTION:<br />
Total number of available pods (ready for at least minReadySeconds)<br />
targeted by this deployment.<br />
</pre><br />
<br />
===Merge kubeconfig files===<br />
<br />
* Reference which kubeconfig files you wish to merge:<br />
$ export KUBECONFIG=$HOME/.kube/dev.yaml:$HOME/.kube/prod.yaml<br />
<br />
* Flatten them:<br />
$ kubectl config view --flatten >> $HOME/.kube/config<br />
<br />
* Unset:<br />
$ unset KUBECONFIG<br />
<br />
Merge complete.<br />
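<br />
To verify, list the contexts now available from the merged file:<br />
$ kubectl config get-contexts<br />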
<br />
==Namespaces==<br />
<br />
See: [https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ Namespaces] in the official documentation.<br />
<br />
; Create a Namespace<br />
<br />
<pre><br />
apiVersion: v1<br />
kind: Namespace<br />
metadata:<br />
  name: dev<br />
</pre><br />
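<br />
The same Namespace can also be created imperatively, and targeted with the standard <code>-n/--namespace</code> flag:<br />
$ kubectl create namespace dev<br />
$ kubectl get namespaces<br />
$ kubectl -n dev get pods<br />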
<br />
==Pods==<br />
<br />
; Create a Pod that has an Init Container<br />
<br />
In this example, I will create a Pod that has one application Container and one Init Container. The init container runs to completion before the application container starts.<br />
<br />
<pre><br />
$ cat << EOF >init-demo.yml<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: init-demo<br />
  labels:<br />
    app: demo<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx<br />
    ports:<br />
    - containerPort: 80<br />
    volumeMounts:<br />
    - name: workdir<br />
      mountPath: /usr/share/nginx/html<br />
  # These containers are run during pod initialization<br />
  initContainers:<br />
  - name: install<br />
    image: busybox<br />
    command:<br />
    - wget<br />
    - "-O"<br />
    - "/work-dir/index.html"<br />
    - https://example.com<br />
    volumeMounts:<br />
    - name: workdir<br />
      mountPath: "/work-dir"<br />
  dnsPolicy: Default<br />
  volumes:<br />
  - name: workdir<br />
    emptyDir: {}<br />
EOF<br />
</pre><br />
<br />
The above Pod YAML will first create the init container using the busybox image, which will download the HTML of the example.com website and save it to a file (<code>index.html</code>) on the Pod volume called "workdir". After the init container completes, the Nginx container starts and serves that <code>index.html</code> on port 80 (the file is located at <code>/usr/share/nginx/html/index.html</code> inside the Nginx container, via the volume mount).<br />
<br />
* Now, create this Pod:<br />
$ kubectl create --validate -f init-demo.yml<br />
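<br />
You can watch the Pod pass through the init phase; the STATUS column shows <code>Init:0/1</code> while the init container runs, then <code>Running</code>:<br />
$ kubectl get pod init-demo --watch<br />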
<br />
* Create a Service:<br />
<pre><br />
$ cat << EOF >example.yml<br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: example<br />
spec:<br />
  ports:<br />
  - port: 8000<br />
    targetPort: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: demo<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f example.yml<br />
<br />
* Check that the Service serves up the page fetched from <nowiki>https://example.com</nowiki>:<br />
$ curl -sI $(kubectl get svc/example -o jsonpath='{.spec.clusterIP}'):8000 | grep ^HTTP<br />
HTTP/1.1 200 OK<br />
<br />
==Deployments==<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Deployment]'' controller provides declarative updates for Pods and ReplicaSets.<br />
<br />
You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.<br />
<br />
; Creating a Deployment<br />
<br />
The following is an example of a Deployment. It creates a ReplicaSet to bring up three [https://hub.docker.com/_/nginx/ Nginx] Pods:<br />
<pre><br />
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment<br />
  labels:<br />
    app: nginx<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />
<br />
* Check the syntax of the Deployment (YAML):<br />
$ kubectl create -f nginx-deployment.yml --dry-run<br />
deployment.apps/nginx-deployment created (dry run)<br />
<br />
* Create the Deployment:<br />
$ kubectl create --record -f nginx-deployment.yml <br />
deployment "nginx-deployment" created<br />
Note: By appending <code>--record</code> to the above command, we are telling the API to record the current command in the annotations of the created or updated resource. This is useful for future review, such as investigating which commands were executed in each Deployment revision.<br />
<br />
* Get information about our Deployment:<br />
$ kubectl get deployments<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment 3 3 3 3 24s<br />
<br />
$ kubectl describe deployment/nginx-deployment<br />
<pre><br />
Name: nginx-deployment<br />
Namespace: default<br />
CreationTimestamp: Tue, 30 Jan 2018 23:28:43 +0000<br />
Labels: app=nginx<br />
Annotations: deployment.kubernetes.io/revision=1<br />
kubernetes.io/change-cause=kubectl create --record=true --filename=nginx-deployment.yml<br />
Selector: app=nginx<br />
Replicas: 3 desired | 3 updated | 3 total | 0 available | 3 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 25% max unavailable, 25% max surge<br />
Pod Template:<br />
Labels: app=nginx<br />
Containers:<br />
nginx:<br />
Image: nginx:1.7.9<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Conditions:<br />
Type Status Reason<br />
---- ------ ------<br />
Available False MinimumReplicasUnavailable<br />
Progressing True ReplicaSetUpdated<br />
OldReplicaSets: <none><br />
NewReplicaSet: nginx-deployment-6c54bd5869 (3/3 replicas created)<br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal ScalingReplicaSet 28s deployment-controller Scaled up replica set nginx-deployment-6c54bd5869 to 3<br />
</pre><br />
<br />
* Get information about the ReplicaSet created by the above Deployment:<br />
$ kubectl get rs<br />
NAME DESIRED CURRENT READY AGE<br />
nginx-deployment-6c54bd5869 3 3 3 3m<br />
<br />
$ kubectl describe rs/nginx-deployment-6c54bd5869<br />
<pre><br />
Name: nginx-deployment-6c54bd5869<br />
Namespace: default<br />
Selector: app=nginx,pod-template-hash=2710681425<br />
Labels: app=nginx<br />
pod-template-hash=2710681425<br />
Annotations: deployment.kubernetes.io/desired-replicas=3<br />
deployment.kubernetes.io/max-replicas=4<br />
deployment.kubernetes.io/revision=1<br />
kubernetes.io/change-cause=kubectl create --record=true --filename=nginx-deployment.yml<br />
Controlled By: Deployment/nginx-deployment<br />
Replicas: 3 current / 3 desired<br />
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed<br />
Pod Template:<br />
Labels: app=nginx<br />
pod-template-hash=2710681425<br />
Containers:<br />
nginx:<br />
Image: nginx:1.7.9<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-k9mh4<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-pphjt<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-n4fj5<br />
</pre><br />
<br />
* Get information about the Pods created by this Deployment:<br />
$ kubectl get pods --show-labels -l app=nginx -o wide<br />
NAME READY STATUS RESTARTS AGE IP NODE LABELS<br />
nginx-deployment-6c54bd5869-k9mh4 1/1 Running 0 5m 10.244.1.5 k8s.worker1.local app=nginx,pod-template-hash=2710681425<br />
nginx-deployment-6c54bd5869-n4fj5 1/1 Running 0 5m 10.244.1.6 k8s.worker2.local app=nginx,pod-template-hash=2710681425<br />
nginx-deployment-6c54bd5869-pphjt 1/1 Running 0 5m 10.244.1.7 k8s.worker3.local app=nginx,pod-template-hash=2710681425<br />
<br />
;Updating a Deployment<br />
<br />
Note: A Deployment's rollout is triggered if, and only if, the Deployment's pod template (that is, <code>.spec.template</code>) is changed (for example, if the labels or container images of the template are updated). Other updates, such as scaling the Deployment, do not trigger a rollout.<br />
<br />
Suppose that we want to update the Nginx Pods in the above Deployment to use the <code>nginx:1.9.1</code> image instead of the <code>nginx:1.7.9</code> image.<br />
<br />
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
deployment "nginx-deployment" image updated<br />
<br />
Alternatively, we can edit the Deployment and change <code>.spec.template.spec.containers[0].image</code> from <code>nginx:1.7.9</code> to <code>nginx:1.9.1</code>:<br />
<br />
$ kubectl edit deployment/nginx-deployment<br />
deployment "nginx-deployment" edited<br />
<br />
* Check on the rollout status:<br />
<pre><br />
$ kubectl rollout status deployment/nginx-deployment<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 old replicas are pending termination...<br />
Waiting for rollout to finish: 1 old replicas are pending termination...<br />
deployment "nginx-deployment" successfully rolled out<br />
</pre><br />
<br />
* Get information about the updated Deployment:<br />
$ kubectl get deploy<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment 3 3 3 3 18m<br />
<br />
$ kubectl get rs<br />
NAME DESIRED CURRENT READY AGE<br />
nginx-deployment-5964dfd755 3 3 3 1m # <- new ReplicaSet using nginx:1.9.1<br />
nginx-deployment-6c54bd5869 0 0 0 17m # <- old ReplicaSet using nginx:1.7.9<br />
<br />
$ kubectl rollout history deployment/nginx-deployment<br />
deployments "nginx-deployment"<br />
REVISION CHANGE-CAUSE<br />
1 kubectl create --record=true --filename=nginx-deployment.yml<br />
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
<br />
$ kubectl rollout history deployment/nginx-deployment --revision=2<br />
<br />
deployments "nginx-deployment" with revision #2<br />
Pod Template:<br />
Labels: app=nginx<br />
pod-template-hash=1520898311<br />
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
Containers:<br />
nginx:<br />
Image: nginx:1.9.1<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
<br />
; Rolling back to a previous revision<br />
<br />
Undo the current rollout and rollback to the previous revision:<br />
$ kubectl rollout undo deployment/nginx-deployment<br />
deployment "nginx-deployment" rolled back<br />
<br />
Alternatively, you can roll back to a specific revision by specifying it with <code>--to-revision</code>:<br />
$ kubectl rollout undo deployment/nginx-deployment --to-revision=1<br />
deployment "nginx-deployment" rolled back<br />
<br />
==Volume management==<br />
On-disk files in a container are ephemeral, which presents some problems for non-trivial applications when running in containers. First, when a container crashes, kubelet will restart it, but the files will be lost (i.e., the container starts with a clean state). Second, when running containers together in a Pod it is often necessary to share files between those containers. The Kubernetes ''[https://kubernetes.io/docs/concepts/storage/volumes/ Volumes]'' abstraction solves both of these problems. A Volume is essentially a directory backed by a storage medium. The storage medium and its content are determined by the Volume Type.<br />
<br />
In Kubernetes, a Volume is attached to a Pod and shared among the containers of that Pod. The Volume has the same life span as the Pod, and it outlives the containers of the Pod &mdash; this allows data to be preserved across container restarts.<br />
<br />
Kubernetes resolves the problem of persistent storage with the Persistent Volume subsystem, which provides APIs for users and administrators to manage and consume storage. To manage the Volume, it uses the PersistentVolume (PV) API resource type, and to consume it, it uses the PersistentVolumeClaim (PVC) API resource type.<br />
<br />
; [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes PersistentVolume] (PV) : a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.<br />
<br />
; [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims PersistentVolumeClaim] (PVC) : a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Persistent Volume Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).<br />
<br />
A Persistent Volume is a network-attached storage in the cluster, which is provisioned by the administrator.<br />
<br />
Persistent Volumes can be provisioned statically by the administrator, or dynamically, based on the StorageClass resource. A StorageClass contains pre-defined provisioners and parameters to create a Persistent Volume.<br />
<br />
A PersistentVolumeClaim (PVC) is a request for storage by a user. Users request Persistent Volume resources based on size, access modes, etc. Once a suitable Persistent Volume is found, it is bound to a Persistent Volume Claim. After a successful bind, the Persistent Volume Claim resource can be used in a Pod. Once a user finishes its work, the attached Persistent Volumes can be released. The underlying Persistent Volumes can then be reclaimed and recycled for future usage. See [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims Persistent Volumes] for details.<br />
<br />
;Access Modes<br />
* Each of the following access modes ''must'' be supported by the storage resource provider (e.g., NFS, AWS EBS, etc.) if they are to be used.<br />
* ReadWriteOnce (RWO) &mdash; volume can be mounted as read/write by one node only.<br />
* ReadOnlyMany (ROX) &mdash; volume can be mounted read-only by many nodes.<br />
* ReadWriteMany (RWX) &mdash; volume can be mounted read/write by many nodes.<br />
A volume can only be mounted using one access mode at a time, regardless of the modes that are supported.<br />
<br />
; Example #1 - Using Host Volumes<br />
As an example of how to use volumes, we can modify our previous "webserver" Deployment (see above) to look like the following:<br />
<br />
$ cat webserver.yml<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: webserver<br />
spec:<br />
  replicas: 3<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: webserver<br />
    spec:<br />
      containers:<br />
      - name: webserver<br />
        image: nginx:alpine<br />
        ports:<br />
        - containerPort: 80<br />
        volumeMounts:<br />
        - name: hostvol<br />
          mountPath: /usr/share/nginx/html<br />
      volumes:<br />
      - name: hostvol<br />
        hostPath:<br />
          path: /home/docker/vol<br />
<br />
And use the same Service:<br />
$ cat webserver-svc.yml<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: web-service<br />
  labels:<br />
    run: web-service<br />
spec:<br />
  type: NodePort<br />
  ports:<br />
  - port: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: webserver<br />
<br />
Then create the deployment and service:<br />
$ kubectl create -f webserver.yml<br />
$ kubectl create -f webserver-svc.yml<br />
<br />
Then, SSH into the Minikube VM and run the following commands:<br />
$ minikube ssh<br />
minikube> mkdir -p /home/docker/vol<br />
minikube> echo "Christoph testing" > /home/docker/vol/index.html<br />
minikube> exit<br />
<br />
Get the webserver IP and port:<br />
$ minikube ip<br />
192.168.99.100<br />
$ kubectl get svc/web-service -o json | jq '.spec.ports[].nodePort'<br />
32610<br />
# OR<br />
$ minikube service web-service --url<br />
<nowiki>http://192.168.99.100:32610</nowiki><br />
<br />
$ curl <nowiki>http://192.168.99.100:32610</nowiki><br />
Christoph testing<br />
<br />
; Example #2 - Using NFS<br />
<br />
* First, set up a server to host your NFS exports (e.g., <code>sudo apt-get install -y nfs-kernel-server</code>).<br />
* On your NFS server, do the following:<br />
$ mkdir -p /var/nfs/general<br />
$ cat << EOF >>/etc/exports<br />
/var/nfs/general 10.100.1.2(rw,sync,no_subtree_check) 10.100.1.3(rw,sync,no_subtree_check) 10.100.1.4(rw,sync,no_subtree_check)<br />
EOF<br />
$ sudo exportfs -ra  # re-export so the new entry takes effect<br />
where the <code>10.x</code> IPs are the private IPs of your k8s nodes (both Master and Worker nodes).<br />
* Make sure to install <code>nfs-common</code> on each of the k8s nodes that will be connecting to the NFS server.<br />
<br />
Now, on the k8s Master node, create a Persistent Volume (PV) and Persistent Volume Claim (PVC):<br />
<br />
* Create a Persistent Volume (PV):<br />
$ cat << EOF >pv.yml<br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
  name: mypv<br />
spec:<br />
  capacity:<br />
    storage: 1Gi<br />
  volumeMode: Filesystem<br />
  accessModes:<br />
  - ReadWriteMany<br />
  persistentVolumeReclaimPolicy: Recycle<br />
  nfs:<br />
    path: /var/nfs/general<br />
    server: 10.100.1.10  # NFS Server's private IP<br />
    readOnly: false<br />
EOF<br />
$ kubectl create --validate -f pv.yml<br />
$ kubectl get pv<br />
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE<br />
mypv      1Gi        RWX            Recycle          Available<br />
* Create a Persistent Volume Claim (PVC):<br />
$ cat << EOF >pvc.yml<br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
  name: nfs-pvc<br />
spec:<br />
  accessModes:<br />
  - ReadWriteMany<br />
  resources:<br />
    requests:<br />
      storage: 1Gi<br />
EOF<br />
$ kubectl create --validate -f pvc.yml<br />
$ kubectl get pvc<br />
NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE<br />
nfs-pvc   Bound     mypv      1Gi        RWX<br />
$ kubectl get pv<br />
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM             STORAGECLASS   REASON    AGE<br />
mypv      1Gi        RWX            Recycle          Bound     default/nfs-pvc                            11m<br />
<br />
* Create a Pod:<br />
$ cat << EOF >nfs-pod.yml<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nfs-pod<br />
  labels:<br />
    name: nfs-pod<br />
spec:<br />
  containers:<br />
  - name: nfs-ctn<br />
    image: busybox<br />
    command:<br />
    - sleep<br />
    - "3600"<br />
    volumeMounts:<br />
    - name: nfsvol<br />
      mountPath: /tmp<br />
  restartPolicy: Always<br />
  securityContext:<br />
    fsGroup: 65534<br />
    runAsUser: 65534<br />
  volumes:<br />
  - name: nfsvol<br />
    persistentVolumeClaim:<br />
      claimName: nfs-pvc<br />
EOF<br />
$ kubectl create --validate -f nfs-pod.yml<br />
$ kubectl get pods -o wide<br />
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE<br />
nfs-pod   1/1       Running   0          1m        10.244.2.22   k8s.worker01.local<br />
<br />
* Get a shell from the <code>nfs-pod</code> Pod:<br />
$ kubectl exec -it nfs-pod -- sh<br />
/ $ df -h<br />
Filesystem Size Used Available Use% Mounted on<br />
172.31.119.58:/var/nfs/general<br />
19.3G 1.8G 17.5G 9% /tmp<br />
...<br />
/ $ touch /tmp/this-is-from-the-pod<br />
<br />
* On the NFS server:<br />
$ ls -l /var/nfs/general/<br />
total 0<br />
-rw-r--r-- 1 nobody nogroup 0 Jan 18 23:32 this-is-from-the-pod<br />
<br />
It works!<br />
<br />
==ConfigMaps and Secrets==<br />
While deploying an application, we may need to pass runtime parameters, such as configuration details, passwords, etc. For example, let's assume we need to deploy ten different applications for our customers, and, for each customer, we just need to change the name of the company in the UI. Instead of creating ten different Docker images, one per customer, we can just use the template image and pass the customers' names as a runtime parameter. In such cases, we can use the ConfigMap API resource. Similarly, when we want to pass sensitive information, we can use the Secret API resource. Think ''Secrets'' (for confidential data) and ''ConfigMaps'' (for non-confidential data).<br />
<br />
[https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/ ConfigMaps] allow you to decouple configuration artifacts from image content to keep containerized applications portable. Using ConfigMaps, we can pass configuration details as key-value pairs, which can be later consumed by Pods or any other system components, such as controllers. We can create ConfigMaps in two ways:<br />
<br />
* From literal values; and<br />
* From files.<br />
<br />
<br />
;ConfigMaps<br />
<br />
* Create a ConfigMap:<br />
$ kubectl create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2<br />
configmap "my-config" created<br />
$ kubectl get configmaps my-config -o yaml<br />
<pre><br />
apiVersion: v1<br />
data:<br />
  key1: value1<br />
  key2: value2<br />
kind: ConfigMap<br />
metadata:<br />
  creationTimestamp: 2018-01-11T23:57:44Z<br />
  name: my-config<br />
  namespace: default<br />
  resourceVersion: "117110"<br />
  selfLink: /api/v1/namespaces/default/configmaps/my-config<br />
  uid: 37a43e39-f72b-11e7-8370-08002721601f<br />
</pre><br />
$ kubectl describe configmap/my-config<br />
<pre><br />
Name: my-config<br />
Namespace: default<br />
Labels: <none><br />
Annotations: <none><br />
<br />
Data<br />
====<br />
key2:<br />
----<br />
value2<br />
key1:<br />
----<br />
value1<br />
Events: <none><br />
</pre><br />
<br />
; Create a ConfigMap from a configuration file<br />
<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: ConfigMap<br />
metadata:<br />
  name: customer1<br />
data:<br />
  TEXT1: Customer1_Company<br />
  TEXT2: Welcomes You<br />
  COMPANY: Customer1 Company Technology, LLC.<br />
EOF<br />
</pre><br />
<br />
We can get the values of the given key as environment variables inside a Pod. In the following example, while creating the Deployment, we are assigning values for environment variables from the customer1 ConfigMap:<br />
<pre><br />
....<br />
containers:<br />
- name: my-app<br />
  image: foobar<br />
  env:<br />
  - name: MONGODB_HOST<br />
    value: mongodb<br />
  - name: TEXT1<br />
    valueFrom:<br />
      configMapKeyRef:<br />
        name: customer1<br />
        key: TEXT1<br />
  - name: TEXT2<br />
    valueFrom:<br />
      configMapKeyRef:<br />
        name: customer1<br />
        key: TEXT2<br />
  - name: COMPANY<br />
    valueFrom:<br />
      configMapKeyRef:<br />
        name: customer1<br />
        key: COMPANY<br />
....<br />
</pre><br />
With the above, we will get the <code>TEXT1</code> environment variable set to <code>Customer1_Company</code>, <code>TEXT2</code> environment variable set to <code>Welcomes You</code>, and so on.<br />
<br />
We can also mount a ConfigMap as a Volume inside a Pod. For each key, we will see a file in the mount path, and the content of that file becomes the respective key's value. For details, see [https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#adding-configmap-data-to-a-volume here].<br />
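<br />
A minimal sketch of that approach, reusing the <code>customer1</code> ConfigMap from above (the volume name and mount path are arbitrary):<br />
<pre><br />
....<br />
  containers:<br />
  - name: my-app<br />
    image: foobar<br />
    volumeMounts:<br />
    - name: config-vol<br />
      mountPath: /etc/config<br />
  volumes:<br />
  - name: config-vol<br />
    configMap:<br />
      name: customer1<br />
....<br />
</pre><br />
Inside the container, <code>/etc/config/TEXT1</code> would then contain <code>Customer1_Company</code>.<br />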
<br />
You can also use ConfigMaps to configure your cluster to use, as an example, 8.8.8.8 and 8.8.4.4 as its upstream DNS servers:<br />
<pre><br />
kind: ConfigMap<br />
apiVersion: v1<br />
metadata:<br />
  name: kube-dns<br />
  namespace: kube-system<br />
data:<br />
  upstreamNameservers: |<br />
    ["8.8.8.8", "8.8.4.4"]<br />
</pre><br />
<br />
; Secrets<br />
<br />
Objects of type [https://kubernetes.io/docs/concepts/configuration/secret/ Secret] are intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a Secret is safer and more flexible than putting it verbatim in a pod definition or in a docker image.<br />
<br />
As an example, assume that we have a Wordpress blog application, in which our <code>wordpress</code> frontend connects to the [[MySQL]] database backend using a password. While creating the Deployment for <code>wordpress</code>, we can put the MySQL password in the Deployment's YAML file, but the password would not be protected. The password would be available to anyone who has access to the configuration file.<br />
<br />
In situations such as the one we just mentioned, the Secret object can help. With Secrets, we can share sensitive information like passwords, tokens, or keys in the form of key-value pairs, similar to ConfigMaps; thus, we can control how the information in a Secret is used, reducing the risk for accidental exposures. In Deployments or other system components, the Secret object is ''referenced'', without exposing its content.<br />
<br />
It is important to keep in mind that the Secret data is stored as plain text inside etcd. Administrators must limit the access to the API Server and etcd.<br />
<br />
To create a Secret using the <code>kubectl create secret</code> command, we need to first create a file with a password, and then pass it as an argument.<br />
<br />
* Create a file with your MySQL password:<br />
$ echo mysqlpasswd | tr -d '\n' > password.txt<br />
<br />
* Create the ''Secret'':<br />
$ kubectl create secret generic mysql-passwd --from-file=password.txt<br />
$ kubectl describe secret/mysql-passwd<br />
<pre><br />
Name: mysql-passwd<br />
Namespace: default<br />
Labels: <none><br />
Annotations: <none><br />
<br />
Type: Opaque<br />
<br />
Data<br />
====<br />
password.txt: 11 bytes<br />
</pre><br />
<br />
We can also create a Secret manually, using the YAML configuration file. With Secrets, each object data must be encoded using base64. If we want to have a configuration file for our Secret, we must first get the base64 encoding for our password:<br />
<br />
$ cat password.txt | base64<br />
bXlzcWxwYXNzd2Q=<br />
<br />
and then use it in the configuration file:<br />
<pre><br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
  name: mysql-passwd<br />
type: Opaque<br />
data:<br />
  password: bXlzcWxwYXNzd2Q=<br />
</pre><br />
Note that base64 encoding does not do any encryption and anyone can easily decode it:<br />
<br />
$ echo "bXlzcWxwYXNzd2Q=" | base64 -d # => mysqlpasswd<br />
<br />
Therefore, make sure you do not commit a Secret's configuration file in the source code.<br />
<br />
We can get Secrets to be used by containers in a Pod by mounting them as data volumes, or by exposing them as environment variables.<br />
<br />
We can reference a Secret and assign the value of its key as an environment variable (<code>WORDPRESS_DB_PASSWORD</code>):<br />
<pre><br />
.....<br />
spec:<br />
  containers:<br />
  - image: wordpress:4.7.3-apache<br />
    name: wordpress<br />
    env:<br />
    - name: WORDPRESS_DB_HOST<br />
      value: wordpress-mysql<br />
    - name: WORDPRESS_DB_PASSWORD<br />
      valueFrom:<br />
        secretKeyRef:<br />
          name: mysql-passwd<br />
          key: password.txt<br />
.....<br />
</pre><br />
<br />
We can also mount a Secret as a Volume inside a Pod, as sketched below. A file will be created for each key mentioned in the Secret, with the respective value as its content. See [https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod here] for details.<br />
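<br />
A minimal sketch of the volume approach, reusing the <code>mysql-passwd</code> Secret from above (the volume name and mount path are arbitrary):<br />
<pre><br />
.....<br />
  containers:<br />
  - image: wordpress:4.7.3-apache<br />
    name: wordpress<br />
    volumeMounts:<br />
    - name: secret-vol<br />
      mountPath: /etc/secret<br />
      readOnly: true<br />
  volumes:<br />
  - name: secret-vol<br />
    secret:<br />
      secretName: mysql-passwd<br />
.....<br />
</pre><br />
The password would then be readable inside the container at <code>/etc/secret/password.txt</code>.<br />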
<br />
==Ingress==<br />
Among the ServiceTypes mentioned earlier, NodePort and LoadBalancer are the most often used. For the LoadBalancer ServiceType, we need to have the support from the underlying infrastructure. Even after having the support, we may not want to use it for every Service, as LoadBalancer resources are limited and they can increase costs significantly. Managing the NodePort ServiceType can also be tricky at times, as we need to keep updating our proxy settings and keep track of the assigned ports. In this section, we will explore the Ingress API object, which is another method we can use to access our applications from the external world.<br />
<br />
An ''[https://kubernetes.io/docs/concepts/services-networking/ingress/ Ingress]'' is a collection of rules that allow inbound connections to reach the cluster Services. With Services, routing rules are attached to a given Service. They exist for as long as the Service exists. If we can somehow decouple the routing rules from the application, we can then update our application without worrying about its external access. This can be done using the Ingress resource. Ingress can provide load balancing, SSL/TLS termination, and name-based virtual hosting and/or routing.<br />
<br />
To allow the inbound connection to reach the cluster Services, Ingress configures a Layer 7 HTTP load balancer for Services and provides the following:<br />
<br />
* TLS (Transport Layer Security)<br />
* Name-based virtual hosting <br />
* Path-based routing<br />
* Custom rules.<br />
<br />
With Ingress, users do not connect directly to a Service. Users reach the Ingress endpoint, and, from there, the request is forwarded to the respective Service. You can see an example Ingress definition below:<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
  name: web-ingress<br />
spec:<br />
  rules:<br />
  - host: blue.example.com<br />
    http:<br />
      paths:<br />
      - backend:<br />
          serviceName: blue-service<br />
          servicePort: 80<br />
  - host: green.example.com<br />
    http:<br />
      paths:<br />
      - backend:<br />
          serviceName: green-service<br />
          servicePort: 80<br />
</pre><br />
<br />
According to the example just provided, user requests to both <code>blue.example.com</code> and <code>green.example.com</code> would go to the same Ingress endpoint, and, from there, they would be forwarded to <code>blue-service</code> and <code>green-service</code>, respectively. Here, we have seen an example of a Name-Based Virtual Hosting Ingress rule. <br />
<br />
We can also have Fan Out Ingress rules, in which we send requests like <code>example.com/blue</code> and <code>example.com/green</code>, which would be forwarded to <code>blue-service</code> and <code>green-service</code>, respectively.<br />
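<br />
A sketch of such a Fan Out rule, reusing the two Services from the example above (the host and paths are illustrative):<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
  name: fanout-ingress<br />
spec:<br />
  rules:<br />
  - host: example.com<br />
    http:<br />
      paths:<br />
      - path: /blue<br />
        backend:<br />
          serviceName: blue-service<br />
          servicePort: 80<br />
      - path: /green<br />
        backend:<br />
          serviceName: green-service<br />
          servicePort: 80<br />
</pre><br />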
<br />
To secure an Ingress, you must create a ''Secret''. The TLS secret must contain keys named <code>tls.crt</code> and <code>tls.key</code>, which contain the certificate and private key to use for TLS.<br />
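<br />
For example, assuming you already have a <code>tls.crt</code> and <code>tls.key</code> for <code>blue.example.com</code>, a sketch would be:<br />
$ kubectl create secret tls blue-tls --cert=tls.crt --key=tls.key<br />
and then the Secret is referenced from the Ingress spec:<br />
<pre><br />
spec:<br />
  tls:<br />
  - hosts:<br />
    - blue.example.com<br />
    secretName: blue-tls<br />
  rules:<br />
  ....<br />
</pre><br />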
<br />
The Ingress resource does not do any request forwarding by itself. All of the magic is done using the ''Ingress Controller''.<br />
<br />
; Ingress Controller<br />
<br />
An Ingress Controller is an application which watches the Master Node's API Server for changes in the Ingress resources and updates the Layer 7 load balancer accordingly. Kubernetes has different Ingress Controllers, and, if needed, we can also build our own. GCE L7 Load Balancer and Nginx Ingress Controller are examples of Ingress Controllers.<br />
<br />
Minikube v0.14.0 and above ships the Nginx Ingress Controller setup as an add-on. It can be easily enabled by running the following command:<br />
<br />
$ minikube addons enable ingress<br />
<br />
Once the Ingress Controller is deployed, we can create an Ingress resource using the <code>kubectl create</code> command. For example, if we create an <code>example-ingress.yml</code> file with the content above, we can use the following command to create an Ingress resource:<br />
<br />
$ kubectl create -f example-ingress.yml<br />
<br />
With the Ingress resource we just created, we should now be able to access the <code>blue-service</code> and <code>green-service</code> Services using the <code>blue.example.com</code> and <code>green.example.com</code> URLs. As our current setup is on minikube, we will need to map those URLs to minikube's IP in the hosts file on our workstation:<br />
<br />
$ cat /etc/hosts<br />
127.0.0.1 localhost<br />
::1 localhost<br />
192.168.99.100 blue.example.com green.example.com <br />
<br />
Once this is done, we can now open blue.example.com and green.example.com in a browser and access the application.<br />
<br />
==Labels and Selectors==<br />
''[https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ Labels]'' are key-value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key-value labels defined. Each key must be unique for a given object.<br />
<pre><br />
"labels": {<br />
"key1" : "value1",<br />
"key2" : "value2"<br />
}<br />
</pre><br />
<br />
;Syntax and character set<br />
<br />
Labels are key-value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (<code>/</code>). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (<code>[a-z0-9A-Z]</code>) with dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (<code>.</code>), not longer than 253 characters in total, followed by a slash (<code>/</code>). If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (e.g. kube-scheduler, kube-controller-manager, kube-apiserver, kubectl, or other third-party automation) which add labels to end-user objects must specify a prefix. The <code>kubernetes.io/</code> prefix is reserved for Kubernetes core components.<br />
<br />
Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (<code>[a-z0-9A-Z]</code>) with dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between.<br />
<br />
;Label selectors<br />
<br />
Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).<br />
<br />
Via a label selector, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.<br />
<br />
The API currently supports two types of selectors: equality-based and set-based. A label selector can be made of multiple requirements which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical AND (<code>&&</code>) operator.<br />
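<br />
Newer resources, such as Jobs, Deployments, ReplicaSets, and DaemonSets, also support set-based requirements in their manifests via <code>matchLabels</code> and <code>matchExpressions</code>. A sketch (the label keys and values are illustrative):<br />
<pre><br />
selector:<br />
  matchLabels:<br />
    component: redis<br />
  matchExpressions:<br />
  - {key: tier, operator: In, values: [cache]}<br />
  - {key: environment, operator: NotIn, values: [dev]}<br />
</pre><br />
All of the requirements, from both <code>matchLabels</code> and <code>matchExpressions</code>, are ANDed together.<br />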
<br />
An empty label selector (that is, one with zero requirements) selects every object in the collection.<br />
<br />
A null label selector (which is only possible for optional selector fields) selects no objects.<br />
<br />
Note: the label selectors of two controllers must not overlap within a namespace, otherwise they will fight with each other.<br />
Note that labels are not restricted to pods. You can apply them to all sorts of objects, such as nodes or services.<br />
<br />
;Examples<br />
<br />
* Label a given node:<br />
$ kubectl label node k8s.worker1.local network=gigabit<br />
<br />
* With ''Equality-based'', one may write:<br />
$ kubectl get pods -l environment=production,tier=frontend<br />
<br />
* Using ''set-based'' requirements:<br />
$ kubectl get pods -l 'environment in (production),tier in (frontend)'<br />
<br />
* Implement the OR operator on values:<br />
$ kubectl get pods -l 'environment in (production, qa)'<br />
<br />
* Combining the ''exists'' operator with negative matching (pods that have an <code>environment</code> label whose value is not <code>frontend</code>):<br />
$ kubectl get pods -l 'environment,environment notin (frontend)'<br />
<br />
* Show the current labels on your pods:<br />
$ kubectl get pods --show-labels<br />
NAME      READY     STATUS    RESTARTS   AGE       LABELS<br />
busybox   1/1       Running   25         9d        <none><br />
nfs-pod   1/1       Running   16         6d        name=nfs-pod<br />
<br />
* Add a label to an already running/existing pod:<br />
$ kubectl label pods busybox owner=christoph<br />
pod "busybox" labeled<br />
$ kubectl get pods --show-labels<br />
NAME      READY     STATUS    RESTARTS   AGE       LABELS<br />
busybox   1/1       Running   25         9d        owner=christoph<br />
nfs-pod   1/1       Running   16         6d        name=nfs-pod<br />
<br />
* Select a pod by its label:<br />
$ kubectl get pods --selector owner=christoph<br />
#~OR~<br />
$ kubectl get pods -l owner=christoph<br />
NAME      READY     STATUS    RESTARTS   AGE<br />
busybox   1/1       Running   25         9d<br />
<br />
* Delete/remove a given label from a given pod:<br />
$ kubectl label pod busybox owner-<br />
pod "busybox" labeled<br />
$ kubectl get pods --show-labels<br />
NAME      READY     STATUS    RESTARTS   AGE       LABELS<br />
busybox   1/1       Running   25         9d        <none><br />
<br />
* Get all pods that belong to either the <code>production</code> ''or'' the <code>development</code> environment:<br />
$ kubectl get pods -l 'env in (production, development)'<br />
<br />
; Using Labels to select a Node on which to schedule a Pod:<br />
<br />
* Label a Node that uses an SSD as its primary disk:<br />
$ kubectl label node k8s.worker1.local hdd=ssd<br />
<br />
<pre><br />
$ cat << EOF >busybox.yml<br />
kind: Pod<br />
apiVersion: v1<br />
metadata:<br />
  name: busybox<br />
  namespace: default<br />
spec:<br />
  containers:<br />
  - name: busybox<br />
    image: busybox<br />
    command:<br />
    - sleep<br />
    - "300"<br />
    imagePullPolicy: IfNotPresent<br />
  restartPolicy: Always<br />
  nodeSelector:<br />
    hdd: ssd<br />
EOF<br />
</pre><br />
<br />
==Annotations==<br />
With ''[https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ Annotations]'', we can attach arbitrary, non-identifying metadata to objects, in a key-value format:<br />
<br />
<pre><br />
"annotations": {<br />
"key1" : "value1",<br />
"key2" : "value2"<br />
}<br />
</pre><br />
The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels.<br />
<br />
In contrast to Labels, annotations are not used to identify and select objects. Annotations can be used to:<br />
<br />
* Store build/release IDs, the git branch, etc.<br />
* Phone numbers of persons responsible or directory entries specifying where such information can be found<br />
* Pointers to logging, monitoring, analytics, audit repositories, debugging tools, etc.<br />
* Etc.<br />
<br />
For example, while creating a Deployment, we can add a description like the one below:<br />
<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: webserver<br />
  annotations:<br />
    description: Deployment based PoC dates 12 January 2018<br />
....<br />
....<br />
</pre><br />
<br />
We can look at annotations while describing an object:<br />
<br />
<pre><br />
$ kubectl describe deployment webserver<br />
Name:                   webserver<br />
Namespace:              default<br />
CreationTimestamp:      Fri, 12 Jan 2018 13:18:23 -0800<br />
Labels:                 app=webserver<br />
Annotations:            deployment.kubernetes.io/revision=1<br />
                        description=Deployment based PoC dates 12 January 2018<br />
...<br />
...<br />
</pre><br />
<br />
==Jobs and CronJobs==<br />
<br />
===Jobs===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#what-is-a-job Job]'' creates one or more pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the Job itself is complete. Deleting a Job will clean up the pods it created.<br />
<br />
A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).<br />
<br />
A Job can also be used to run multiple Pods in parallel (a sketch follows at the end of this section).<br />
<br />
; Example<br />
<br />
* Below is an example ''Job'' config. It computes π to 2000 places and prints it out. It takes around 10 seconds to complete.<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
  name: pi<br />
spec:<br />
  template:<br />
    spec:<br />
      containers:<br />
      - name: pi<br />
        image: perl<br />
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]<br />
      restartPolicy: Never<br />
  backoffLimit: 4<br />
</pre><br />
$ kubectl create -f ./job-pi.yml<br />
job "pi" created<br />
$ kubectl describe jobs/pi<br />
<pre><br />
Name:           pi<br />
Namespace:      default<br />
Selector:       controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
Labels:         controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
                job-name=pi<br />
Annotations:    <none><br />
Parallelism:    1<br />
Completions:    1<br />
Start Time:     Fri, 12 Jan 2018 13:25:23 -0800<br />
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed<br />
Pod Template:<br />
  Labels:  controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
           job-name=pi<br />
  Containers:<br />
   pi:<br />
    Image:  perl<br />
    Port:   <none><br />
    Command:<br />
      perl<br />
      -Mbignum=bpi<br />
      -wle<br />
      print bpi(2000)<br />
    Environment:  <none><br />
    Mounts:       <none><br />
  Volumes:        <none><br />
Events:<br />
  Type    Reason            Age   From            Message<br />
  ----    ------            ----  ----            -------<br />
  Normal  SuccessfulCreate  8s    job-controller  Created pod: pi-rfvvw<br />
</pre><br />
<br />
* Get the result of the Job run (i.e., the value of π):<br />
$ pods=$(kubectl get pods --show-all --selector=job-name=pi --output=jsonpath={.items..metadata.name})<br />
$ echo $pods<br />
pi-rfvvw<br />
$ kubectl logs ${pods}<br />
3.1415926535897932384626433832795028841971693...<br />
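<br />
As mentioned above, a Job can also run its Pods in parallel. A sketch with six completions and at most three Pods running at a time (the name, image, and counts are illustrative):<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
  name: parallel-demo<br />
spec:<br />
  completions: 6   # total number of successful Pods required<br />
  parallelism: 3   # maximum number of Pods running at once<br />
  template:<br />
    spec:<br />
      containers:<br />
      - name: worker<br />
        image: busybox<br />
        command: ["sh", "-c", "echo processing one work item"]<br />
      restartPolicy: Never<br />
</pre><br />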
<br />
===CronJobs===<br />
<br />
Support for creating ''Jobs'' at specified times/dates (i.e., cron) has been available since Kubernetes 1.4. See [https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/ here] for details.<br />
<br />
Below is an example ''CronJob''. Every minute, it runs a simple Job to print the current time and then echo a "hello" string:<br />
$ cat << EOF >cronjob.yml<br />
apiVersion: batch/v1beta1<br />
kind: CronJob<br />
metadata:<br />
  name: hello<br />
spec:<br />
  schedule: "*/1 * * * *"<br />
  jobTemplate:<br />
    spec:<br />
      template:<br />
        spec:<br />
          containers:<br />
          - name: hello<br />
            image: busybox<br />
            args:<br />
            - /bin/sh<br />
            - -c<br />
            - date; echo Hello from the Kubernetes cluster<br />
          restartPolicy: OnFailure<br />
EOF<br />
<br />
$ kubectl create -f cronjob.yml<br />
cronjob "hello" created<br />
<br />
$ kubectl get cronjob hello<br />
NAME      SCHEDULE      SUSPEND   ACTIVE    LAST SCHEDULE   AGE<br />
hello     */1 * * * *   False     0         <none>          11s<br />
<br />
$ kubectl get jobs --watch<br />
NAME               DESIRED   SUCCESSFUL   AGE<br />
hello-1515793140   1         1            7s<br />
<br />
$ kubectl get cronjob hello<br />
NAME      SCHEDULE      SUSPEND   ACTIVE    LAST SCHEDULE   AGE<br />
hello     */1 * * * *   False     0         22s             48s<br />
<br />
$ pods=$(kubectl get pods -a --selector=job-name=hello-1515793140 --output=jsonpath={.items..metadata.name})<br />
$ echo $pods<br />
hello-1515793140-plp8g<br />
<br />
$ kubectl logs $pods<br />
Fri Jan 12 21:39:07 UTC 2018<br />
Hello from the Kubernetes cluster<br />
<br />
* Cleanup<br />
$ kubectl delete cronjob hello<br />
<br />
==Quota Management==<br />
When there are many users sharing a given Kubernetes cluster, there is always a concern for fair usage. To address this concern, administrators can use the ''[https://kubernetes.io/docs/concepts/policy/resource-quotas/ ResourceQuota]'' object, which provides constraints that limit aggregate resource consumption per Namespace.<br />
<br />
We can have the following types of quotas per Namespace:<br />
<br />
* Compute Resource Quota: We can limit the total sum of compute resources (CPU, memory, etc.) that can be requested in a given Namespace.<br />
* Storage Resource Quota: We can limit the total sum of storage resources (PersistentVolumeClaims, requests.storage, etc.) that can be requested.<br />
* Object Count Quota: We can restrict the number of objects of a given type (pods, ConfigMaps, PersistentVolumeClaims, ReplicationControllers, Services, Secrets, etc.).<br />
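<br />
As a sketch, a single ResourceQuota object can combine these quota types (the namespace and the limits are illustrative):<br />
<pre><br />
apiVersion: v1<br />
kind: ResourceQuota<br />
metadata:<br />
  name: team-quota<br />
  namespace: develop<br />
spec:<br />
  hard:<br />
    requests.cpu: "4"<br />
    requests.memory: 8Gi<br />
    limits.cpu: "8"<br />
    limits.memory: 16Gi<br />
    persistentvolumeclaims: "10"<br />
    pods: "20"<br />
</pre><br />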
<br />
==Daemon Sets==<br />
In some cases, like collecting monitoring data from all nodes, or running a storage daemon on all nodes, etc., we need a specific type of Pod running on all nodes at all times. A ''[https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ DaemonSet]'' is the object that allows us to do just that. <br />
<br />
Whenever a node is added to the cluster, a Pod from a given DaemonSet is created on it. When the node dies, the respective Pods are garbage collected. If a DaemonSet is deleted, all Pods it created are deleted as well.<br />
<br />
Example DaemonSet:<br />
<pre><br />
kind: DaemonSet<br />
apiVersion: apps/v1<br />
metadata:<br />
  name: pause-ds<br />
spec:<br />
  selector:<br />
    matchLabels:<br />
      quiet: "pod"<br />
  template:<br />
    metadata:<br />
      labels:<br />
        quiet: "pod"<br />
    spec:<br />
      tolerations:<br />
      - key: node-role.kubernetes.io/master<br />
        effect: NoSchedule<br />
      containers:<br />
      - name: pause-container<br />
        image: k8s.gcr.io/pause:2.0<br />
</pre><br />
<br />
==Stateful Sets==<br />
The ''[https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/ StatefulSet]'' controller is used for applications which require a unique identity, such as a stable name, network identity, strict ordering, etc. (for example, a MySQL or etcd cluster).<br />
<br />
The StatefulSet controller provides identity and guaranteed ordering of deployment and scaling to Pods.<br />
<br />
Note: Before Kubernetes 1.5, the StatefulSet controller was referred to as ''PetSet''.<br />
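<br />
A minimal StatefulSet sketch (this assumes a headless Service named <code>nginx</code> already exists to provide the stable network identities):<br />
<pre><br />
apiVersion: apps/v1<br />
kind: StatefulSet<br />
metadata:<br />
  name: web<br />
spec:<br />
  serviceName: nginx  # the headless Service assumed above<br />
  replicas: 2<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:alpine<br />
</pre><br />
The Pods are then created in order, with stable names (<code>web-0</code>, <code>web-1</code>).<br />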
<br />
==Role Based Access Control (RBAC)==<br />
''[https://kubernetes.io/docs/admin/authorization/rbac/ Role-based access control]'' (RBAC) is an authorization mechanism for managing permissions around Kubernetes resources.<br />
<br />
Using the RBAC API, we define a role which contains a set of additive permissions. Within a Namespace, a role is defined using the Role object. For a cluster-wide role, we need to use the ClusterRole object.<br />
<br />
Once the roles are defined, we can bind them to a user or a set of users using ''RoleBinding'' and ''ClusterRoleBinding''.<br />
<br />
===Using RBAC with minikube===<br />
<br />
* Start up minikube with RBAC support:<br />
$ minikube start --kubernetes-version=v1.9.0 --extra-config=apiserver.Authorization.Mode=RBAC<br />
<br />
* Setup RBAC:<br />
<pre><br />
$ cat rbac-cluster-role-binding.yml<br />
# kubectl create clusterrolebinding add-on-cluster-admin \<br />
#   --clusterrole=cluster-admin --serviceaccount=kube-system:default<br />
#<br />
kind: ClusterRoleBinding<br />
apiVersion: rbac.authorization.k8s.io/v1alpha1<br />
metadata:<br />
  name: kube-system-sa<br />
subjects:<br />
- kind: Group<br />
  name: system:serviceaccounts:kube-system<br />
  apiGroup: rbac.authorization.k8s.io<br />
roleRef:<br />
  kind: ClusterRole<br />
  name: cluster-admin<br />
  apiGroup: rbac.authorization.k8s.io<br />
</pre><br />
<br />
<pre><br />
$ cat rbac-setup.yml <br />
apiVersion: v1<br />
kind: Namespace<br />
metadata:<br />
  name: rbac<br />
<br />
---<br />
apiVersion: v1<br />
kind: ServiceAccount<br />
metadata:<br />
  name: viewer<br />
  namespace: rbac<br />
<br />
---<br />
apiVersion: v1<br />
kind: ServiceAccount<br />
metadata:<br />
  name: admin<br />
  namespace: rbac<br />
</pre><br />
<br />
* Create a Role Binding:<br />
<pre><br />
# kubectl create rolebinding reader-binding \<br />
#   --role=reader \<br />
#   --serviceaccount=rbac:reader \<br />
#   --namespace=rbac<br />
#<br />
kind: RoleBinding<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  namespace: rbac<br />
  name: reader-binding<br />
roleRef:<br />
  apiGroup: rbac.authorization.k8s.io<br />
  kind: Role<br />
  name: reader<br />
subjects:<br />
- kind: ServiceAccount<br />
  name: reader<br />
  namespace: rbac<br />
</pre><br />
<br />
* Create a Role:<br />
<pre><br />
$ cat rbac-role.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  namespace: rbac  # must match the RoleBinding's namespace above<br />
  name: reader<br />
rules:<br />
- apiGroups: [""]<br />
  resources: ["*"]<br />
  verbs: ["get", "watch", "list"]<br />
</pre><br />
<br />
* Create an RBAC "core reader" Role with specific resources and "verbs" (i.e., the "core reader" role can "get", "watch", "list", etc. on specific resources, such as Pods, ConfigMaps, Jobs, and Deployments):<br />
<pre><br />
$ cat rbac-role-core-reader.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  name: core-reader<br />
rules:<br />
- apiGroups:<br />
  - ""<br />
  resources:<br />
  - pods<br />
  - configmaps<br />
  - secrets<br />
  verbs:<br />
  - get<br />
  - watch<br />
  - list<br />
- apiGroups:<br />
  - batch<br />
  - extensions<br />
  resources:<br />
  - jobs<br />
  - deployments<br />
  verbs:<br />
  - get<br />
  - watch<br />
  - list<br />
</pre><br />
<br />
* "Gotchas":<br />
<pre><br />
$ cat rbac-gotcha-1.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  name: gotcha-1<br />
rules:<br />
- nonResourceURLs:<br />
  - /healthz<br />
  verbs:<br />
  - get<br />
  - post<br />
- apiGroups:<br />
  - batch<br />
  - extensions<br />
  resources:<br />
  - deployments<br />
  verbs:<br />
  - "*"<br />
</pre><br />
The gotcha here: <code>nonResourceURLs</code> (like <code>/healthz</code>) are cluster-scoped, so rules for them belong in a ClusterRole; they have no effect in a namespaced Role like this one.<br />
<pre><br />
$ cat rbac-gotcha-2.yml <br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  name: gotcha-2<br />
rules:<br />
- apiGroups:<br />
  - ""<br />
  resources:<br />
  - secrets<br />
  verbs:<br />
  - "*"<br />
  resourceNames:<br />
  - "my_secret"<br />
- apiGroups:<br />
  - ""<br />
  resources:<br />
  - pods/logs<br />
  verbs:<br />
  - "get"<br />
</pre><br />
The gotchas here: <code>resourceNames</code> cannot restrict the "list", "watch", "create", or "deletecollection" verbs (so <code>verbs: "*"</code> grants more than it appears to), and the log subresource is <code>pods/log</code> (singular), not <code>pods/logs</code>.<br />
<br />
; Privilege escalation<br />
* You cannot create a Role or ClusterRole that grants permissions you do not have.<br />
* You cannot create a RoleBinding or ClusterRoleBinding that binds to a Role with permissions you do not have (unless you have been explicitly given "bind" permission on the role).<br />
<br />
* Grant explicit bind access:<br />
<pre><br />
kind: ClusterRole<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  name: role-grantor<br />
rules:<br />
- apiGroups: ["rbac.authorization.k8s.io"]<br />
  resources: ["rolebindings"]<br />
  verbs: ["create"]<br />
- apiGroups: ["rbac.authorization.k8s.io"]<br />
  resources: ["clusterroles"]<br />
  verbs: ["bind"]<br />
  resourceNames: ["admin", "edit", "view"]<br />
</pre><br />
<br />
===Testing RBAC permissions===<br />
<br />
* Example of RBAC not allowing a verb-noun:<br />
<pre><br />
$ kubectl auth can-i create pods<br />
no - Required "container.pods.create" permission.<br />
</pre><br />
<br />
* Example of RBAC allowing a verb-noun:<br />
<pre><br />
$ kubectl auth can-i create pods<br />
yes<br />
</pre><br />
<br />
* A more complex example:<br />
<pre><br />
$ kubectl auth can-i update deployments.apps \<br />
--subresource="scale" --as-group="$group" --as="$user" -n $ns<br />
</pre><br />
<br />
==Federation==<br />
With the ''[https://kubernetes.io/docs/concepts/cluster-administration/federation/ Kubernetes Cluster Federation]'' we can manage multiple Kubernetes clusters from a single control plane. We can sync resources across the clusters and have cross-cluster discovery. This allows us to do Deployments across regions and access them using a global DNS record.<br />
<br />
Federation is very useful when we want to build a hybrid solution, in which we can have one cluster running inside our private datacenter and another one on the public cloud. We can also assign weights for each cluster in the Federation, to distribute the load as per our choice.<br />
<br />
==Helm==<br />
To deploy an application, we use different Kubernetes manifests, such as Deployments, Services, Volume Claims, Ingress, etc. Sometimes, it can be tiresome to deploy them one by one. We can bundle all those manifests, after templatizing them into a well-defined format, along with other metadata. Such a bundle is referred to as a ''Chart''. These Charts can then be served via repositories, such as those that we have for rpm and deb packages. <br />
<br />
''[https://github.com/kubernetes/helm Helm]'' is a package manager (analogous to yum and apt) for Kubernetes, which can install/update/delete those Charts in the Kubernetes cluster.<br />
<br />
Helm has two components:<br />
<br />
* A client called helm, which runs on your user's workstation; and<br />
* A server called tiller, which runs inside your Kubernetes cluster.<br />
<br />
The client helm connects to the server tiller to manage Charts. Charts submitted for Kubernetes are available [https://github.com/kubernetes/charts here].<br />
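<br />
A typical Helm (v2) workflow sketch, using the public <code>stable</code> repository (the chart and release names are illustrative):<br />
$ helm init                               # installs tiller into the cluster<br />
$ helm repo update<br />
$ helm search mysql                       # find a chart<br />
$ helm install stable/mysql --name my-db  # deploy a release<br />
$ helm list<br />
$ helm delete my-db                       # remove the release<br />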
<br />
==Monitoring and logging==<br />
In Kubernetes, we have to collect resource usage data from Pods, Services, nodes, etc., to understand the overall resource consumption and to make decisions about scaling a given application. Two popular Kubernetes monitoring solutions are Heapster and Prometheus.<br />
<br />
[https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/ Heapster] is a cluster-wide aggregator of monitoring and event data, which is natively supported on Kubernetes. <br />
<br />
[https://prometheus.io/ Prometheus], now part of [https://www.cncf.io/ CNCF] (Cloud Native Computing Foundation), can also be used to scrape the resource usage from different Kubernetes components and objects. Using its client libraries, we can also instrument the code of our application.<br />
<br />
Another important aspect of troubleshooting and debugging is logging, in which we collect the logs from the different components of a given system. In Kubernetes, we can collect logs from different cluster components, objects, nodes, etc. The most common approach is to run [https://www.fluentd.org/ fluentd] with custom configuration as an agent on the nodes and ship the logs to [https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/ Elasticsearch]. fluentd is an open source data collector, which is also part of CNCF.<br />
<br />
[https://github.com/google/cadvisor cAdvisor] is an open source container resource usage and performance analysis agent. It auto-discovers all containers on a node and collects CPU, memory, file system, and network usage statistics. It provides overall machine usage by analyzing the "root" container on the machine. It exposes a simple UI for local containers on port 4194.<br />
<br />
==Security==<br />
===Configure network policies===<br />
A ''[https://kubernetes.io/docs/concepts/services-networking/network-policies/ Network Policy]'' is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.<br />
<br />
''NetworkPolicy'' resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.<br />
<br />
* Specification of how groups of pods may communicate<br />
* Use labels to select pods and define rules<br />
* Implemented by the network plugin<br />
* Pods are non-isolated by default<br />
* Pods are isolated when a Network Policy selects them<br />
<br />
;Example NetworkPolicy<br />
Create a "default" isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods:<br />
<pre><br />
apiVersion: networking.k8s.io/v1<br />
kind: NetworkPolicy<br />
metadata:<br />
  name: default-deny<br />
spec:<br />
  podSelector: {}<br />
  policyTypes:<br />
  - Ingress<br />
</pre><br />
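<br />
With a default-deny policy in place, you can then selectively allow traffic. A sketch that permits only Pods labelled <code>app=frontend</code> to reach Pods labelled <code>app=backend</code> on TCP port 80 (the labels and port are illustrative):<br />
<pre><br />
apiVersion: networking.k8s.io/v1<br />
kind: NetworkPolicy<br />
metadata:<br />
  name: allow-frontend<br />
spec:<br />
  podSelector:<br />
    matchLabels:<br />
      app: backend<br />
  policyTypes:<br />
  - Ingress<br />
  ingress:<br />
  - from:<br />
    - podSelector:<br />
        matchLabels:<br />
          app: frontend<br />
    ports:<br />
    - protocol: TCP<br />
      port: 80<br />
</pre><br />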
<br />
===TLS certificates for cluster components===<br />
Get [https://github.com/OpenVPN/easy-rsa easy-rsa].<br />
<br />
$ ./easyrsa init-pki<br />
$ MASTER_IP=10.100.1.2<br />
$ ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass<br />
<br />
$ cat rsa-request.sh<br />
<pre><br />
#!/bin/bash<br />
# Note: the quoted strings must concatenate into a single<br />
# --subject-alt-name argument (no spaces before the backslashes).<br />
./easyrsa \<br />
  --subject-alt-name="IP:${MASTER_IP},"\<br />
"DNS:kubernetes,"\<br />
"DNS:kubernetes.default,"\<br />
"DNS:kubernetes.default.svc,"\<br />
"DNS:kubernetes.default.svc.cluster,"\<br />
"DNS:kubernetes.default.svc.cluster.local" \<br />
  --days=10000 \<br />
  build-server-full server nopass<br />
</pre><br />
<br />
<pre><br />
pki/<br />
├── ca.crt<br />
├── certs_by_serial<br />
│ └── F3A6F7D34BC84330E7375FA20C8441DF.pem<br />
├── index.txt<br />
├── index.txt.attr<br />
├── index.txt.old<br />
├── issued<br />
│ └── server.crt<br />
├── private<br />
│ ├── ca.key<br />
│ └── server.key<br />
├── reqs<br />
│ └── server.req<br />
├── serial<br />
└── serial.old<br />
</pre><br />
<br />
* Figure out the paths of the current TLS certs/keys with the following command:<br />
<pre><br />
$ ps aux | grep [a]piserver | sed -n -e 's/^.*\(kube-apiserver \)/\1/p' | tr ' ' '\n'<br />
kube-apiserver<br />
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota<br />
--requestheader-extra-headers-prefix=X-Remote-Extra-<br />
--advertise-address=172.31.118.138<br />
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt<br />
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt<br />
--requestheader-username-headers=X-Remote-User<br />
--service-cluster-ip-range=10.96.0.0/12<br />
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key<br />
--secure-port=6443<br />
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key<br />
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname<br />
--requestheader-group-headers=X-Remote-Group<br />
--requestheader-allowed-names=front-proxy-client<br />
--service-account-key-file=/etc/kubernetes/pki/sa.pub<br />
--insecure-port=0<br />
--enable-bootstrap-token-auth=true<br />
--allow-privileged=true<br />
--client-ca-file=/etc/kubernetes/pki/ca.crt<br />
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt<br />
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key<br />
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt<br />
--authorization-mode=Node,RBAC<br />
--etcd-servers=http://127.0.0.1:2379<br />
</pre><br />
<br />
===Security Contexts===<br />
A ''[https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Security Context]'' defines privilege and access control settings for a Pod or Container. Security context settings include:<br />
<br />
* Discretionary Access Control: Permission to access an object, like a file, is based on user ID (UID) and group ID (GID).<br />
* Security Enhanced Linux (SELinux): Objects are assigned security labels.<br />
* Running as privileged or unprivileged.<br />
* Linux Capabilities: Give a process some privileges, but not all the privileges of the root user.<br />
* AppArmor: Use program profiles to restrict the capabilities of individual programs.<br />
* Seccomp: Filter a process's system calls.<br />
* AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This boolean directly controls whether the <code>no_new_privs</code> flag gets set on the container process. <code>AllowPrivilegeEscalation</code> is true always when the container is: 1) run as Privileged; or 2) has <code>CAP_SYS_ADMIN</code>.<br />
<br />
; Example #1<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: security-context-demo<br />
spec:<br />
  securityContext:<br />
    runAsUser: 1000<br />
    fsGroup: 2000<br />
  volumes:<br />
  - name: sec-ctx-vol<br />
    emptyDir: {}<br />
  containers:<br />
  - name: sec-ctx-demo<br />
    image: gcr.io/google-samples/node-hello:1.0<br />
    volumeMounts:<br />
    - name: sec-ctx-vol<br />
      mountPath: /data/demo<br />
    securityContext:<br />
      allowPrivilegeEscalation: false<br />
</pre><br />
<br />
==Taints and tolerations==<br />
[https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature Node affinity] is a property of pods that ''attracts'' them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite – they allow a node to ''repel'' a set of pods.<br />
<br />
[https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ Taints and tolerations] work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks the node such that the node should not accept any pods that do not tolerate the taints. Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.<br />
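<br />
For example (the taint key and value here are illustrative), taint a node and then add a matching toleration to a Pod spec:<br />
$ kubectl taint nodes k8s.worker1.local dedicated=gpu:NoSchedule<br />
<pre><br />
spec:<br />
  tolerations:<br />
  - key: "dedicated"<br />
    operator: "Equal"<br />
    value: "gpu"<br />
    effect: "NoSchedule"<br />
  ....<br />
</pre><br />
To remove the taint again:<br />
$ kubectl taint nodes k8s.worker1.local dedicated-<br />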
<br />
==Remove a node from a cluster==<br />
<br />
* On the k8s Master Node:<br />
k8s-master> $ kubectl drain k8s-worker-02 --ignore-daemonsets<br />
<br />
* On the k8s Worker Node (the one you wish to remove from the cluster):<br />
k8s-worker-02> $ kubeadm reset<br />
[preflight] Running pre-flight checks.<br />
[reset] Stopping the kubelet service.<br />
[reset] Unmounting mounted directories in "/var/lib/kubelet"<br />
[reset] Removing kubernetes-managed containers.<br />
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.<br />
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]<br />
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]<br />
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]<br />
<br />
==Networking==<br />
<br />
; Useful network ranges<br />
* Choose ranges for the Pods and Service CIDR blocks<br />
* Generally, any of the RFC-1918 ranges work well<br />
** 10.0.0.0/8<br />
** 172.16.0.0/12<br />
** 192.168.0.0/16<br />
<br />
Every Pod can communicate directly with every other Pod<br />
<br />
;K8s Node<br />
* A general purpose compute that has at least one interface<br />
** The host OS will have a real-world IP for accessing the machine<br />
** K8s Pods are given ''virtual'' interfaces connected to an internal network<br />
** Each node has a running network stack<br />
* Kube-proxy runs in the OS to control IPtables for:<br />
** Services<br />
** NodePorts<br />
<br />
;Networking substrate<br />
* Most k8s network stacks allocate subnets for each node<br />
** The network stack is responsible for arbitration of subnets and IPs<br />
** The network stack is also responsible for moving packets around the network<br />
* Pods have a unique, routable IP on the Pod CIDR block<br />
** The CIDR block is ''not'' accessible from outside the k8s cluster<br />
** The magic of IPtables allows the Pods to make outgoing connections<br />
* Ensure that k8s has the correct Pods and Service CIDR blocks<br />
<br />
The Pod network is not seen on the physical network (i.e., it is encapsulated; you will not be able to use <code>tcpdump</code> on it from the physical network)<br />
<br />
;Making the setup easier &mdash; CNI<br />
* Use the Container Network Interface (CNI)<br />
* Relieves k8s from having to have a specific network configuration<br />
* It is activated by supplying <code>--network-plugin=cni, --cni-conf-dir, --cni-bin-dir</code> to kubelet<br />
** Typical configuration directory: <code>/etc/cni/net.d</code><br />
** Typical bin directory: <code>/opt/cni/bin</code><br />
* Allows for multiple backends to be used: linux-bridge, macvlan, ipvlan, Open vSwitch, network stacks<br />
<br />
;Kubernetes services<br />
<br />
* Services are crucial for service discovery and distributing traffic to Pods<br />
* Services act as simple internal load balancers with VIPs<br />
** No access controls<br />
** No traffic controls<br />
* IPtables magically route to virtual IPs<br />
* Internally, Services are used as inter-Pod service discovery<br />
** Kube-DNS publishes DNS records (e.g., <code>nginx.default.svc.cluster.local</code>)<br />
* Services can be exposed in three different ways:<br />
*# ClusterIP<br />
*# LoadBalancer<br />
*# NodePort<br />
<br />
; kube-proxy<br />
* Each k8s node in the cluster runs a kube-proxy<br />
* Two modes: userspace and iptables<br />
** iptables is much more performant (userspace should no longer be used)<br />
* kube-proxy has the task of configuring iptables to expose each k8s service<br />
** iptables rules distribute traffic randomly across the endpoints<br />
<br />
===Network providers===<br />
<br />
In order for a CNI plugin to be considered a "[https://kubernetes.io/docs/concepts/cluster-administration/networking/ Network Provider]", it must provide (at the very least) the following:<br />
# All containers can communicate with all other containers without NAT<br />
# All nodes can communicate with all containers (and ''vice versa'') without NAT<br />
# The IP that a container sees itself as is the same IP that others see it as<br />
<br />
==Linux namespaces==<br />
<br />
* Control groups (cgroups)<br />
* Union File Systems<br />
<br />
==Kubernetes inbound node port requirements==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-align="center" bgcolor="#1188ee"<br />
!Protocol<br />
!Direction<br />
!Port range<br />
!Purpose<br />
!Used by<br />
!Notes<br />
|-<br />
|colspan="6" align="center" bgcolor="#eee" | '''Master node(s)'''<br />
|-<br />
| TCP || Inbound || 4149 || Default cAdvisor port used to query container metrics || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 6443<sup>*</sup> || Kubernetes API server || All<br />
|-<br />
| TCP || Inbound || 2379-2380 || etcd server client API || kube-apiserver, etcd<br />
|-<br />
| TCP || Inbound || 10250 || Kubelet API || Self, Control plane<br />
|-<br />
| TCP || Inbound || 10251 || kube-scheduler || Self<br />
|-<br />
| TCP || Inbound || 10252 || kube-controller-manager || Self<br />
|-<br />
| TCP || Inbound || 10255 || Read-only Kubelet API || ''(optional)'' || Security risk<br />
|-<br />
|colspan="6" align="center" bgcolor="#eee" | '''Worker node(s)'''<br />
|-<br />
| TCP || Inbound || 4149 || Default cAdvisor port used to query container metrics || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 10250 || Kubelet API || Self, Control plane<br />
|-<br />
| TCP || Inbound || 10255 || Read-only Kubelet API || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 30000-32767 || NodePort Services<sup>**</sup> || All<br />
|}<br />
</div><br />
<br clear="all"/><br />
<sup>**</sup> Default port range for NodePort Services.<br />
<br />
Any port numbers marked with <sup>*</sup> are overridable, so you will need to ensure any custom ports you provide are also open.<br />
<br />
Although etcd ports are included in master nodes, you can also host your own etcd cluster externally or on custom ports.<br />
<br />
The pod network plugin you use (see below) may also require certain ports to be open. Since this differs with each pod network plugin, please see the documentation for the plugins about what port(s) those need.<br />
<br />
==API versions==<br />
<br />
Below is a table showing which value to use for the <code>apiVersion</code> key for a given k8s primitive (note: all values are for k8s 1.8.0, unless otherwise specified):<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-align="center" bgcolor="#1188ee"<br />
!Primitive<br />
!apiVersion<br />
|-<br />
| Pod || v1<br />
|-<br />
| Deployment || apps/v1beta2<br />
|-<br />
| Service || v1<br />
|-<br />
| Job || batch/v1<br />
|-<br />
| Ingress || extensions/v1beta1<br />
|-<br />
| CronJob || batch/v1beta1<br />
|-<br />
| ConfigMap || v1<br />
|-<br />
| DaemonSet || apps/v1<br />
|-<br />
| ReplicaSet || apps/v1beta2<br />
|-<br />
| NetworkPolicy || networking.k8s.io/v1<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
You can get a list of all of the API versions supported by your k8s install with:<br />
$ kubectl api-versions<br />
<br />
==Troubleshooting==<br />
<br />
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns<br />
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}<br />
<br />
* If your container has previously crashed, you can access the previous container’s crash log with:<br />
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}<br />
<br />
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}<br />
<br />
==Miscellaneous commands==<br />
<br />
* Simple workflow (not a best practice; use manifest files (YAML) instead):<br />
$ kubectl run nginx --image=nginx:1.10.0<br />
$ kubectl expose deployment nginx --port 80 --type LoadBalancer<br />
$ kubectl get services # <- wait until public IP is assigned<br />
$ kubectl scale deployment nginx --replicas 3<br />
<br />
* Create an Nginx deployment with three replicas without using YAML:<br />
$ kubectl run nginx --image=nginx --replicas=3<br />
<br />
* Take a node out of service for maintenance:<br />
$ kubectl cordon k8s.worker1.local<br />
$ kubectl drain k8s.worker1.local --ignore-daemonsets<br />
<br />
* Return a given node to service after cordoning and "draining" it (e.g., after maintenance):<br />
$ kubectl uncordon k8s.worker1.local<br />
<br />
* Get a list of nodes in a format useful for scripting:<br />
$ kubectl get nodes -o jsonpath='{.items[*].metadata.name}'<br />
#~OR~<br />
$ kubectl get nodes -o go-template --template '<nowiki>{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get nodes -o json | jq -crM '.items[].metadata.name'<br />
#~OR~ (if using an older version of `jq`)<br />
$ kubectl get nodes -o json | jq '.items[].metadata.name' | tr -d '"'<br />
<br />
* Label a list of nodes:<br />
<pre><br />
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do<br />
  kubectl label nodes ${node} instancetype=ondemand;<br />
  kubectl label nodes ${node} "example.io/node-lifecycle"=od;<br />
done<br />
</pre><br />
<br />
* Delete a bunch of Pods in "Evicted" state:<br />
$ kubectl get pod -n develop | awk '/Evicted/{print $1}' | xargs kubectl delete pod -n develop<br />
#~OR~<br />
$ kubectl get po -a --all-namespaces -o json | \<br />
jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | <br />
"kubectl delete po \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c<br />
<br />
* Get a random node:<br />
$ NODES=($(kubectl get nodes -o json | jq -crM '.items[].metadata.name'))<br />
$ NUMNODES=${#NODES[@]}<br />
$ echo ${NODES[$[ $RANDOM % $NUMNODES ]]}<br />
<br />
* Get all recent events sorted by their timestamps:<br />
$ kubectl get events --sort-by='.metadata.creationTimestamp'<br />
<br />
* Get a list of all Pods in the default namespace sorted by Node:<br />
$ kubectl get po -o wide --sort-by=.spec.nodeName<br />
<br />
* Get the cluster IP for a service named "foo":<br />
$ kubectl get svc/foo -o jsonpath='{.spec.clusterIP}'<br />
<br />
* List all Services in a cluster and their node ports:<br />
$ kubectl get --all-namespaces svc -o json |\<br />
jq -r '.items[] | [.metadata.name,([.spec.ports[].nodePort | tostring ] | join("|"))] | @csv'<br />
<br />
* Print just the Pod names of those Pods with the label <code>app=nginx</code>:<br />
$ kubectl get --no-headers=true pods -l app=nginx -o custom-columns=:metadata.name<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o go-template --template '<nowiki>{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get --no-headers=true pods -l app=nginx -o name | awk -F "/" '{print $2}'<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o jsonpath='{.items[*].metadata.name}'<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o json | jq -crM '.items [] | .metadata.name'<br />
<br />
* Get a list of all container images used by the Pods in your default namespace:<br />
$ kubectl get pods -o go-template --template='<nowiki>{{range .items}}{{range .spec.containers}}{{.image}}{{"\n"}}{{end}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get pods -o go-template="<nowiki>{{range .items}}{{range .spec.containers}}{{.image}}|{{end}}{{end}}</nowiki>" | tr '|' '\n'<br />
<br />
* Get a list of Pods sorted by Node name:<br />
$ kubectl get po -o json | jq -r '.items | sort_by(.spec.nodeName)[] | [.spec.nodeName,.metadata.name] | @tsv'<br />
<br />
* List all Services in a cluster with their endpoints:<br />
$ kubectl get --all-namespaces svc -o json | \<br />
jq -r '.items[] | [.metadata.name,([.spec.ports[].nodePort | tostring ] | join("|"))] | @csv'<br />
<br />
* Get status transitions of each Pod in the default namespace:<br />
$ export tpl='{range .items[*]}{"\n"}{@.metadata.name}{range @.status.conditions[*]}{"\t"}{@.type}={@.status}{end}{end}'<br />
$ kubectl get po -o jsonpath="${tpl}" && echo<br />
<br />
cheddar-cheese-d6d6587c7-4bgcz Initialized=True Ready=True PodScheduled=True<br />
echoserver-55f97d5bff-pdv65 Initialized=True Ready=True PodScheduled=True<br />
stilton-cheese-6d64cbc79-g7h4w Initialized=True Ready=True PodScheduled=True<br />
<br />
* Get a list of all Pods in status "Failed":<br />
$ kubectl get pods -o go-template='<nowiki>{{range .items}}{{if eq .status.phase "Failed"}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}</nowiki>'<br />
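<br />
* On newer versions of <code>kubectl</code>, you can also delete (not just list) Pods in a given phase directly with a field selector:<br />
$ kubectl delete pods --field-selector=status.phase=Failed<br />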
<br />
* Get all users in all namespaces:<br />
$ kubectl get rolebindings --all-namespaces -o go-template \<br />
--template='<nowiki>{{range .items}}{{println}}{{.metadata.namespace}}={{range .subjects}}{{if eq .kind "User"}}{{.name}} {{end}}{{end}}{{end}}</nowiki>'<br />
<br />
* Get the memory limit assigned to a container in a given Pod:<br />
<pre><br />
$ kubectl get pod example-pod-name -n default \<br />
-o jsonpath="{.spec.containers[*].resources.limits}" <br />
</pre><br />
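<br />
* To compare limits across ''all'' Pods in the namespace, a custom-columns query along these lines also works (the column names are arbitrary):<br />
$ kubectl get pods -o custom-columns='NAME:.metadata.name,MEM_LIMIT:.spec.containers[*].resources.limits.memory'<br />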
<br />
* Get a Bash prompt showing your current Kubernetes context and namespace:<br />
<pre><br />
NORMAL="\[\033[00m\]"<br />
BLUE="\[\033[01;34m\]"<br />
RED="\[\e[1;31m\]"<br />
YELLOW="\[\e[1;33m\]"<br />
GREEN="\[\e[1;32m\]"<br />
PS1_WORKDIR="\w"<br />
PS1_HOSTNAME="\h"<br />
PS1_USER="\u"<br />
<br />
__kube_ps1()<br />
{<br />
CONTEXT=$(kubectl config current-context)<br />
NAMESPACE=$(kubectl config view -o jsonpath="{.contexts[?(@.name==\"${CONTEXT}\")].context.namespace}")<br />
  if [ -z "$NAMESPACE" ]; then<br />
NAMESPACE="default"<br />
fi<br />
if [ -n "$CONTEXT" ]; then<br />
case "$CONTEXT" in<br />
*prod*)<br />
echo "${RED}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
;;<br />
*test*)<br />
echo "${YELLOW}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
;;<br />
*)<br />
echo "${GREEN}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
;;<br />
esac<br />
fi<br />
}<br />
<br />
export PROMPT_COMMAND='PS1="${GREEN}${PS1_USER}@${PS1_HOSTNAME}${NORMAL}:$(__kube_ps1)${BLUE}${PS1_WORKDIR}${NORMAL}\$ "'<br />
</pre><br />
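<br />
To make the prompt permanent, append the snippet above to your <code>~/.bashrc</code> and reload it:<br />
$ source ~/.bashrc<br />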
<br />
===Client configuration===<br />
<br />
* Set up autocompletion in bash (the bash-completion package must be installed first):<br />
$ source <(kubectl completion bash)<br />
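<br />
* If you also use the common <code>k</code> alias for <code>kubectl</code>, completion can be extended to the alias:<br />
$ echo 'alias k=kubectl' >>~/.bashrc<br />
$ echo 'complete -o default -F __start_kubectl k' >>~/.bashrc<br />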
<br />
* View Kubernetes config:<br />
$ kubectl config view<br />
<br />
* View specific config items by JSON path:<br />
$ kubectl config view -o jsonpath='{.users[?(@.name == "k8s")].user.password}'<br />
<br />
* Set credentials for foo.kubernetes.com:<br />
$ kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword<br />
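<br />
* A context ties a cluster, a user, and (optionally) a namespace together. A minimal sketch of defining and switching to one (the context name "foo" is an arbitrary example):<br />
$ kubectl config set-context foo --cluster=foo.kubernetes.com --user=kubeuser --namespace=default<br />
$ kubectl config use-context foo<br />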
<br />
===Viewing / finding resources===<br />
<br />
* List all services in the namespace:<br />
$ kubectl get services<br />
<br />
* List all pods in all namespaces in wide format:<br />
$ kubectl get pods -o wide --all-namespaces<br />
<br />
* List all pods in JSON (or YAML) format:<br />
$ kubectl get pods -o json<br />
<br />
* Describe resource details (node, pod, svc):<br />
$ kubectl describe nodes my-node<br />
<br />
* List services sorted by name:<br />
$ kubectl get services --sort-by=.metadata.name<br />
<br />
* List pods sorted by restart count:<br />
$ kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'<br />
<br />
* Rolling update pods for frontend-v1:<br />
$ kubectl rolling-update frontend-v1 -f frontend-v2.json<br />
<br />
* Scale a ReplicaSet named "foo" to 3:<br />
$ kubectl scale --replicas=3 rs/foo<br />
<br />
* Scale a resource specified in "foo.yaml" to 3:<br />
$ kubectl scale --replicas=3 -f foo.yaml<br />
<br />
* Execute a command in every pod / replica:<br />
$ for i in 0 1; do kubectl exec foo-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done<br />
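<br />
* The same pattern works for any set of Pods selected by label (<code>app=nginx</code> is just an example here):<br />
$ for pod in $(kubectl get pods -l app=nginx -o name); do kubectl exec ${pod} -- hostname; done<br />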
<br />
* Get a list of ''all'' container IDs running in ''all'' Pods in ''all'' namespaces for a given Kubernetes cluster:<br />
<pre><br />
$ kubectl get pods --all-namespaces \<br />
-o jsonpath='{range .items[*]}{"pod: "}{.metadata.name}{"\n"}{range .status.containerStatuses[*]}{"\tname: "}{.containerID}{"\n\timage: "}{.image}{"\n"}{end}'<br />
<br />
# Example output:<br />
pod: cert-manager-848f547974-8m2k6<br />
name: containerd://358415173310a528a36ca2c19cdc3319f8fd96634c09957977767333b104d387<br />
image: quay.io/jetstack/cert-manager-controller:v1.5.3<br />
</pre><br />
<br />
===Manage resources===<br />
<br />
* Get documentation for pod or service:<br />
$ kubectl explain pods,svc<br />
<br />
* Create resource(s) like pods, services or DaemonSets:<br />
$ kubectl create -f ./my-manifest.yaml<br />
<br />
* Apply a configuration to a resource:<br />
$ kubectl apply -f ./my-manifest.yaml<br />
<br />
* Start a single instance of Nginx:<br />
$ kubectl run nginx --image=nginx<br />
<br />
* Create a secret with several keys:<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
name: mysecret<br />
type: Opaque<br />
data:<br />
  password: $(echo -n "s33msi4" | base64)<br />
  username: $(echo -n "jane" | base64)<br />
EOF<br />
</pre><br />
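<br />
* To read one of the keys back out (the value is base64-encoded, so decode it):<br />
$ kubectl get secret mysecret -o jsonpath='{.data.username}' | base64 -d<br />
jane<br />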
<br />
* Delete a resource:<br />
$ kubectl delete -f ./my-manifest.yaml<br />
<br />
===Monitoring and logging===<br />
<br />
* Deploy Heapster from Github repository:<br />
$ kubectl create -f deploy/kube-config/standalone/<br />
<br />
* Show metrics for nodes:<br />
$ kubectl top node<br />
<br />
* Show metrics for pods:<br />
$ kubectl top pod<br />
<br />
* Show metrics for a given pod and its containers:<br />
$ kubectl top pod pod_name --containers<br />
<br />
* Dump pod logs (STDOUT):<br />
$ kubectl logs pod_name<br />
<br />
* Stream pod container logs (STDOUT, multi-container case):<br />
$ kubectl logs -f pod_name -c my-container<br />
<br />
<!-- TODO: https://gist.github.com/so0k/42313dbb3b547a0f51a547bb968696ba --><br />
<br />
===Run tcpdump on containers running in Pods===<br />
<br />
* Find which node/host/IP the Pod in question is running on and also get the container ID:<br />
<pre><br />
$ kubectl describe pod busybox | grep -E "^Node:|Container ID: "<br />
Node: worker2/10.39.32.122<br />
Container ID: docker://a42cd31e62a905739b52d36b30eca5521fd250ac54280b43423027426b031a03<br />
<br />
#~OR~<br />
<br />
$ containerID=$(kubectl get po busybox -o jsonpath='{.status.containerStatuses[*].containerID}' | sed -e 's|docker://||g')<br />
$ hostIP=$(kubectl get po busybox -o jsonpath='{.status.hostIP}')<br />
</pre><br />
<br />
Log into the node/host running the Pod in question and then perform the following steps.<br />
<br />
* Get the virtual interface ID (note it will depend on which Container Network Interface you are using {e.g., veth, cali, etc.}):<br />
<pre><br />
$ docker exec a42cd31e62a905739b52d36b30eca5521fd250ac54280b43423027426b031a03 /bin/sh -c 'cat /sys/class/net/eth0/iflink'<br />
12<br />
<br />
# List all non-virtual interfaces:<br />
$ for iface in $(find /sys/class/net/ -type l ! -lname '*/devices/virtual/net/*' -printf '%f '); do echo "$iface is not virtual"; done<br />
ens192 is not virtual<br />
<br />
# Check if we are using veth or cali or something else:<br />
$ ls -1 /sys/class/net/ | awk '!/docker|lo|ens/{print substr($0,1,4);exit}'<br />
cali<br />
<br />
$ for i in /sys/class/net/veth*/ifindex; do grep -l 12 $i; done<br />
#~OR~<br />
$ for i in /sys/class/net/cali*/ifindex; do grep -l 12 $i; done<br />
/sys/class/net/cali12d4a061371/ifindex<br />
#~OR~<br />
$ echo $(find /sys/class/net/ -type l -lname '*/devices/virtual/net/*' -exec grep -l 12 {}/ifindex \;) | awk -F'/' '{print $5}'<br />
cali12d4a061371<br />
#~OR~<br />
$ ip link | grep ^12<br />
12: cali12d4a061371@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default<br />
#~OR~<br />
$ ip link | awk '/^12/{print $2}' | awk -F'@' '{print $1}'<br />
cali12d4a061371<br />
</pre><br />
<br />
* Now run [[tcpdump]] on this virtual interface (note: make sure you are running tcpdump on the ''same'' host as the Pod is running on):<br />
$ sudo tcpdump -i cali12d4a061371<br />
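<br />
* To capture to a file instead (e.g., for offline analysis in Wireshark; the output path here is arbitrary):<br />
$ sudo tcpdump -i cali12d4a061371 -w /tmp/pod-capture.pcap<br />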
<br />
; Self-signed certificates<br />
<br />
If you are using the latest version of <code>kubectl</code> and are running it against a k8s cluster built with a self-signed cert, you can get around any "x509" errors with:<br />
$ export GODEBUG=x509ignoreCN=0<br />
<br />
===API resources===<br />
<br />
* Get a list of all the resource types and their latest supported version:<br />
<pre><br />
$ time for kind in $(kubectl api-resources | tail -n +2 | awk '{print $1}'); do<br />
kubectl explain ${kind};<br />
done | grep -E "^KIND:|^VERSION:"<br />
<br />
KIND: Binding<br />
VERSION: v1<br />
KIND: ComponentStatus<br />
VERSION: v1<br />
KIND: ConfigMap<br />
VERSION: v1<br />
...<br />
<br />
real 1m20.014s<br />
user 0m52.732s<br />
sys 0m17.751s<br />
</pre><br />
<br />
* Note: if you just want a version for a single/given kind:<br />
<pre><br />
$ kubectl explain deploy | head -2<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
</pre><br />
<br />
===kubectl-neat===<br />
<br />
: See: https://github.com/itaysk/kubectl-neat<br />
: See: [[jq]]<br />
<br />
* To easily copy a certificate secret from one namespace to another namespace run:<br />
<pre><br />
$ SOURCE_NAMESPACE=<update-me><br />
$ DESTINATION_NAMESPACE=<update-me><br />
$ kubectl -n ${SOURCE_NAMESPACE} get secret kafka-client-credentials -o json |\<br />
kubectl neat |\<br />
jq 'del(.metadata["namespace"])' |\<br />
kubectl apply -n ${DESTINATION_NAMESPACE} -f -<br />
</pre><br />
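<br />
* If you have [https://krew.sigs.k8s.io/ krew] installed, <code>kubectl-neat</code> can be installed as a plugin with:<br />
$ kubectl krew install neat<br />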
<br />
===Get CPU/memory for each node===<br />
<br />
<pre><br />
for node in $(kubectl get nodes -o=jsonpath='{.items[*].metadata.name}'); do<br />
echo "NODE: ${node}"; kubectl describe node ${node} | grep -E '^ cpu |^ memory ';<br />
done<br />
</pre><br />
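<br />
A jsonpath-only alternative that prints each node's ''allocatable'' CPU and memory (no <code>describe</code> parsing needed):<br />
<pre><br />
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.cpu}{"\t"}{.status.allocatable.memory}{"\n"}{end}'<br />
</pre><br />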
<br />
===Get vCPU capacity===<br />
<br />
<pre><br />
$ kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"} \<br />
{.status.capacity.cpu}{\"\n\"}{end}"<br />
</pre><br />
<br />
==Miscellaneous examples==<br />
<br />
* Create a Namespace:<br />
<pre><br />
kind: Namespace<br />
apiVersion: v1<br />
metadata:<br />
name: my-namespace<br />
</pre><br />
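<br />
* Assuming the manifest above is saved as <code>my-namespace.yaml</code>, create it and make it the default namespace for the current context:<br />
$ kubectl apply -f my-namespace.yaml<br />
$ kubectl config set-context --current --namespace=my-namespace<br />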
<br />
; Testing the load balancing capabilities of a Service<br />
<br />
* Create a Deployment with two replicas of Nginx (i.e., 2 x Pods with identical containers, configuration, etc.):<br />
<pre><br />
$ cat << EOF >nginx-deploy.yml<br />
kind: Deployment<br />
apiVersion: apps/v1<br />
metadata:<br />
name: nginx-deploy<br />
spec:<br />
replicas: 2<br />
strategy:<br />
rollingUpdate:<br />
maxSurge: 1<br />
maxUnavailable: 0<br />
type: RollingUpdate<br />
selector:<br />
matchLabels:<br />
app: nginx<br />
template:<br />
metadata:<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f nginx-deploy.yml<br />
$ kubectl get deploy<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deploy 2 2 2 2 1h<br />
$ kubectl get po<br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deploy-8d68fb6cc-bspt8 1/1 Running 1 1h<br />
nginx-deploy-8d68fb6cc-qdvhg 1/1 Running 1 1h<br />
<br />
* Create a Service:<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
name: nginx-svc<br />
spec:<br />
ports:<br />
- port: 8080<br />
targetPort: 80<br />
protocol: TCP<br />
selector:<br />
app: nginx<br />
EOF<br />
<br />
$ kubectl get svc/nginx-svc<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
nginx-svc ClusterIP 10.101.133.100 <none> 8080/TCP 1h<br />
</pre><br />
<br />
* Overwrite the default index.html file (note: This is ''not'' persistent. The original default index.html file will be restored if the Pod fails and the Deployment brings up a new Pod and/or if you modify your Deployment {e.g., upgrade Nginx}. This is just for demonstration purposes):<br />
$ kubectl exec -it nginx-deploy-8d68fb6cc-bspt8 -- sh -c 'echo "pod-01" > /usr/share/nginx/html/index.html'<br />
$ kubectl exec -it nginx-deploy-8d68fb6cc-qdvhg -- sh -c 'echo "pod-02" > /usr/share/nginx/html/index.html'<br />
<br />
* Get the HTTP status code and server value from the header of a request to the Service endpoint:<br />
$ curl -Is 10.101.133.100:8080 | grep -E '^HTTP|Server'<br />
HTTP/1.1 200 OK<br />
Server: nginx/1.7.9 # <- This is the version of Nginx we defined in the Deployment above<br />
<br />
* Perform a GET request on the Service endpoint (ClusterIP+Port):<br />
<pre><br />
$ for i in $(seq 1 10); do curl -s 10.101.133.100:8080; done<br />
pod-02<br />
pod-01<br />
pod-02<br />
pod-02<br />
pod-02<br />
pod-01<br />
pod-02<br />
pod-02<br />
pod-02<br />
pod-02<br />
</pre><br />
Sometimes <code>pod-01</code> responded; sometimes <code>pod-02</code> responded.<br />
<br />
* Perform a GET on the Service endpoint 10,000 times and sum up which Pod responded for each request:<br />
<pre><br />
$ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c<br />
5018 pod-01 # <- number of times pod-01 responded to the request<br />
4982 pod-02 # <- number of times pod-02 responded to the request<br />
<br />
real 1m0.639s<br />
user 0m29.808s<br />
sys 0m11.692s<br />
</pre><br />
<br />
$ awk 'BEGIN{print 5018/(5018+4982);}'<br />
0.5018<br />
$ awk 'BEGIN{print 4982/(5018+4982);}'<br />
0.4982<br />
<br />
So, our Service is "load balancing" our two Nginx Pods in a roughly 50/50 fashion.<br />
<br />
In order to double-check that the Service is randomly selecting a Pod to serve the GET request, let's scale our Deployment from 2 to 3 replicas:<br />
$ kubectl scale deploy/nginx-deploy --replicas=3<br />
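<br />
* A quick way to confirm the Service now has three backends is to check its Endpoints object (it should list three <code>IP:port</code> pairs):<br />
$ kubectl get endpoints nginx-svc<br />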
<br />
<pre><br />
$ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c<br />
3392 pod-01<br />
3335 pod-02<br />
3273 pod-03<br />
<br />
real 0m59.537s<br />
user 0m25.932s<br />
sys 0m9.656s<br />
</pre><br />
$ awk 'BEGIN{print 3392/(3392+3335+3273);}'<br />
0.3392<br />
$ awk 'BEGIN{print 3335/(3392+3335+3273);}'<br />
0.3335<br />
$ awk 'BEGIN{print 3273/(3392+3335+3273);}'<br />
0.3273<br />
<br />
Sure enough. Each of the 3 Pods is serving the GET request roughly 33% of the time.<br />
<br />
; Query selections<br />
<br />
* Create a "query selection" file:<br />
<pre><br />
$ cat << EOF >cluster-nodes-health.txt<br />
Name Kernel InternalIP MemoryPressure DiskPressure PIDPressure Ready<br />
.metadata.name .status.nodeInfo.kernelVersion .status.addresses[0].address .status.conditions[0].status .status.conditions[1].status .status.conditions[2].status .status.conditions[3].status<br />
EOF<br />
</pre><br />
<br />
* Use the above "query selection" file:<br />
<pre><br />
$ kubectl get nodes -o custom-columns-file=cluster-nodes-health.txt<br />
Name Kernel InternalIP MemoryPressure DiskPressure PIDPressure Ready<br />
10.10.10.152 5.4.0-1084-aws 10.10.10.152 False False False False<br />
10.10.11.12 5.4.0-1092-aws 10.10.11.12 False False False False<br />
10.10.12.22 5.4.0-1039-aws 10.10.12.22 False False False False<br />
</pre><br />
<br />
==Example YAML files==<br />
<br />
* Basic Pod using busybox:<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: busybox<br />
namespace: default<br />
spec:<br />
containers:<br />
- name: busybox<br />
image: busybox<br />
command:<br />
- sleep<br />
- "3600"<br />
imagePullPolicy: IfNotPresent<br />
restartPolicy: Always<br />
</pre><br />
<br />
* Basic Pod using busybox, which also prints out environment variables (including the ones defined in the YAML):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: env-dump<br />
spec:<br />
containers:<br />
- name: busybox<br />
image: busybox<br />
command:<br />
- env<br />
env:<br />
- name: USERNAME<br />
value: "Christoph"<br />
- name: PASSWORD<br />
value: "mypassword"<br />
</pre><br />
$ kubectl logs env-dump<br />
...<br />
PASSWORD=mypassword<br />
USERNAME=Christoph<br />
...<br />
<br />
* Basic Pod using alpine:<br />
<pre><br />
kind: Pod<br />
apiVersion: v1<br />
metadata:<br />
name: alpine<br />
namespace: default<br />
spec:<br />
containers:<br />
- name: alpine<br />
image: alpine<br />
command:<br />
- /bin/sh<br />
- "-c"<br />
- "sleep 60m"<br />
imagePullPolicy: IfNotPresent<br />
restartPolicy: Always<br />
</pre><br />
<br />
* Basic Pod running Nginx:<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: nginx-pod<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx<br />
restartPolicy: Always<br />
</pre><br />
<br />
* Create a Job that calculates pi up to 2000 decimal places:<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
name: pi<br />
spec:<br />
template:<br />
spec:<br />
containers:<br />
- name: pi<br />
image: perl<br />
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]<br />
restartPolicy: Never<br />
backoffLimit: 4<br />
</pre><br />
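<br />
* Assuming the manifest above is saved as <code>pi-job.yaml</code>, run it and read the result from the Job's Pod logs:<br />
$ kubectl apply -f pi-job.yaml<br />
$ kubectl wait --for=condition=complete --timeout=120s job/pi<br />
$ kubectl logs job/pi | head -c 50<br />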
<br />
* Create a Deployment with two replicas of Nginx running:<br />
<pre><br />
apiVersion: apps/v1<br />
kind: Deployment<br />
metadata:<br />
name: nginx-deployment<br />
spec:<br />
selector:<br />
matchLabels:<br />
app: nginx<br />
replicas: 2 <br />
template:<br />
metadata:<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.9.1<br />
ports:<br />
- containerPort: 80<br />
</pre><br />
<br />
* Create a basic Persistent Volume, which uses NFS:<br />
<pre><br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
name: mypv<br />
spec:<br />
capacity:<br />
storage: 1Gi<br />
volumeMode: Filesystem<br />
accessModes:<br />
- ReadWriteMany<br />
persistentVolumeReclaimPolicy: Recycle<br />
nfs:<br />
path: /var/nfs/general<br />
server: 172.31.119.58<br />
readOnly: false<br />
</pre><br />
<br />
* Create a Persistent Volume Claim against the above PV:<br />
<pre><br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
name: nfs-pvc<br />
spec:<br />
accessModes:<br />
- ReadWriteMany<br />
resources:<br />
requests:<br />
storage: 1Gi<br />
</pre><br />
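<br />
* A minimal sketch of a Pod that mounts the above claim (the Pod name and mount path are arbitrary):<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nfs-test-pod<br />
spec:<br />
  containers:<br />
  - name: app<br />
    image: busybox<br />
    command: ["sleep", "3600"]<br />
    volumeMounts:<br />
    - name: data<br />
      mountPath: /data   # anything written here lands on the NFS share<br />
  volumes:<br />
  - name: data<br />
    persistentVolumeClaim:<br />
      claimName: nfs-pvc<br />
EOF<br />
</pre><br />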
<br />
* Create a Pod using a custom scheduler (i.e., not the default one):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: my-custom-scheduler<br />
annotations:<br />
scheduledBy: custom-scheduler<br />
spec:<br />
schedulerName: custom-scheduler<br />
containers:<br />
- name: pod-container<br />
image: k8s.gcr.io/pause:2.0<br />
</pre><br />
<br />
==Install k8s cluster manually in the Cloud==<br />
<br />
''Note: For this example, I will be using AWS and I will assume you already have 3 x EC2 instances running CentOS 7 in your AWS account. I will install Kubernetes 1.10.x.''<br />
<br />
* Disable services not supported (yet) by Kubernetes:<br />
$ sudo setenforce 0 # NOTE: Not persistent!<br />
#~OR~ Make persistent:<br />
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config<br />
<br />
$ sudo systemctl stop firewalld<br />
$ sudo systemctl mask firewalld<br />
$ sudo yum install -y iptables-services<br />
<br />
* Disable swap:<br />
$ sudo swapoff -a # NOTE: Not persistent!<br />
#~OR~ Make persistent:<br />
$ sudo vi /etc/fstab # comment out swap line<br />
$ sudo mount -a<br />
<br />
* Make sure routed traffic does not bypass iptables:<br />
$ cat << EOF | sudo tee /etc/sysctl.d/k8s.conf<br />
net.bridge.bridge-nf-call-ip6tables = 1<br />
net.bridge.bridge-nf-call-iptables = 1<br />
EOF<br />
$ sudo sysctl --system<br />
<br />
* Install <code>kubelet</code>, <code>kubeadm</code>, and <code>kubectl</code> on '''''all''''' nodes in your cluster (both Master and Worker nodes):<br />
<pre><br />
$ cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo<br />
[kubernetes]<br />
name=Kubernetes<br />
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch<br />
enabled=1<br />
gpgcheck=1<br />
repo_gpgcheck=1<br />
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg<br />
EOF<br />
</pre><br />
<br />
$ sudo yum install -y kubelet kubeadm kubectl<br />
$ sudo systemctl enable kubelet && sudo systemctl start kubelet<br />
<br />
* Configure cgroup driver used by kubelet on '''''all''''' nodes (both Master and Worker nodes):<br />
<br />
Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. Verify that your Docker cgroup driver matches the kubelet config:<br />
<br />
$ docker info | grep -i cgroup<br />
$ grep -i cgroup /etc/systemd/system/kubelet.service.d/10-kubeadm.conf<br />
<br />
If the Docker cgroup driver and the kubelet config do not match, change the kubelet config to match the Docker cgroup driver. The flag you need to change is <code>--cgroup-driver</code>. If it is already set, you can update like so:<br />
<br />
$ sudo sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf<br />
<br />
Otherwise, you will need to open the systemd file and add the flag to an existing environment line.<br />
<br />
Then restart kubelet:<br />
<br />
$ sudo systemctl daemon-reload<br />
$ sudo systemctl restart kubelet<br />
<br />
* Run <code>kubeadm</code> on Master node:<br />
<br />
K8s requires a Pod network to function. We are going to use Flannel, so we need to pass a flag to <code>kubeadm init</code> so k8s knows how to configure itself:<br />
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16<br />
<br />
Note: This command might take a fair amount of time to complete.<br />
<br />
Once it has completed, make note of the "<code>join</code>" command output by <code>kubeadm init</code> that looks something like the following ('''DO NOT RUN THE FOLLOWING COMMAND YET!'''):<br />
# kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash><br />
<br />
You will run that command on the other non-master nodes (aka the "Worker Nodes") to allow them to join the cluster. However, '''do not''' run that command on the worker nodes until you have completed all of the following steps.<br />
<br />
* Create a directory:<br />
$ mkdir -p $HOME/.kube<br />
<br />
* Copy the configuration files to a location usable by the local user:<br />
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config <br />
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config<br />
<br />
* In order for your pods to communicate with one another, you will need to install pod networking. We are going to use Flannel for our Container Network Interface (CNI) because it is easy to install and reliable. <br />
$ kubectl apply -f <nowiki>https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</nowiki><br />
$ kubectl apply -f <nowiki>https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml</nowiki><br />
<br />
* Make sure everything is coming up properly:<br />
$ kubectl get pods --all-namespaces --watch<br />
Once the <code>kube-dns-xxxx</code> containers are up (i.e., in Status "Running"), your cluster is ready to accept worker nodes.<br />
<br />
* On each of the Worker nodes, run the <code>sudo kubeadm join ...</code> command that <code>kubeadm init</code> created for you (see above).<br />
<br />
* On the Master Node, run the following command:<br />
$ kubectl get nodes --watch<br />
Once the Status of the Worker Nodes returns "Ready", your k8s cluster is ready to use.<br />
<br />
* Example output of successful Kubernetes cluster:<br />
<pre><br />
$ kubectl get nodes<br />
NAME STATUS ROLES AGE VERSION<br />
k8s-01 Ready master 13m v1.10.1<br />
k8s-02 Ready <none> 12m v1.10.1<br />
k8s-03 Ready <none> 12m v1.10.1<br />
</pre><br />
<br />
That's it! You are now ready to start deploying Pods, Deployments, Services, etc. in your Kubernetes cluster!<br />
<br />
==Bash completion==<br />
''Note: The following only works on newer versions of kubectl. I have tested that it works on version 1.9.1.''<br />
<br />
Add the following line to your <code>~/.bashrc</code> file:<br />
source <(kubectl completion bash)<br />
<br />
==Kubectl plugins==<br />
<br />
SEE: [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ Extend kubectl with plugins] for details.<br />
<br />
: FEATURE STATE: Kubernetes v1.11 (alpha)<br />
: FEATURE STATE: Kubernetes v1.15 (stable)<br />
<br />
This section shows you how to install and write extensions for <code>kubectl</code>. Usually called "plugins" or "binary extensions", this feature allows you to extend the default set of commands available in <code>kubectl</code> by adding new sub-commands to perform new tasks and extend the set of features available in the main distribution of <code>kubectl</code>.<br />
<br />
Get code [https://github.com/kubernetes/kubernetes/tree/master/pkg/kubectl/plugins/examples from here].<br />
<br />
<pre><br />
.kube/<br />
└── plugins<br />
└── aging<br />
├── aging.rb<br />
└── plugin.yaml<br />
</pre><br />
<br />
$ chmod 0700 .kube/plugins/aging/aging.rb<br />
<br />
* See options:<br />
<pre><br />
$ kubectl plugin aging --help<br />
Aging shows pods from the current namespace by age.<br />
<br />
Usage:<br />
kubectl plugin aging [flags] [options]<br />
</pre><br />
<br />
* Usage:<br />
<pre><br />
$ kubectl plugin aging<br />
The Magnificent Aging Plugin.<br />
<br />
nginx-deployment-67594d6bf6-5t8m9: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
<br />
nginx-deployment-67594d6bf6-6kw9j: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
<br />
nginx-deployment-67594d6bf6-d8dwt: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
</pre><br />
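<br />
With the stable plugin mechanism, a plugin is nothing more than an executable named <code>kubectl-&lt;name&gt;</code> somewhere on your <code>PATH</code>. A minimal sketch (the plugin name and install path are arbitrary examples):<br />
<pre><br />
$ cat <<'EOF' | sudo tee /usr/local/bin/kubectl-hello<br />
#!/bin/sh<br />
# Trivial kubectl plugin: prints a greeting plus the current context.<br />
echo "hello from the $(kubectl config current-context) cluster"<br />
EOF<br />
$ sudo chmod +x /usr/local/bin/kubectl-hello<br />
$ kubectl hello<br />
</pre><br />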
<br />
==Local Kubernetes==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="6" bgcolor="#EFEFEF" | '''Local Kubernetes Comparisons'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Feature<br />
!kind<br />
!k3d<br />
!minikube<br />
!Docker Desktop<br />
!Rancher Desktop<br />
|- <br />
| Free || yes || yes || yes || personal & small business use only* || yes<br />
|--bgcolor="#eeeeee"<br />
| Install || easy || easy || easy || easy || medium (you may encounter odd scenarios)<br />
|-<br />
| Ease of Use || medium || medium || medium || easy || easy<br />
|--bgcolor="#eeeeee"<br />
| Stability || stable || stable || stable || stable || stable<br />
|-<br />
| Cross-platform || yes || yes || yes || yes || yes<br />
|--bgcolor="#eeeeee"<br />
| CI Usage || yes || yes || yes || no || no<br />
|-<br />
| Multiple clusters || yes || yes || yes || no || no<br />
|--bgcolor="#eeeeee"<br />
| Podman support || yes || yes || yes || no || no<br />
|-<br />
| Host volumes mount support || yes || yes || yes (with some performance limitations) || yes || yes (only pre-defined paths)<br />
|--bgcolor="#eeeeee"<br />
| Kubernetes service port-forwarding/mapping || yes || yes || yes || yes || yes<br />
|-<br />
| Pull-through Docker mirror/proxy || yes || yes || no || yes (can reference locally available images) || yes (can reference locally available images)<br />
|--bgcolor="#eeeeee"<br />
| Custom CNI || yes (ex: calico) || yes (ex: flannel) || yes (ex: calico) || no || no<br />
|-<br />
| Features Gates || yes || yes || yes || yes (but not natively; requires hacky setup) || yes (but not natively; requires hacky setup)<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
[https://bmiguel-teixeira.medium.com/local-kubernetes-the-one-above-all-3aedbeb5f3f6 Source]<br />
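<br />
As a taste of the workflow, spinning up and tearing down a throwaway cluster with <code>kind</code> looks like this (the cluster name "dev" is arbitrary; kind prefixes contexts with <code>kind-</code>):<br />
<pre><br />
$ kind create cluster --name dev<br />
$ kubectl cluster-info --context kind-dev<br />
$ kind delete cluster --name dev<br />
</pre><br />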
<br />
==See also==<br />
* [[Kubernetes/the-hard-way|Kubernetes the Hard Way]]<br />
* [[Kubernetes/GKE|Google Kubernetes Engine]] (GKE)<br />
* [[Kubernetes/AWS|Kubernetes on AWS]] (EKS)<br />
* [[Kubeless]]<br />
* [[Helm]]<br />
<br />
==External links==<br />
* [http://kubernetes.io/ Official website]<br />
* [https://github.com/kubernetes/kubernetes Kubernetes code] &mdash; via GitHub<br />
===Playgrounds===<br />
* [https://www.katacoda.com/courses/kubernetes/playground Kubernetes Playground]<br />
* [https://labs.play-with-k8s.com Play with k8s]<br />
===Tools===<br />
* [https://github.com/kubernetes/minikube minikube] &mdash; Run Kubernetes locally<br />
* [https://kind.sigs.k8s.io/ kind] &mdash; '''K'''ubernetes '''IN''' '''D'''ocker (local clusters for testing Kubernetes)<br />
* [https://github.com/kubernetes/kops kops] &mdash; Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management<br />
* [https://kubernetes-incubator.github.io/kube-aws kube-aws] &mdash; a command-line tool to create/update/destroy Kubernetes clusters on AWS<br />
* [https://github.com/kubernetes-incubator/kubespray kubespray] &mdash; Deploy a production ready kubernetes cluster<br />
* [https://rook.io/ Rook.io] &mdash; File, Block, and Object Storage Services for your Cloud-Native Environments<br />
===Resources===<br />
* [https://kubernetes.io/docs/getting-started-guides/scratch/ Creating a Custom Cluster from Scratch]<br />
* [https://github.com/kelseyhightower/kubernetes-the-hard-way Kubernetes The Hard Way]<br />
* [http://k8sport.org/ K8sPort]<br />
* [https://k8s.af/ Kubernetes Failure Stories]<br />
<br />
===Training===<br />
* [https://kubernetes.io/training/ Official Kubernetes Training Website]<br />
** Kubernetes and Cloud Native Associate (KCNA)<br />
** Certified Kubernetes Application Developer (CKAD)<br />
** Certified Kubernetes Administrator (CKA)<br />
** Certified Kubernetes Security Specialist (CKS) [note: Candidates for CKS must hold a current Certified Kubernetes Administrator (CKA) certification to demonstrate they possess sufficient Kubernetes expertise before sitting for the CKS.]<br />
* [https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals Kubernetes Fundamentals] (LFS258)<br />
** ''[https://www.cncf.io/certification/expert/ Certified Kubernetes Administrator]'' (CKA) certification.<br />
* [https://killer.sh/ CKS / CKA / CKAD Simulator]<br />
* [https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hacked/ 11 Ways (Not) to Get Hacked]<br />
<br />
===Blog posts===<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727 Understanding kubernetes networking: pods] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-services-f0cb48e4cc82 Understanding kubernetes networking: services] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-ingress-1bc341c84078 Understanding kubernetes networking: ingress] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-68d061f7ab5b Kubernetes ConfigMaps and Secrets - Part 1] &mdash; by Sandeep Dinesh, 2017-07-13<br />
* [https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-part-2-3dc37111f0dc Kubernetes ConfigMaps and Secrets - Part 2] &mdash; by Sandeep Dinesh, 2017-08-08<br />
* [https://abhishek-tiwari.com/10-open-source-tools-for-highly-effective-kubernetes-sre-and-ops-teams/ 10 open-source Kubernetes tools for highly effective SRE and Ops Teams]<br />
* [https://www.ianlewis.org/en/tag/kubernetes Series of blog posts about k8s] &mdash; by Ian Lewis<br />
* [https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0 Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?] &mdash; by Sandeep Dinesh, 2018-03-11<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:DevOps]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Category:Travel_Log&diff=8278Category:Travel Log2023-07-20T01:41:58Z<p>Christoph: /* Miscellaneous (North America) */</p>
<hr />
<div>This category will be my, as yet, unorganised '''Travel Log''' to many places around the world. (Note: The following is very much an ''incomplete'' travel log.)<br />
<br />
== Auto ==<br />
<br />
===Berlin trip (2006)===<br />
* Monaco &rarr; Milano &rarr; Ljubljana &rarr; Rotterdam &rarr; Berlin &rarr; Copenhagen &rarr; Monaco: April 2006<br />
: [http://triptracker.net/trip/1165/ TripTracker]<br />
: 1-Apr-2006 (14h20): Monaco &rarr; Milano<br />
: 2-Apr-2006 (23h30): Milano &rarr; Ljubljana<br />
: 3-Apr-2006 &ndash; 5-Apr-2006: Slovenia (Ljubljana, Novo Mesto, Kranj, Postojna, Jesenice, etc.)<br />
: 5-Apr-2006 (12h30): |&larr; Austria (Villach)<br />
: 5-Apr-2006 (15h15): |&larr; Germany<br />
: 5-Apr-2006 (19h15): Stuttgart<br />
: 5-Apr-2006 (20h20): Karlsruhe<br />
: 5-Apr-2006 (23h30): Köln<br />
: 5-Apr-2006 (00h10): |&larr; The Netherlands<br />
: 5-Apr-2006 (02h00): Rotterdam<br />
: 7-Apr-2006 (12h00): |&rarr; Rotterdam<br />
: 7-Apr-2006 (14h45): |&larr; Germany<br />
: 7-Apr-2006 (17h00): Hannover<br />
: 7-Apr-2006 (18h30): Magdeburg<br />
: 7-Apr-2006 (20h00): Berlin<br />
: 8-Apr-2006 (15h30): |&rarr; Berlin<br />
: 8-Apr-2006 (18h00): Rostock<br />
: 8-Apr-2006 (19h30): Ferry (|&rarr; Germany from Rostock Harb.)<br />
: 8-Apr-2006 (21h15): Ferry (|&larr; Denmark at Gedsen)<br />
: 8-Apr-2006 (23h20): København<br />
: 9-Apr-2006 (06h30): |&rarr; København<br />
: 9-Apr-2006 (09h00): Ferry (|&rarr; Denmark from Gedsen)<br />
: 9-Apr-2006 (11h00): Ferry (|&larr; Germany at Rostock Harb.)<br />
: 9-Apr-2006 (13h30): |&larr; Berlin<br />
: 9-Apr-2006 (14h00): |&rarr; Berlin<br />
: 9-Apr-2006 (15h50): Dresden<br />
:10-Apr-2006 (00h45): |&larr; Slovenia<br />
:10-Apr-2006 (01h40): Ljubljana<br />
:10-Apr-2006 (02h40): Postojna<br />
:10-Apr-2006 (13h15): |&larr; Italy<br />
:10-Apr-2006 (15h00): Padova<br />
:10-Apr-2006 (15h40): Verona<br />
:10-Apr-2006 (18h50): Genova<br />
:10-Apr-2006 (20h35): |&larr; France<br />
:10-Apr-2006 (20h45): |&larr; Monaco<br />
<br />
===Canada trip (2001)===<br />
''Note: The total trip covered 11,893 km (7,390 miles).''<br />
*Corvallis, OR &rarr; Boston, MA &rarr; Quebec &rarr; Ontario &rarr; Manitoba &rarr; Saskatchewan &rarr; Alberta &rarr; British Columbia &rarr; Corvallis, OR<br />
** 01-Sep-2001 (??h??): |&rarr; Corvallis, OR<br />
** 06-Sep-2001 (15h45): |&larr; Massachusetts<br />
** 13-Sep-2001 (13h15): |&rarr; Westborough, MA<br />
** 13-Sep-2001 (17h46): Augusta, ME<br />
** 13-Sep-2001 (18h15): |&larr; CANADA (into Quebec)<br />
** 14-Sep-2001 (02h06): Grande Allee Est., Quebec<br />
** 14-Sep-2001 (15h01): Cap-Madeleine, PQ<br />
** 15-Sep-2001 (17h44): Thunder Bay, ON<br />
** 14-Sep-2001 (17h45): |&larr; Ontario<br />
** 14-Sep-2001 (20h03): Cobden, ON<br />
** 15-Sep-2001 (12h02): Sudbury, ON<br />
** 15-Sep-2001 (10h25): Wawa, ON<br />
** 15-Sep-2001 (22h01): Kenora, ON<br />
** 15-Sep-2001 (10h37): |&larr; Manitoba<br />
** 16-Sep-2001 (10h53): Brandon, MB<br />
** 16-Sep-2001 (12h50): |&larr; Saskatchewan<br />
** 16-Sep-2001 (16h09): Herbert, SK<br />
** 16-Sep-2001 (18h06): |&larr; Alberta<br />
** 16-Sep-2001 (23h00): |&larr; British Columbia<br />
** 17-Sep-2001 (00h30): |&larr; USA (into Idaho)<br />
** 17-Sep-2001 (03h36): Coeur d'Alene, ID<br />
** 17-Sep-2001 (05h30): |&larr; Oregon<br />
<br />
===Ireland trip (1999-2000)===<br />
* 26-Dec-1999 (??h??): Dublin, Ireland<br />
* 26-Dec-1999 (16h13): Lord Edward St., Dublin<br />
* 27-Dec-1999 (??h??): Kinlay House, Christchurch, 2-12 Lord Edward St., Dublin, Ireland<br />
* 2?-Dec-1999 (??h??): Kilkenny<br />
* 28-Dec-1999 (12h27): Patrick St., Cork<br />
* 28-Dec-1999 (17h12): Mallow, Co. Cork<br />
* 29-Dec-1999 (??h??): Co. Kerry<br />
* ??-Dec-1999 (??h??): Saratoga House (Bed & Breakfast), Muckross Road, Killarney, Ireland<br />
* 29-Dec-1999 (15h09): Chapel St., Limerick<br />
* 29-Dec-1999 (15h18): Eimear<br />
* 30-Dec-1999 (??h??): Ballybofey<br />
* 30-Dec-1999 (15h51): Greysteel<br />
* 30-Dec-1999 (??h??): O'Connell St., Sligo<br />
* 30-Dec-1999 (??h??): Petra, Galway<br />
* 30-Dec-1999 (??h??): Sligo<br />
* 30-Dec-1999 (??h??): The Linen House Backpackers Hostel, 18-20 Kent Street, Belfast, Ireland<br />
* 01-Jan-2000 (14h46): Arthur Sq., Belfast<br />
* 02-Jan-2000 (06h34): Dublin Airport<br />
<br />
===Miscellaneous (Europe)===<br />
* Budapest, Hungary &rarr; Dubrovnik, Croatia: June/July 2018 (round-trip)<br />
* ''The Cliffs of Møn'', DK: Oct-2005<br />
* Budapest, Hungary &rarr; Vienna, Austria &rarr; Salzburg, Austria &rarr; Chiemsee, Germany: Oct-1996 (round-trip)<br />
* Zagreb, Croatia &rarr; Ljubljana, Slovenia &rarr; Graz, Austria &rarr; Budapest, Hungary: Sep-1996<br />
* Zagreb, Croatia &rarr; Ljubljana, Slovenia: Sep-1996 (round-trip)<br />
* Budapest, Hungary &rarr; Zagreb, Croatia: Sep-1996<br />
* Budapest, Hungary &rarr; Vienna, Austria &rarr; Salzburg, Austria &rarr; Berchtesgaden, Germany &rarr; Innsbruck, Austria &rarr; Liechtenstein &rarr; Switzerland: Aug-1996 (round-trip)<br />
* Warsaw, Poland &rarr; Budapest, Hungary: September 1994<br />
* Budapest, Hungary &rarr; Slovakia (11-Nov-1993) &rarr; Warsaw, Poland: November 1993<br />
* Vienna, Austria &rarr; Budapest, Hungary: 28-Sep-1993<br />
<br />
===Miscellaneous (South America)===<br />
* Cuenca, Ecuador &rarr; Riobamba, Ecuador &rarr; Ambato, Ecuador &rarr; Quito, Ecuador: 1993 (round-trip)<br />
* Quito, Ecuador &#187; Ipiales, Colombia: 1993 (round-trip)<br />
* Guayaquil, Ecuador &rarr; Santo Domingo de Los Colorados, Ecuador &rarr; Quito, Ecuador: 1993<br />
* Guayaquil, Ecuador &rarr; Salinas, Ecuador: 1993 (round-trip)<br />
* Tumbes, Peru &rarr; Guayaquil, Ecuador: 21-Dec-1992<br />
<br />
===Miscellaneous (North America)===<br />
* Seattle, WA &#187; Chelan, WA &#187; Seattle, WA: July 2023 (576 km/358 mi)<br />
* Seattle, WA &#187; Cle Elum, WA &#187; Chelan, WA &#187; Republic, WA &#187; Leavenworth, WA &#187; Monroe, WA &#187; Seattle, WA: April 2023 (933 km/580 mi)<br />
* Seattle, WA &#187; Winthrop, WA &#187; Leavenworth, WA &#187; Issaquah, WA &#187; Seattle, WA: June 2022<br />
* Seattle, WA &#187; Winthrop, WA &#187; Tiger, WA &#187; Spokane, WA &#187; Seattle, WA: May 2022 (1,200 km/744 mi)<br />
* Seattle, WA &#187; Portland, OR &#187; Grants Pass, OR &#187; Crescent City, CA &#187; Redwood National Forest &#187; Newport, OR &#187; Astoria, OR &#187; Elma, WA &#187; Seattle, WA: November 2021 (1,881 km/1,169 mi)<br />
* Seattle, WA &#187; Mt Saint Helens &#187; Mt Adams &#187; Stonehenge Memorial &#187; Multnomah Falls &#187; Seattle, WA: September 2021 (914 km/568 mi)<br />
* Seattle, WA &#187; Walla Walla, WA &#187; Joseph, OR &#187; Lewiston, ID &#187; Grand Coulee, WA &#187; Seattle, WA: June 2021 (1,421 km/883 mi)<br />
* Seattle, WA &#187; Pendleton, OR &#187; Craters of the Moon National Monument & Preserve &#187; Idaho Falls, ID &#187; Jackson, WY &#187; Grand Teton National Park &#187; Yellowstone National Park &#187; Missoula, MT &#187; Coeur d'Alene, ID &#187; Seattle, WA: September 2020 (2,746 km/1,706 mi)<br />
* Seattle, WA &#187; Coeur d'Alene, ID &#187; Missoula, MT &#187; Glacier National Park, MT &#187; Seattle, WA: July 2019 (1,984 km/1,233 mi)<br />
* Seattle, WA &#187; Corvallis, OR: November 2018 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2017 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2016 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2015 (round-trip)<br />
* Texas &#187; Oklahoma &#187; Kansas &#187; Nebraska &#187; South Dakota &#187; Wyoming &#187; Montana &#187; Idaho &#187; Seattle, WA: September 2015 (4,000 km/2,486 mi)<br />
* Seattle, WA &#187; Oregon &#187; Idaho &#187; Utah &#187; Wyoming &#187; Colorado &#187; Kansas &#187; Oklahoma &#187; Texas: 11-16 May 2013<br />
* Seattle, WA &#187; Port Angeles, WA &#187; Hurricane Ridge, WA: 28-Dec-2012 (round-trip)<br />
* Seattle, WA &#187; Portland, OR: 4-Dec-2012 (round-trip)<br />
* Chicago, IL &#187; Milwaukee, WI &#187; Minneapolis, MN &#187; Fargo, ND &#187; Billings, MT &#187; Coeur d'Alene, ID &#187; Seattle, WA: 25-26 June 2012 (3,357 km/2,086 mi)<br />
* St. Louis, MO &#187; Chicago, IL: 31-Dec-2011<br />
* Chicago, IL &#187; St. Louis, MO: 5-Jul-2011<br />
* Milwaukee, WI &#187; Chicago, IL: 30-Jun-2011<br />
* Pittsburgh, PA &#187; New York City, NY: April 2005 (round-trip)<br />
* Pittsburgh, PA &#187; Bethlehem, PA &#187; Westborough, MA &#187; New York City, NY: December 2004 (round-trip)<br />
* Pittsburgh, PA &#187; Boston, MA: November 2004 (round-trip)<br />
* Corvallis, OR &#187; Salt Lake City, UT &#187; Houston, TX &#187; Atlanta, GA &#187; Pittsburgh, PA: September 2004<br />
* Corvallis, OR &#187; Boston, MA: 2001, 2002 (round-trip)<br />
* Corvallis, OR &#187; Vancouver, BC, Canada (round-trip)<br />
* Corvallis, OR &#187; Tijuana, Mexico: 7-Sep-1999 (round-trip)<br />
* Los Angeles, CA &#187; Corvallis, OR: January 1998<br />
* Houston, TX &#187; Milwaukee, WI &#187; Menominee, MI: May 1995 (round-trip)<br />
<br />
== Bus / Train / Ferry ==<br />
===Spain trip (2006)===<br />
* Monaco &#187; Cannes &#187; Marseille &#187; Montpellier St-Ro &#187; Barcelona; April 2006 (round-trip)<br />
** 24-Apr-06 18h35: |&rarr; Nice, France [SNCF train]<br />
** 24-Apr-06 19h00: Antibes, FR<br />
** 24-Apr-06 19h07: Cannes, FR<br />
** 24-Apr-06 19h30: B. sur-Mer, FR<br />
** 24-Apr-06 19h39: San Raphael-Valescure, FR<br />
** 24-Apr-06 20h14: Les Arcs-Drag., FR<br />
** 24-Apr-06 20h56: Toulon, FR<br />
** 24-Apr-06 21h35: Marseille, FR<br />
** 25-Apr-06 15h05: |&rarr; Marseille, FR<br />
** 25-Apr-06 16h16: Nîmes, FR<br />
** 25-Apr-06 17h21: Montpellier St-Ro, FR<br />
** 25-Apr-06 18h42: Béziers, FR<br />
** 25-Apr-06 19h35: Perpignan, FR<br />
** 25-Apr-06 20h15: Portbou, Spain (ES) [''border'']<br />
** 25-Apr-06 22h30: Barcelona, ES<br />
** 27-Apr-06 19h24: |&rarr; Barcelona, ES [Renfe train]<br />
** 27-Apr-06 22h05: Cerbere, FR [''border'']<br />
** 28-Apr-06 08h37: Nice, FR<br />
** 28-Apr-06 10h00: Monaco<br />
<br />
===Miscellaneous (Europe)===<br />
* Tallinn, Estonia &rarr; Helsinki, Finland: January 2020 (round-trip)<br />
* Lisbon, Portugal &rarr; Porto, Portugal: Nov-2016 (round-trip)<br />
* København, DK &#187; Berlin, D: 09-Apr-2006 [+Ferry]<br />
* Berlin, D &#187; København, DK: 08-Apr-2006 (15h15) [+Ferry]<br />
* Ljubljana, Slovenia &#187; Villach HBF, Austria: 18-Aug-1997<br />
* Stockholm C &#187; Oslo S: 15-Aug-1997 (SJ train)<br />
* Salzburg, Austria &#187; Ljubljana, Slovenia: 25-Aug-1997 (&#214;sterreichische Bundesbahnen train (&#214;BB))<br />
* Haslev, DK &#187; Næstved, DK: 24-Aug-1997 (DSB train)<br />
* København &#187; Stockholm C: 14-Aug-1997 (DSB train)<br />
* Oslo S &#187; Bergen: 16-Aug-1997<br />
* Næstved, DK &#187; Rødby Færge, DK: 24-Aug-1997<br />
* Salzburg HBF &#187; Villach HBF (&uuml;ber Schwarzach-St. veit Bad Gastein): 25-Aug-1997 (&#214;BB train)<br />
* Oslo S &#187; Trondheim: 18-Aug-1997<br />
* Grensen (Scandinavia): 16-Aug-1997<br />
* Abisko Turiststation - STF: 20-Aug-1997<br />
* Abisko Turiststation - STF: 21-Aug-1997<br />
* Germany: 24-Aug-1997 (DB train)<br />
* Stockholm S:T Eriksgatan: 15-Aug-1997<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: Jun-1997 (round-trip)<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: Mar-1997 (round-trip)<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: (28-Nov-1997/30-Nov-1997) (round-trip)<br />
* Budapest, Hungary &rarr; Ljubljana, Slovenia: 8-Nov-1996<br />
* Budapest, Hungary &rarr; Slovakia: 18-Aug-1995 (round-trip)<br />
* Budapest, Hungary &rarr; Vienna, Austria: 9-Feb-1995 (round-trip)<br />
* Moscow, Russia &rarr; Warsaw, Poland: Sep-1994<br />
* Moscow, Russia &rarr; Brest, Belarus: Aug-1994 (round-trip)<br />
* Moscow, Russia &rarr; Minsk, Belarus: Jul-1994 (round-trip)<br />
* Warsaw, Poland &#187; Moscow, Russia: Jun-1994<br />
* Warsaw, Poland &rarr; Vilnius, Lithuania &rarr; Riga, Latvia: (12-Jan-1994/??-Jan-1994) (round-trip)<br />
<br />
===Miscellaneous (South America)===<br />
* Arequipa, Peru &rarr; Lima, Peru: 1992<br />
* Arequipa, Peru &rarr; Iquique, Chile: (17-Jul-1992/20-Jul-1992) (round-trip)<br />
* Lima, Peru &rarr; Arequipa, Peru: 1992<br />
* Lima, Peru &rarr; La Paz, Bolivia: (19-May-1991/6-Jun-1991) (round-trip)<br />
* Lima, Peru &rarr; Quito, Ecuador: (29-Nov-1990/11-Dec-1990) (round-trip)<br />
* Lima, Peru &rarr; Quito, Ecuador: (6-Jul-1990/20-Jul-1990) (round-trip)<br />
<br />
==Flights==<br />
* Seattle, WA (SEA) ✈ Phoenix, AZ (PHX): March 2023 [RT]<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): February 2023 [RT]<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2022 [RT]<br />
* Seattle, WA (SEA) ✈ Milwaukee, WI (MKE): August 2022 [RT]<br />
* Kyiv, Ukraine (KBP) ✈ Frankfurt, Germany (FRA) ✈ Seattle, WA (SEA): December 2021<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD) ✈ Frankfurt, Germany (FRA) ✈ Kyiv, Ukraine (KBP): December 2021<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2021 [RT]<br />
* Memphis, TN (MEM) ✈ Atlanta, GA (ATL) ✈ Seattle, WA (SEA): June 2021<br />
* Seattle, WA (SEA) ✈ Salt Lake City, UT (SLC) ✈ Memphis, TN (MEM): June 2021<br />
* Seattle, WA (SEA) ✈ Milwaukee, WI (MKE): May 2021 [RT]<br />
* Tallinn, Estonia (TLL) ✈ Stockholm, Sweden (ARN) ✈ Newark, New Jersey (EWR) ✈ Seattle, WA (SEA): January 2020<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD) ✈ København, DK (CPH) ✈ Helsinki, Finland (HEL) ✈ Tallinn, Estonia (TLL): December 2019<br />
* Seattle, WA (SEA) ✈ San Francisco, CA (SFO): October 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Miami, FL (MIA): September 2019 [RT]<br />
* Seattle, WA (SEA) ✈ San Francisco, CA (SFO): September 2019 [RT]<br />
* Seattle, WA (SEA) ✈ San Jose, CA (SJC): August 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Denver, CO (DEN): May 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Charlotte, NC (CLT): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Santa Ana, CA (SNA): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ San Jose, CA (SJC): September 2018 [RT]<br />
* Budapest, Hungary (BUD) ✈ Brussels, Belgium (BRU) ✈ Newark, New Jersey (EWR) ✈ Seattle, WA (SEA): July 2018<br />
* Seattle, WA (SEA) ✈ Toronto, Canada (YYZ) ✈ Budapest, Hungary (BUD): June 2018<br />
* Seattle, WA (SEA) ✈ Reno, NV (RNO): May 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Reykjavík, Iceland (KEF): December 2017 [RT]<br />
* Seattle, WA (SEA) ✈ Kona, Hawaii (KOA): September 2017 [RT]<br />
* Seattle, WA (SEA) ✈ Salt Lake City, UT (SLC): August 2017 [RT]<br />
* Amsterdam, NL (AMS) ✈ Seattle, WA (SEA): November 2016<br />
* Lisbon, Portugal ✈ Amsterdam, NL (AMS): November 2016<br />
* Paris, FR (CDG) ✈ Lisbon, Portugal: November 2016<br />
* Seattle, WA (SEA) ✈ Paris, FR (CDG): November 2016<br />
* Seattle, WA (SEA) ✈ Portland, OR (PDX): November 2016 [RT]<br />
* Seattle, WA (SEA) ✈ Las Vegas, NV (LAS): June 2016 [RT]<br />
* Houston, TX (IAH) ✈ Seattle, WA (SEA): September 2015 [RT]<br />
* Houston, TX (IAH) ✈ San Francisco, CA (SFO): August 2015 [RT]<br />
* Houston, TX (IAH) ✈ Madison, WI (MSN): March 2015 [RT]<br />
* Houston, TX (IAH) ✈ Amsterdam, NL (AMS): March 2015 [RT]<br />
* Seattle, WA (SEA) ✈ Milwaukee (MKE): June 2011<br />
* Seattle, WA (SEA) ✈ Phoenix, AZ (PHX) ✈ Chicago, IL (ORD): October 2010 [RT]<br />
* Seattle, WA (SEA) ✈ Los Angeles, CA (LAX): December 2007 [RT]<br />
* København, DK (CPH) ✈ Seattle, WA (SEA): June 2006<br />
* Heathrow, UK ✈ København, DK (CPH): June 2006<br />
* Nice, FR ✈ Heathrow, UK: June 2006<br />
* København, DK (CPH) ✈ Nice, FR (NCE): February 2006<br />
* Washington Dulles ✈ København, DK: August 2005<br />
* Pittsburgh, PA (PIT) ✈ Washington Dulles: August 2005<br />
* Portland, OR (PDX) ✈ Pittsburgh, PA (PIT): Summer 2004 [RT]<br />
* Eugene, OR ✈ Houston, TX (IAH): February 2002 [RT]<br />
* Portland, OR (PDX) ✈ Boston, MA: December 2002 [RT]<br />
* Seattle, WA (SEA) ✈ Portland, OR (PDX): January 2000<br />
* Amsterdam, NL (AMS) ✈ Seattle, WA (SEA): January 2000<br />
* Dublin, Ireland ✈ Amsterdam, NL (AMS): January 2000<br />
* Amsterdam (AMS) ✈ Dublin, Ireland: December 1999<br />
* Seattle, WA (SEA) ✈ Amsterdam, NL (AMS): December 1999<br />
* Portland, OR (PDX) ✈ Seattle, WA (SEA): December 1999<br />
* Chicago (ORD) ✈ Los Angeles (LAX): December 1997<br />
* Green Bay, WI (GRB) ✈ Chicago (ORD): December 1997<br />
* Chicago (ORD) ✈ Green Bay, WI (GRB): December 1997<br />
* Rome, Italy (FCO) ✈ Chicago, IL (ORD): December 1997<br />
* Trieste, Italy (TRS) ✈ Rome, Italy (FCO): December 1997<br />
* Houston, TX (IAH) ✈ Budapest, Hungary (BUD): July 1996<br />
* Narita, Japan ✈ Los Angeles (LAX) ✈ Houston, TX: June 1996<br />
* Narita, Japan ✈ Los Angeles (LAX) ✈ Houston, TX: March 1996 [RT]<br />
* Narita, Japan ✈ Taipei, Taiwan: December 1995 [RT]<br />
* Los Angeles, CA (LAX) ✈ Narita, Japan: October 1995<br />
* Houston, TX (IAH) ✈ Los Angeles (LAX): October 1995<br />
* Budapest, Hungary (BUD) ✈ Houston, TX (IAH): September 1995<br />
* Budapest, Hungary (BUD) ✈ Houston, TX (IAH): May 1995 [RT]<br />
* Paris, FR (CDG) ✈ Vienna, Austria: September 1993<br />
* Quito, Ecuador ✈ Caracas, Venezuela (CCS) ✈ Paris, France: 1993<br />
* Lima, Peru ✈ Tumbes, Peru: December 1992<br />
* Boston, MA ✈ Miami, FL ✈ Lima, Peru: <br />
* Amsterdam, NL (AMS) ✈ Chicago, IL (ORD): <br />
* Boston, MA ✈ Amsterdam, NL (AMS):<br />
<br />
== Individual Places ==<br />
=== Ireland ===<br />
* Dublin<br />
** '''Dublin''' (Baile &Aacute;tha Cliath)<br />
* Kildare<br />
** Naas<br />
* Laois<br />
* Carlow<br />
** Carlow (Ceatharlach)<br />
** Royal Oak<br />
* Kilkenny<br />
** '''Kilkenny''' (Cill Chainnigh)<br />
** Callan<br />
* Tipperary<br />
** Glenbower<br />
** Clonmel (Cluain Meala)<br />
** Cahir<br />
** Burncourt<br />
* Cork<br />
** Fermoy<br />
** '''Cork''' (Coroaigh)<br />
** Fota<br />
** Cobh (An C&oacute;bh)<br />
** '''Blarney'''<br />
** Macroom<br />
** Ballyvourney<br />
* Kerry<br />
** ''Derrynasaggart Mts''<br />
** Poulgorm Br<br />
** '''Killarney''' (Cill Airne)<br />
** Farranfore<br />
* Limerick<br />
** Abbeyfeale<br />
** ''Mullaghareirk Mts''<br />
** Newcastle West<br />
** Croagh<br />
** '''Limerick''' (Luimneach)<br />
* Clare<br />
** Bunratty<br />
** Ennis (Inis)<br />
** Ennistymon<br />
** Liscannor<br />
** ''Cliffs of Moher''<br />
** Doolin<br />
** Lisdoonvarna<br />
** Ballyvaughan<br />
** Bealaclugga<br />
** Burren<br />
* Galway<br />
** Kinvarra<br />
** Ballinderreen<br />
** Oranmore<br />
** '''Galway''' (Gaillimh)<br />
** Claregalway<br />
** Tuam<br />
* Mayo<br />
** Claremorris<br />
** Cloonfallagh<br />
** Charlestown<br />
* Sligo<br />
** Curry<br />
** Tubbercurry<br />
** Collooney<br />
** '''Sligo''' (Sligeach)<br />
** ''Dartry Mts''<br />
* Leitrim<br />
* Donegal<br />
** Bundoran<br />
** Ballyshannon<br />
** Donegal (D&uacute;n na nGall)<br />
** Ballybofey<br />
** Clady<br />
* Tyrone<br />
** '''Strabane''' (Northern Ireland)<br />
* Londonderry<br />
** Derry (Londonderry)<br />
** Eglinton<br />
** Ballykelly<br />
** Limavady<br />
** Coleraine<br />
* Antrim<br />
** Derrykelghan<br />
** Moss-side<br />
** Ballycastle<br />
** ''Antrim Hills''<br />
** Ballintoy<br />
** ''Carrick-a-Rede Rope Bridge''<br />
** ''Giant's Causeway''<br />
** Craignamaddy<br />
** Ballymoney<br />
** Ballymena<br />
** Antrim<br />
** ''Lough Neagh'' (lake)<br />
** Dunadry<br />
** Newtownabbey<br />
** '''Belfast'''<br />
* Down<br />
** Lisburn<br />
** Banbridge<br />
* Armagh<br />
** Newry<br />
* Louth<br />
** Dundalk (Dun Dealgan)<br />
** Dunleen<br />
** Drogheda (Droichead Atha)<br />
* Meath<br />
** Julianstown<br />
* Dublin<br />
** Balbriggan<br />
** Swords<br />
<br />
[[Category:World Travels]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Category:Books&diff=8277Category:Books2023-06-28T04:31:29Z<p>Christoph: /* Titles (completed) */</p>
<hr />
<div>My love of books runs deep. I try to read for at least an hour every day (books unrelated to my studies). This category will contain a list of the books I have read or [[Summer Reading List|am reading]].<br />
<br />
==Titles (completed)==<br />
''Note: This is a list of books I have read in their entirety. It is nowhere near complete, and it is in no particular order.''<br />
<br />
#'''''From Dawn to Decadence: 1500 to the Present: 500 Years of Western Cultural Life''''' &mdash; by Jacques Barzun<br />
#'''''The Invention of Science: The Scientific Revolution from 1500 to 1750''''' &mdash; by David Wootton<br />
#'''''Predictably Irrational: The Hidden Forces That Shape Our Decisions''''' &mdash; by Dan Ariely (2008)<br />
#'''''The Tyranny of Experts: Economists, Dictators, and the Forgotten Rights of the Poor''''' &mdash; by William Easterly<br />
#'''''The Origins of Political Order: From Prehuman Times to the French Revolution''''' &mdash; by Francis Fukuyama<br />
#'''''Political Order and Political Decay: From the Industrial Revolution to the Globalization of Democracy''''' &mdash; by Francis Fukuyama<br />
#'''''Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World''''' &mdash; by Bruce Schneier<br />
#'''''Superintelligence: Paths, Dangers, Strategies''''' &mdash; by Nick Bostrom<br />
#'''''Smashing Physics''''' &mdash; by Jon Butterworth<br />
#'''''The History of the Ancient World: From the Earliest Accounts to the Fall of Rome''''' &mdash; by Susan Wise Bauer<br />
#'''''The History of the Medieval World: From the Conversion of Constantine to the First Crusade''''' &mdash; by Susan Wise Bauer<br />
#'''''The History of the Renaissance World: From the Rediscovery of Aristotle to the Conquest of Constantinople''''' &mdash; by Susan Wise Bauer<br />
#'''''The Well Educated Mind: A Guide to the Classical Education You Never Had''''' &mdash; by Susan Wise Bauer<br />
#'''''The Story of Western Science: From the Writings of Aristotle to the Big Bang Theory''''' &mdash; by Susan Wise Bauer (2015)<br />
#'''''Countdown to Zero Day''''' &mdash; by Kim Zetter<br />
#'''''The Revenge of Geography''''' &mdash; by Robert D. Kaplan<br />
#'''''The Master of Disguise''''' &mdash; by Antonio J. Mendez<br />
#'''''To Explain the World: The Discovery of Modern Science''''' &mdash; by Steven Weinberg (2015)<br />
#'''''The Fall of the Roman Empire''''' &mdash; by Peter Heather<br />
#'''''The Shadow Factory''''' &mdash; by James Bamford<br />
#'''''Operation Shakespeare''''' &mdash; by John Shiffman<br />
#'''''No Place to Hide''''' &mdash; by Glenn Greenwald<br />
#'''''Neanderthal Man: In Search of Lost Genomes''''' &mdash; by Svante Pääbo (2014)<br />
#'''''Constantine the Emperor''''' &mdash; by David Potter<br />
#'''''A Troublesome Inheritance''''' &mdash; by Nicholas Wade<br />
#'''''The Selfish Gene''''' &mdash; by Richard Dawkins<br />
#'''''The 4-Hour Workweek: Escape 9-5, Live Anywhere, and Join the New Rich''''' &mdash; by [http://www.fourhourworkweek.com/blog/about/ Timothy Ferriss] (2007)<br />
#'''''Hackers: Heroes of the Computer Revolution''''' &mdash; by Steven Levy<br />
#'''''Wealth, Poverty, and Politics: An International Perspective''''' &mdash; Thomas Sowell<br />
#'''''The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win''''' &mdash; by Gene Kim, Kevin Behr, George Spafford<br />
#'''''Paper: Paging Through History''''' &mdash; by Mark Kurlansky<br />
#'''''Salt: A World History''''' &mdash; by Mark Kurlansky<br />
#'''''Guns, Germs, and Steel: The Fates of Human Societies''''' &mdash; by Jared Diamond (1997)<br />
#'''''Collapse: How Societies Choose to Fail or Succeed''''' &mdash; by Jared Diamond (2005)<br />
#'''''The Better Angels of Our Nature: Why Violence Has Declined''''' &mdash; by Steven Pinker<br />
#'''''How to Win Friends & Influence People''''' &mdash; by Dale Carnegie (1936)<br />
#'''''[[The True Believer: Thoughts on the Nature of Mass Movements]]''''' &mdash; Eric Hoffer (1951)<br />
#'''''An Economic History of the World since 1400''''' &mdash; by Professor Donald J. Harreld<br />
#'''''The End of the Cold War 1985-1991''''' &mdash; by Robert Service<br />
#'''''Iron Kingdom: The Rise and Downfall of Prussia, 1600-1947''''' &mdash; by Christopher Clark<br />
#'''''[https://www.goodreads.com/book/show/12158480-why-nations-fail Why Nations Fail: The Origins of Power, Prosperity, and Poverty]''''' &mdash; by Daron Acemoğlu and James A. Robinson (2012)<br />
#'''''The Six Wives of Henry VIII''''' &mdash; by Alison Weir (1991)<br />
#'''''The Demon-Haunted World: Science as a Candle in the Dark''''' &mdash; by Carl Sagan (1996)<br />
#'''''Dark Territory: The Secret History of Cyber War''''' &mdash; by Fred Kaplan (2016)<br />
#'''''A Brief History of Britain 1066-1485''''' &mdash; by Nicholas Vincent (2012)<br />
#'''''The History of Science: 1700-1900''''' &mdash; by Professor Frederick Gregory (2003)<br />
#'''''Heart of Europe: A History of the Holy Roman Empire''''' &mdash; by Peter H. Wilson (2016)<br />
#'''''[[The Story of Civilization]] - Volume 2: The Life of Greece''''' &mdash; by Will Durant (1939)<br />
#'''''The Story of Civilization - Volume 3: Caesar and Christ''''' &mdash; by Will Durant (1944)<br />
#'''''The Story of Civilization - Volume 4: The Age of Faith''''' &mdash; by Will Durant (1950)<br />
#'''''Red Sparrow''''' &mdash; by Jason Matthews (2013)<br />
#'''''Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time''''' &mdash; by Dava Sobel (1995)<br />
#'''''The Medici: Power, Money, and Ambition in the Italian Renaissance''''' &mdash; by Paul Strathern (2016)<br />
#'''''The Venetians: A New History: From Marco Polo to Casanova''''' &mdash; by Paul Strathern (2013)<br />
#'''''The Rise of Athens: The Story of the World's Greatest Civilization''''' &mdash; by Anthony Everitt (2016)<br />
#'''''Red Mars''''' &mdash; by Kim Stanley Robinson (1993)<br />
#'''''The Clockwork Universe: Isaac Newton, The Royal Society, and the Birth of the Modern World''''' &mdash; by Edward Dolnick (2011)<br />
#'''''The Skeptics' Guide to the Universe: How to Know What's Really Real in a World Increasingly Full of Fake''''' &mdash; by Steven Novella (2018)<br />
#'''''New Thinking: From Einstein to Artificial Intelligence, the Science and Technology That Transformed Our World''''' &mdash; by Dagogo Altraide (2019)<br />
#'''''Flashpoints: The Emerging Crisis in Europe''''' &mdash; by George Friedman (2015)<br />
#'''''The War on Science: Who's Waging It, Why It Matters, What We Can Do About It''''' &mdash; by Shawn Lawrence Otto (2016)<br />
#'''''Permanent Record''''' &mdash; by Edward Snowden (2019)<br />
#'''''Mythos: The Greek Myths Reimagined''''' &mdash; by Stephen Fry (2019)<br />
#'''''Heroes: The Greek Myths Reimagined''''' &mdash; by Stephen Fry (2020)<br />
#'''''Troy: The Greek Myths Reimagined''''' &mdash; by Stephen Fry (2021)<br />
#'''''I Contain Multitudes: The Microbes Within Us and a Grander View of Life''''' &mdash; by Ed Yong (2016)<br />
#'''''How to Read a Book''''' &mdash; by Mortimer J. Adler and Charles Van Doren (1940)<br />
#'''''The Order: A Novel''''' &mdash; by Daniel Silva (2020)<br />
#'''''How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need''''' &mdash; by Bill Gates (2020)<br />
#'''''The Horse, the Wheel, and Language: How Bronze-Age Riders from the Eurasian Steppes Shaped the Modern World''''' &mdash; by David W. Anthony (2007)<br />
#'''''The Map of Knowledge: A Thousand-Year History of How Classical Ideas Were Lost and Found''''' &mdash; by Violet Moller (2019)<br />
#'''''Sapiens: A Brief History of Humankind''''' &mdash; by Yuval Noah Harari (2015)<br />
#'''''The Ascent of Money: A Financial History of the World''''' &mdash; by Niall Ferguson (2008)<br />
#'''''Civilization: The West and the Rest''''' &mdash; by Niall Ferguson (2011)<br />
#'''''Empire: How Britain Made the Modern World''''' &mdash; by Niall Ferguson (2017)<br />
#'''''The Square and the Tower: Networks and Power, from the Freemasons to Facebook''''' &mdash; by Niall Ferguson (2018)<br />
#'''''The House of Rothschild, Volume 1: Money's Prophets: 1798-1848''''' &mdash; by Niall Ferguson (2019)<br />
#'''''Doom: The Politics of Catastrophe''''' &mdash; by Niall Ferguson (2021)<br />
#'''''The Accidental Superpower: The Next Generation of American Preeminence and the Coming Global Disorder''''' &mdash; by Peter Zeihan (2014)<br />
#'''''The Strange Death of Europe: Immigration, Identity, Islam''''' &mdash; by Douglas Murray (2017)<br />
#'''''The War on the West''''' &mdash; by Douglas Murray (2022)<br />
#'''''12 Rules for Life: An Antidote to Chaos''''' &mdash; by Jordan B. Peterson (2018)<br />
#'''''The Historian''''' &mdash; by Elizabeth Kostova (2009)<br />
#'''''The Battle of Bretton Woods: John Maynard Keynes, Harry Dexter White, and the Making of a New World Order''''' &mdash; by Benn Steil (2013)<br />
#'''''The Gates of Europe: A History of Ukraine''''' &mdash; by Serhii Plokhy (2015)<br />
<br />
==Titles (textbooks)==<br />
''Note: These are some of the textbooks that I not only read in their entirety whilst in university, but also studied thoroughly. This is very much an incomplete list.''<br />
<br />
#'''''X-ray Structure Determination''''' &mdash; by Stout and Jensen<br />
#'''''Inferring Phylogenies''''' &mdash; by Joseph Felsenstein, Sinauer Associates, Inc. (2003)<br />
#'''''A Biologist's Guide to Analysis of DNA Microarray Data''''' &mdash; by Knudsen S<br />
#'''''Molecular Cell Biology''''' &mdash; by Scott MP, Matsudaira P, Lodish H, Darnell J, Zipursky L, Kaiser CA, Berk A, and Krieger M. W. H. Freeman, 5th Edition (2003)<br />
#'''''Guide to Analysis of DNA Microarray Data''''' &mdash; by Knudsen S, 2nd Edition (2004)<br />
#'''''General Chemistry''''' &mdash; by Darrell D. Ebbing and Steven D. Gammon, Houghton Mifflin Company, Boston, 6th Edition (1999)<br />
#'''''Organic Chemistry''''' &mdash; by Paula Yurkanis Bruice, Prentice Hall, New Jersey, 3rd Edition (2001)<br />
#'''''Principles and Techniques for an Integrated Chemistry Laboratory''''' &mdash; by David A. Aikens, ''et al.'', Waveland Press, Inc., Prospect Heights (1984)<br />
#'''''Physical Chemistry''''' &mdash; by Peter Atkins and Julio de Paula, W.H. Freeman and Company, New York, 7th Edition (2002)<br />
#'''''Biochemistry''''' &mdash; by Christopher K. Mathews, K. E. van Holde, and Kevin G. Ahern, Addison Wesley Longman, San Francisco, 3rd Edition (2000)<br />
#'''''Biology''''' &mdash; by Neil A. Campbell, The Benjamin/Cummings Publishing Company, Inc., Redwood City, 5th Edition (1999)<br />
#'''''Essential Cell Biology''''' &mdash; by Bruce Alberts, ''et al.'', Garland Publishing, Inc. New York (1998)<br />
#'''''Genetics: From Genes to Genomes''''' &mdash; by Leland H. Hartwell, ''et al.'', McGraw-Hill Companies, Inc. Boston (2000)<br />
#'''''Evolution: An Introduction''''' &mdash; by Stephen C. Stearns and Rolf F. Hoekstra, Oxford University Press, Oxford (2000)<br />
#'''''Physics for Scientists and Engineers''''' &mdash; by Raymond A. Serway and Robert J. Beichner, Saunders College Publishing, Philadelphia, 5th Edition (2000)<br />
#'''''Physical Biochemistry''''' &mdash; by Kensal E. van Holde, W. Curtis Johnson, and P. Shing Ho, Prentice Hall, New Jersey (1998)<br />
#'''''Object-Oriented Software Development Using Java''''' &mdash; by Xiaoping Jia, Addison-Wesley, 2nd Edition<br />
#'''''Calculus''''' &mdash; by James Stewart<br />
#'''''Calculus: Early Transcendentals''''' &mdash; by James Stewart<br />
#'''''Single Variable Calculus: Early Transcendentals''''' &mdash; by James Stewart<br />
<br />
==Titles (uncategorized)==<br />
''Note: These are some of my favourite books that I have read. I have read others, but these stood out to me. This does not mean, in any way, that I necessarily agree with everything these books have to say; they just interested me.''<br />
#'''''The History of the Decline and Fall of the Roman Empire''''' &mdash; by Edward Gibbon (1776-1788) [http://www.gutenberg.org/browse/authors/g#a375][http://en.wikipedia.org/wiki/Outline_of_The_History_of_the_Decline_and_Fall_of_the_Roman_Empire]<br />
#'''''The House of Intellect''''' &mdash; by Jacques Barzun<br />
#'''''[http://librivox.org/thus-spake-zarathustra-by-friedrich-nietzsche/ Also sprach Zarathustra]''''' ("Thus Spoke Zarathustra") &mdash; by Friedrich Nietzsche (1883-5)<br />
#'''''Jenseits von Gut und Böse''''' ("Beyond Good and Evil") &mdash; by Friedrich Nietzsche (1886)<br />
#'''''Zur Genealogie der Moral''''' ("On the Genealogy of Morals") &mdash; by Friedrich Nietzsche (1887)<br />
#'''''Götzen-Dämmerung''''' ("Twilight of the Idols") &mdash; by Friedrich Nietzsche (1888)<br />
#'''''[http://librivox.org/the-antichrist-by-nietzsche/ Der Antichrist]''''' ("The Antichrist") &mdash; by Friedrich Nietzsche (1888)<br />
#'''''Ecce Homo''''' &mdash; by Friedrich Nietzsche (1888)<br />
#'''''Vom Nutzen und Nachtheil der Historie für das Leben''''' ("On the Use and Abuse of History for Life") &mdash; by Friedrich Nietzsche (1874)<br />
#'''''Die Traumdeutung''''' ("The Interpretation of Dreams") &mdash; by Sigmund Freud (1899)<br />
#'''''Das Ich und das Es''''' ("The Ego and the Id") &mdash; by Sigmund Freud (1923)<br />
#'''''Die Zukunft einer Illusion''''' ("The Future of an Illusion") &mdash; by Sigmund Freud (1927) <br />
#'''''Das Unbehagen in der Kultur''''' ("Civilization and Its Discontents") &mdash; by Sigmund Freud (1929)<br />
#'''''[[:wikipedia:A History of the English-Speaking Peoples|A History of the English-Speaking Peoples]]''''' &mdash; by Winston Churchill (1956–58)<br />
#'''''The Notebooks of Don Rigoberto''''' &mdash; by Mario Vargas Llosa<br />
#'''''Die Waffen nieder!''''' ("Lay Down Your Arms!") &mdash; Baroness Bertha von Suttner (1889)<br />
#'''''Europe's Optical Illusion''''' (also: "The Great Illusion") &mdash; Sir Norman Angell (1909)<br />
#'''''Night''''' &mdash; by Elie Wiesel (1960)<br />
#'''''The End of Faith: Religion, Terror, and the Future of Reason''''' &mdash; by Sam Harris<br />
#'''''The Lexus and the Olive Tree: Understanding Globalization''''' &mdash; by Thomas L. Friedman<br />
#'''''The World Is Flat: A Brief History of the Twenty-first Century''''' &mdash; Thomas L. Friedman<br />
#'''''The Case For Goliath: How America Acts As The World's Government in the Twenty-first Century''''' &mdash; by Michael Mandelbaum<br />
#'''''Caesar's Commentaries: On the Gallic War And on the Civil War''''' &mdash; by Julius Caesar<br />
#'''''Cem Escovadas Antes de Ir para Cama''''' ("One Hundred Strokes of the Brush before Bed") &mdash; by Melissa Panarello<br />
#'''''Coryat's Crudities: Hastily gobled up in Five Moneth's Travels''''' &mdash; by Thomas Coryat (1611)<br />
#'''''Italian Hours''''' &mdash; by Henry James (1909)<br />
#'''''Italienische Reise''''' ("Italian Journey") &mdash; by Johann Wolfgang von Goethe (1816/1817).<br />
#'''''Diarios de motocicleta''''' ("The Motorcycle Diaries") &mdash; by Che Guevara (1951).<br />
#'''''The Prince of Tides''''' &mdash; by Pat Conroy (1986).<br />
#'''''Il Nome Della Rosa''''' ("The Name of the Rose") &mdash; by Umberto Eco (1980).<br />
#'''''Il Pendolo di Foucault''''' ("Foucault's Pendulum") &mdash; by Umberto Eco (1988).<br />
#'''''The Book of the Courtier''''' ("Il Cortegiano") &mdash; by Baldassare Castiglione (1528) [http://en.wikipedia.org/wiki/Sprezzatura].<br />
#'''''One Hundred Years of Solitude''''' &mdash; by Gabriel García Márquez<br />
#'''''The Unbearable Lightness of Being: A Novel''''' &mdash; by Milan Kundera<br />
#'''''The Book of Laughter and Forgetting''''' &mdash; by Milan Kundera<br />
#'''''Masters of Rome''''' (series) &mdash; by Colleen McCullough<br />
#'''''The Wishing Game''''' &mdash; by Patrick Redmond<br />
#'''''The Measure Of All Things: The Seven-Year Odyssey and Hidden Error That Transformed the World''''' &mdash; by Ken Alder (2002)<br />
#'''''De la démocratie en Amérique''''' ("On Democracy in America") &mdash; by Alexis de Tocqueville (1835)<br />
#'''''The Anatomy of Revolution''''' &mdash; by Crane Brinton (1938)<br />
#'''''God and Gold: Britain, America, and the Making of the Modern World''''' &mdash; by Walter Russell Mead (2007)<br />
#'''''Black Mass: Apocalyptic Religion and the Death of Utopia''''' &mdash; by John Gray (2007)<br />
#'''''The Grand Chessboard: American Primacy and Its Geostrategic Imperatives''''' &mdash; by Zbigniew Brzezinski (1998)<br />
#'''''Kim''''' &mdash; by Rudyard Kipling (1901)<br />
#'''''The Lotus and the Wind''''' &mdash; by John Masters<br />
<br />
==Authors (uncategorized)==<br />
*[[wikipedia:Aldous Huxley|Aldous Huxley]] &mdash; [[Wikiquote:Aldous Huxley]]<br />
*[[wikipedia:Edgar Allan Poe|Edgar Allan Poe]] &mdash; [[Wikiquote:Edgar Allan Poe]]<br />
*[[wikipedia:Oscar Wilde|Oscar Wilde]] &mdash; [[Wikiquote:Oscar Wilde]]<br />
*[[wikipedia:George Orwell|George Orwell]] &mdash; [[Wikiquote:George Orwell]]<br />
*[[wikipedia:William Shakespeare|William Shakespeare]] &mdash; [[Wikiquote:William Shakespeare]]<br />
*[[wikipedia:Thomas Jefferson|Thomas Jefferson]] &mdash; [[Wikiquote:Thomas Jefferson]]<br />
*[[wikipedia:Mark Antony|Mark Antony]] &mdash; [[Wikiquote:Mark Antony]]<br />
*[[wikipedia:Jane Austen|Jane Austen]] &mdash; [[Wikiquote:Jane Austen]] ([http://en.wikipedia.org/wiki/Free_indirect_speech])<br />
*[[wikipedia:Albert Einstein|Albert Einstein]] &mdash; [[Wikiquote:Albert Einstein]]<br />
*[[Friedrich Nietzsche]] &mdash; [[Wikiquote:Friedrich Nietzsche]]<br />
*[[wikipedia:Sigmund Freud|Sigmund Freud]] &mdash; [[Wikiquote:Sigmund Freud]]<br />
*[[wikipedia:Plato|Plato]] &mdash; [[Wikiquote:Plato]]<br />
*[[wikipedia:Aristotle|Aristotle]] &mdash; [[Wikiquote:Aristotle]]<br />
*[[wikipedia:Baruch Spinoza|Baruch Spinoza]] (Benedictus de Spinoza; 1632–1677) &mdash; [[Wikiquote:Baruch Spinoza]]<br />
*[[wikipedia:Georg Wilhelm Friedrich Hegel|Georg Wilhelm Friedrich Hegel]] &mdash; [[Wikiquote:Georg Wilhelm Friedrich Hegel]]<br />
*[[wikipedia:Niccolò Machiavelli|Niccolò Machiavelli]] &mdash; [[Wikiquote:Niccolò Machiavelli]]<br />
*[[wikipedia:Immanuel Kant|Immanuel Kant]] &mdash; [[Wikiquote:Immanuel Kant]]<br />
*[[wikipedia:Lord Byron|Lord Byron]] (George Gordon Byron, 6th Baron Byron) &mdash; [[Wikiquote:Lord Byron]]<br />
*[[wikipedia:Mary Shelley|Mary Shelley]] &mdash; [[Wikiquote:Mary Shelley]]<br />
*[[wikipedia:Percy Bysshe Shelley|Percy Bysshe Shelley]] &mdash; [[Wikiquote:Percy Bysshe Shelley]]<br />
*[[wikipedia:Christopher Marlowe|Christopher Marlowe]] (1564–1593): English dramatist and poet. &mdash; [[Wikiquote:Christopher Marlowe]]<br />
*[[wikipedia:Francis Bacon|Francis Bacon]] &mdash; [[Wikiquote:Francis Bacon]]<br />
*[[wikipedia:Eric Hoffer|Eric Hoffer]] &mdash; [[Wikiquote:Eric Hoffer]]<br />
*[[wikipedia:Milton Friedman|Milton Friedman]] &mdash; [[Wikiquote:Milton Friedman]]<br />
*[[wikipedia:Roger Bacon|Roger Bacon]] (c. 1214-1294) &mdash; [[wikiquote:Roger Bacon]]<br />
*[[wikipedia:Charles Baudelaire|Charles Baudelaire]] (1821-1867) &mdash; [[wikiquote:Charles Baudelaire]]<br />
<br />
=== Authors (I have not read yet) ===<br />
* [[wikipedia:Simone De Beauvoir|Simone de Beauvoir]] (1908–1986): French existentialist, writer, and social essayist.<br />
* [[wikipedia:Jeremy Bentham|Jeremy Bentham]] (1748–1832): British jurist, eccentric, philosopher and social reformer, founder of utilitarianism. He had [[wikipedia:John Stuart Mill|John Stuart Mill]] as his disciple. (Quoted as saying "The spirit of dogmatic theology poisons anything it touches". ~ [http://www.positiveatheism.org/hist/quotes/quote-b0.htm].)<br />
* [[wikipedia:Albert Camus|Albert Camus]] (1913–1960): French philosopher and novelist, a luminary of existentialism.<br />
* [[wikipedia:Auguste Comte|Auguste Comte]] (1798–1857): French philosopher, considered the father of sociology. (Quoted as saying "The heavens declare the glory of Kepler and Newton". ~ [http://www.positiveatheism.org/hist/quotes/quote-c3.htm].)<br />
* [[wikipedia:André Comte-Sponville|André Comte-Sponville]] (1952–): French materialist philosopher.<br />
* [[wikipedia:Baron d'Holbach|Paul Henry Thiry, Baron d'Holbach]] (1723–1789): French homme de lettres, philosopher and encyclopedist, member of the philosophical movement of French materialism, attacked Christianity and religion as counter to the moral advancement of humanity.<br />
* [[wikipedia:Marquis de Condorcet|Marquis de Condorcet]] (1743–1794): French philosopher and mathematician of the Enlightenment.<br />
* [[wikipedia:Daniel Dennett|Daniel Dennett]] (1942–): American philosopher, leading figure in evolutionary biology and cognitive science, well-known for his book ''[[wikipedia:Darwin's Dangerous Idea|Darwin's Dangerous Idea]]''.<br />
* [[wikipedia:Denis Diderot|Denis Diderot]] (1713–1784): French philosopher, author, editor of the first encyclopedia. Known for the quote "Man will never be free until the last king is strangled with the entrails of the last priest".<br />
* [[wikipedia:Ludwig Andreas Feuerbach|Ludwig Andreas Feuerbach]] (1804–1872): German philosopher, postulated that God is merely a projection by humans of their own best qualities.<br />
* [[wikipedia:Paul Kurtz|Paul Kurtz]] (1926–): American philosopher, skeptic, founder of the Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP) and the Council for Secular Humanism.<br />
* [[wikipedia:Karl Popper|Sir Karl Popper]] (1902–1994): Austrian-born British philosopher of science, who claimed that empirical falsifiability should be the criterion for distinguishing scientific theory from non-science.<br />
* [[wikipedia:Richard Rorty|Richard Rorty]] (1931–): American philosopher, whose ideas combine pragmatism with a [[wikipedia:Ludwig Wittgenstein|Wittgensteinian]] ontology that declares that meaning is a social-linguistic product of dialogue. He actually rejects the theist/atheist dichotomy and prefers to call himself "anti-clerical".<br />
* [[wikipedia:Bertrand Russell|Bertrand Russell, 3rd Earl Russell]], (1872–1970): British mathematician, philosopher, logician, political liberal, activist, popularizer of philosophy, and 1950 Nobel Laureate in Literature. On the issue of atheism/agnosticism, he wrote the essay "[[wikipedia:Why I Am Not a Christian|Why I Am Not a Christian]]".<br />
* [[wikipedia:Jean-Paul Sartre|Jean-Paul Sartre]] (1905–1980): French existentialist philosopher, dramatist, novelist and critic.<br />
* [[wikipedia:Peter Singer|Peter Singer]] (1946–): Australian philosopher and teacher, working on practical ethics from a utilitarian perspective, controversial for his opinions on abortion and euthanasia.<br />
* [[wikipedia:James Lovelock|James Lovelock]] (1919–): English independent scientist and environmentalist, originator of the Gaia hypothesis. &mdash; [[wikiquote:James Lovelock]]<br />
<br />
==External links==<br />
*[http://www.gutenberg.org/browse/scores/top Top 100 - Project Gutenberg]<br />
*[http://www.randomhouse.com/modernlibrary/100talkingpoints.html The Modern Library - 100 Best - Talking Points]<br />
*[http://www.randomhouse.com/modernlibrary/100bestnonfiction.html The Modern Library - 100 Best - Nonfiction]<br />
*[http://www.randomhouse.com/modernlibrary/100bestnovels.html The Modern Library - 100 Best - Novels]<br />
*[http://www.nytimes.com/pages/books/bestseller/ NY Times Best-Seller Lists]<br />
*[http://www.bookmooch.com/ BookMooch] &mdash; a free book trade and exchange community<br />
*[http://www.bookcrossing.com/ BookCrossing] &mdash; a free book club<br />
*[http://www.nndb.com/ Notable Names Database] (NNDB) &mdash; an online database of biographical details of notable people.<br />
*[http://wikisummaries.org/Main_Page WikiSummaries] &mdash; provides free book summaries<br />
*[http://www.fullbooks.com/ fullbooks.com]<br />
*[http://www.themodernword.com/eco/eco_writings.html Umberto Eco: His Own Writings]<br />
*[http://www.ulib.org/ UDL: Universal Digital Library] &mdash; has over 1.5 million books digitised.<br />
*[[wikipedia:List of historical novels]]<br />
<br />
{{stub}}</div>Christophhttp://wiki.christophchamp.com/index.php?title=Vi&diff=8276Vi2023-06-21T22:38:41Z<p>Christoph: /* Configuring vi/vim */</p>
<hr />
<div>{{lowercase|title=vi}}<br />
<br />
'''vi''' is a screen-oriented text editor run from the [[:Category:Linux Command Line Tools|command line]]. This article will mainly discuss techniques for '''vim''' ('''vi''' improved).<br />
<br />
==Commands==<br />
Below is a ''very short'' list of the most useful commands:<br />
<br />
<table><br />
<tr><td class="page" valign="top"><br />
<br />
<table class="tutorial"><br />
<br />
<tr class="title"><td colspan="2">'''Command mode'''</td></tr><br />
<tr class="data" ><td class="code">ESC</td> <td></td></tr><br />
<br />
<tr class="title"><td colspan="2">'''Movement command'''</td></tr><br />
<tr class="data" ><td class="code">h, j, k, l</td> <td>left, down, up, right</td></tr><br />
<tr class="data" ><td class="code">w, W, b, B</td> <td>forward, backward by word</td></tr><br />
<tr class="data" ><td class="code">H</td> <td>top of the screen</td></tr><br />
<tr class="data" ><td class="code">M</td> <td>middle of the screen</td></tr><br />
<tr class="data" ><td class="code">L</td> <td>last line of the screen</td></tr><br />
<tr class="data" ><td class="code">Ctrl-F</td> <td>forward one screen</td></tr><br />
<tr class="data" ><td class="code">Ctrl-B</td> <td>backward one screen</td></tr><br />
<tr class="data" ><td class="code">Ctrl-D</td> <td>forward half screen</td></tr><br />
<tr class="data" ><td class="code">Ctrl-U</td> <td>backward half screen</td></tr><br />
<tr class="data" ><td class="code">0 (zero), $</td> <td>start, end of current line</td><br />
</tr><br />
<br />
<tr class="title"><td colspan="2">'''Inserting text'''</td></tr><br />
<tr class="data" ><td class="code">a</td> <td>append after cursor</td></tr><br />
<tr class="data" ><td class="code">i</td> <td>insert before cursor</td></tr><br />
<tr class="data" ><td class="code">A</td> <td>append to end of line</td></tr><br />
<tr class="data" ><td class="code">I</td> <td>insert at start of line</td></tr><br />
<tr class="data" ><td class="code">o</td> <td>open a line below current line</td></tr><br />
<tr class="data" ><td class="code">O</td> <td>open a line above current line</td></tr><br />
<tr class="data" ><td class="code">r</td> <td>replace char</td></tr><br />
<br />
<tr class="title"><td colspan="2">'''Delete text'''</td></tr><br />
<tr class="data" ><td class="code">x</td> <td>current character</td></tr><br />
<tr class="data" ><td class="code">dh</td> <td>previous character</td></tr><br />
<tr class="data" ><td class="code">dw</td> <td>current word</td></tr><br />
<tr class="data" ><td class="code">db</td> <td>previous word</td></tr><br />
<tr class="data" ><td class="code">dd</td> <td>entire line</td></tr><br />
<tr class="data" ><td class="code">d$</td> <td>to end of line</td></tr><br />
<tr class="data" ><td class="code">d0 (zero)</td> <td>to start of line</td></tr><br />
<tr class="data" ><td class="code"><i>n</i>dd</td> <td>next n lines</td></tr><br />
<br />
<tr class="title"><td colspan="2">'''Undelete'''</td></tr><br />
<tr class="data" ><td class="code">p</td> <td>insert after cursor</td></tr><br />
<tr class="data" ><td class="code">P</td> <td>insert before cursor</td></tr><br />
<br />
</table><br />
</td><br />
<td class="page" valign="top"><br />
<br />
<table class="tutorial"><br />
<br />
<tr class="title"><td colspan="2">'''Goto line'''</td></tr><br />
<tr class="data" ><td class="code">:<i>linenumber</i></td> <td>&nbsp;</td></tr><br />
<tr class="data" ><td class="code"><i>n</i>G</td> <td>Goto line n</td></tr><br />
<tr class="data" ><td class="code">:7</td> <td>Goto line 7</td></tr><br />
<br />
<tr class="title"><td colspan="2">'''Save and exit'''</td></tr><br />
<tr class="data" ><td class="code">ZZ</td> <td>write if changes and quit</td></tr><br />
<tr class="data" ><td class="code">:wq</td> <td>write and quit</td></tr><br />
<tr class="data" ><td class="code">:w filename</td> <td>save to new file</td></tr><br />
<tr class="data" ><td class="code">:q!</td> <td>quit vi</td></tr><br />
<br />
<tr class="title"><td colspan="2">'''Search'''</td></tr><br />
<tr class="data" ><td class="code">/pattern &lt;RETURN&gt;</td> <td>forward for a pattern</td></tr><br />
<tr class="data" ><td class="code">?pattern &lt;RETURN&gt;</td> <td>backward for a pattern</td></tr><br />
<tr class="data" ><td class="code">n</td> <td>repeat previous search</td></tr><br />
<tr class="data" ><td class="code">N</td> <td>repeat previous search in reverse direction</td></tr><br />
<br />
<tr class="title"><td colspan="2">'''Search and replace'''</td></tr><br />
<tr class="data" ><td class="code">Example:</td> <td></td></tr><br />
<tr class="data" ><td colspan="2"><br />
<ul><br />
<li>Replace the first occurrence on the current line<br /><br />
<span class="code">:s/search_string/replace_string/</span></li><br />
<li>Replace all matches on the current line<br /><br />
<span class="code">:s/search_string/replace_string/g</span></li><br />
<li>Search every line and replace, confirming each match (answer with [y]es)<br /><br />
<span class="code">:%s/search_string/replace_string/gc</span><br /><br />
<span class="code">:1,$s/search_string/replace_string/gc</span></li><br />
<li>Search and replace on lines 10 to 20<br /><br />
<span class="code">:10,20s/search_string/replace_string/g</span></li><br />
<br />
</ul><br />
</td></tr><br />
<br />
<tr class="title"><td colspan="2">'''Undo'''</td></tr><br />
<tr class="data" ><td class="code">u</td> <td>the latest change</td></tr><br />
<tr class="data" ><td class="code">U</td> <td>all changes on a line</td></tr><br />
<br />
<tr class="title"><td colspan="2">'''Concatenate'''</td></tr><br />
<tr class="data" ><td class="code">J</td> <td>concatenate two lines</td></tr><br />
</table><br />
</td></tr><br />
</table><br />
<br />
===Moving around===<br />
''Note: <code>[count]</code> means optionally type a number first.''<br />
*<code>[count]</code> sentences backward:<br />
(<br />
*<code>[count]</code> sentences forward:<br />
)<br />
*<code>[count]</code> paragraphs backward:<br />
{<br />
*<code>[count]</code> paragraphs forward:<br />
}<br />
*<code>[count]</code> sections forward or to the next '{' in the first column. When used after an operator, then the '}' in the first column:<br />
]]<br />
*<code>[count]</code> sections forward or to the next '}' in the first column:<br />
][<br />
*<code>[count]</code> sections backward or to the previous '{' in the first column:<br />
[[<br />
*<code>[count]</code> sections backward or to the previous '}' in the first column:<br />
[]<br />
<br />
===Marks===<br />
*Set mark <code>{a-zA-Z}</code> at cursor position (does not move the cursor, this is not a motion command):<br />
m{a-zA-Z}<br />
*Set the previous context mark. This can be jumped to with the " <code>''</code> " or "<code>``</code>" command (does not move the cursor, this is not a motion command):<br />
m'<br />
#~OR~<br />
m`<br />
*Set mark <code>{a-zA-Z}</code> at last line number in <code>[range]</code>, column 0. Default is cursor line:<br />
:[range]ma[rk] {a-zA-Z}<br />
*Same as <code>:mark</code>, but the space before the mark name can be omitted:<br />
:[range]k{a-zA-Z}<br />
*To the first non-blank character on the line with mark <code>{a-z}</code> (line-wise):<br />
'{a-z}<br />
*To the first non-blank character on the line with mark <code>{A-Z0-9}</code> in the correct file:<br />
'{A-Z0-9}<br />
*To the mark <code>{a-z}</code>:<br />
`{a-z}<br />
*To the mark <code>{A-Z0-9}</code> in the correct file:<br />
`{A-Z0-9}<br />
*List all the current marks (not a motion command):<br />
:marks<br />
*List the marks that are mentioned in <code>{arg}</code> (not a motion command):<br />
:marks {arg}<br />
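<br />
For example, a typical workflow using a mark (the mark name "a" here is arbitrary):<br />
 ma # set mark a at the current cursor position<br />
 /pattern # move elsewhere, e.g., by searching<br />
 `a # jump back to the exact position of mark a<br />
 'a # jump to the first non-blank character of mark a's line<br />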
<br />
==Examples==<br />
*Search and delete lines: This will search for any line starting with "vi" followed by anything and delete that line.<br />
:%g/^vi.*/d<br />
<br />
*Delete all blank lines (with zero spaces):<br />
:g/^$/d<br />
<br />
*Delete all blank lines that may include spaces:<br />
:g/^ *$/d<br />
<br />
*Filter command: Uses multiple search criteria. This example deletes all lines starting with "ll" or "cd".<br />
:% ! egrep -v "(^ll|^cd)"<br />
<br />
*Find non-ASCII characters:<br />
/[\x80-\xff]<br />
<br />
*[https://en.wikipedia.org/wiki/ROT13 ROT13] text:<br />
ggVGg?<br />
<br />
* Edit remote files via scp (or ftp) in vim:<br />
$ vim scp://remote_user@remote_host:remote_port//path/to/file<br />
$ vim ftp://[user@]machine[[:#]portnumber]/path<br />
<br />
* Wrap existing text at 80 characters:<br />
v # select the lines you wish to wrap<br />
gq<br />
:h gq # for more information<br />
<br />
* Convert all words in a file to lowercase:<br />
ggVGu<br />
# gg - goes to first line of text<br />
# V - turns on Visual selection, in line mode<br />
# G - goes to end of file (at the moment you have whole text selected)<br />
# u - lowercase selected area<br />
#~OR~<br />
:%s/.*/\L&/g # lowercase<br />
:%s/[A-Z]/\L&/g # lowercase<br />
:%s/[a-z]/\U&/g # uppercase<br />
:0,$!tr "A-Z" "a-z" # lowercase<br />
<br />
* Substitute for <code>`dos2unix`</code>:<br />
$ vim +'set ff=unix' +wq file.txt<br />
<br />
* Remove ''only'' control characters/symbols:<br />
<nowiki>:%s/[[:cntrl:]]//g</nowiki><br />
<br />
* Remove non-printable characters (note that in versions prior to ~8.1.1 this ''also'' removes non-ASCII characters):<br />
<nowiki>:%s/[^[:print:]]//g</nowiki><br />
<br />
==External commands==<br />
Ctrl-Z # pause<br />
fg # resume<br />
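If more than one session has been suspended, the shell's standard job control can be used to pick which one to resume, for example:<br />
 jobs # list suspended jobs<br />
 fg %2 # resume job number 2 in the foreground<br />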
===Bang!===<br />
* find out how many words are in the currently opened file (from the last save):<br />
:! wc %<br />
* check the [[PHP]] syntax of the currently opened (php) file:<br />
:! php5 -l %<br />
* save files with root privileges from within vim:<br />
:w ! sudo tee %<br />
<br />
===Reading command output===<br />
* to include a list of files from a specific directory, try this read command:<br />
:r ! ls -1 /home/user/path/etc<br />
<br />
* grab a page, using lynx, and dump it right into your editing session without leaving vim:<br />
:r ! lynx <nowiki>http://en.wikipedia.org/wiki/Vi</nowiki> -dump <br />
<br />
* etc:<br />
:r ! ls -1 /home/user/directory | sort -r<br />
:r ! grep string /var/log/apache2/site-error.log<br />
 :set shell? # check the current shell<br />
<br />
===Appending data===<br />
* append current line (under cursor) to external file:<br />
 :.w! >> /foo/bar<br />
<br />
===Pre-commands===<br />
vi +10 foo<br />
will open the file <code>foo</code> and position the cursor at line 10. Another common usage is to specify a pattern:<br />
vi +/END foo<br />
This will open the file <code>foo</code> and position the cursor at the first occurrence of the pattern 'END'.<br />
<br />
===Edit binary file in hex format===<br />
cp /bin/true /home/bob/mytrue<br />
vi mytrue<br />
# while in vi, [Esc] and<br />
:%!xxd<br />
# now in hex mode, change hex values and then revert back to binary<br />
:%!xxd -r<br />
:wq!<br />
You can also use <tt>xxd</tt> to transform the file first, edit it, then revert/transform back to binary<br />
xxd mytrue >mytrue.hex<br />
# edit, then<br />
xxd -r mytrue.hex >mytrue.bin<br />
<br />
==[[Regular expression]]s==<br />
===Switching cases===<br />
<br />
In the replacement part of a substitution command, i.e. between the second "/" and third "/",<br />
<br />
\u means make the following character upper case<br />
\l means make the following character lower case<br />
\U means make the rest of the replacement upper case<br />
\L means make the rest of the replacement lower case<br />
<br />
* Make the first letter of every word from line 18 to 43 uppercase:<br />
<br />
:18,43s/\<./\u&/g<br />
<br />
* Change "uPPeR" and "LoweR" in any mixture of cases to lowercase:<br />
<br />
:s/[UuLl][PpOo][PpWw][Ee][Rr]/\L&/<br />
<br />
* Make the whole file uppercase:<br />
<br />
:%s/.*/\U&/<br />
<br />
* Make the region from line m to line n all uppercase:<br />
<br />
:'m,'ns/.*/\U&/<br />
<br />
* Make a paragraph all lowercase:<br />
<br />
:?^$?,/^$/s/.*/\L&/<br />
<br />
* Make the first letter of every word in a paragraph uppercase:<br />
<br />
:?^$?,/^$/s/\([^ ][^ ]*\)/\u&/g<br />
<br />
* Make the second word of each line uppercase:<br />
<br />
:1,$s/^\([^ ]*\) \([^ ]*\) \(.*\)/\1 \U\2\e \3/<br />
<br />
===Misc===<br />
Change all lines containing something like the following:<br />
foo - bar<br />
To:<br />
;foo : bar<br />
<br />
%s/^\([a-z].*[a-z]\)[ \t]*- /;\1 : /<br />
<br />
* Convert curly quotes to straight ones:<br />
:%s/[“”]/"/g<br />
<br />
==Abbreviations==<br />
If you are like me and use <tt>vi</tt> for ''everything'', you will find that there are certain words, phrases, bits of code, etc. that you are constantly using. These can all be stored as abbreviations.<br />
<br />
The syntax for storing an abbreviation is as follows:<br />
 Esc # to leave insert mode<br />
 :ab abbr phrase<br />
<br />
For example, say you were editing xhtml and wanted '<code>xzml</code>' to enter the standard xml header tag of <code><?xml version="1.0" encoding="UTF-8" ?></code>; you could enter:<br />
:ab xzml <?xml version="1.0" encoding="UTF-8" ?><br />
<br />
Then, any time you enter '<code>xzml</code>' as a word, <tt>vi</tt> automatically replaces '<code>xzml</code>' with the xml header tag above. The idea is to choose an abbreviation that has no chance of being a real "word", but is short and easy to remember.<br />
<br />
Note that the above abbreviation is stored for your current session only. If you wish for it to be universally applicable and available, put the abbreviation command line in your <code>.vimrc</code> file.<br />
<br />
To list all of your abbreviations, enter<br />
:ab<br />
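<br />
As a minimal sketch, the following shell snippet appends two abbreviations (both just examples) to your <code>.vimrc</code> so that they are available in every session:<br />
 cat >> ~/.vimrc <<'EOF'<br />
 " example abbreviations<br />
 ab teh the<br />
 ab xzml <?xml version="1.0" encoding="UTF-8" ?><br />
 EOF<br />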
<br />
==Using tabs==<br />
vi -p foo1 foo2 foo3<br />
<br />
Or, without actual "tabs" but still opening multiple files:<br />
vi foo1 foo2 foo3<br />
<br />
Once vi has been started with a list of files it is possible to navigate within the list with the following commands:<br />
;<nowiki>:</nowiki>n : next - Move to the next file in the file list<br />
;<nowiki>:</nowiki>rew : rewind - rewind the file list and open the first file in the list.<br />
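<br />
When vim was started with <code>-p</code>, the tab pages themselves can be navigated with the standard tab commands, for example (the filename <code>foo4</code> is just an example):<br />
 gt # go to the next tab page<br />
 gT # go to the previous tab page<br />
 :tabnew foo4 # open another file in a new tab<br />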
<br />
==Configuring vi/vim==<br />
It is possible to extensively configure <code>vi</code> (or <code>vim</code>) to suit your personal needs.<br />
<br />
For example, if you would like syntax-highlighting to be used by default for most of your code files, edit your <code>.vimrc</code> file (located in your home directory; if one does not exist, create it) and add the following line:<br />
<br />
syn on<br />
<br />
Below are some other useful additions:<br />
:set all # view all current settings<br />
:set nowrapscan # do not wrap around file when looking for a string<br />
:set number # add line numbers to left of file<br />
:set noautoindent # do not automatically indent the file<br />
:set report=0 # always report at bottom when any number of lines are yanked<br />
:set ignorecase # treat capital and small letters the same when searching<br />
:set nobackup # do _not_ create the ~ files<br />
:set nowritebackup<br />
<br />
Control of tabs:<br />
set expandtab<br />
set ts=4 # tab stop<br />
set sw=4 # shift width<br />
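<br />
If a file already contains tab characters, <code>:retab</code> can be used to convert them according to the current settings, for example:<br />
 :set expandtab ts=4 sw=4<br />
 :retab # re-tab existing indentation using the settings above<br />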
<br />
* See line breaks and carriage returns in the editor:<br />
<pre><br />
cat >> ~/.vimrc <<EOF<br />
set ffs=unix<br />
set encoding=utf-8<br />
set fileencoding=utf-8<br />
set listchars=eol:¶<br />
set list<br />
EOF<br />
</pre><br />
<br />
Result:<br />
My line with CRLF EOL here ^M¶<br />
<br />
==External links==<br />
*[http://www.vim.org/ Official website]<br />
*[http://vimdoc.sourceforge.net/ Vimdoc] &mdash; the online source for Vim documentation<br />
*[http://www.geocities.com/volontir/ Vim Regular Expressions]<br />
*[http://www.vmunix.com/~gabor/vi.lynx.html Vi Helpfile] &mdash; by Gábor Egressy<br />
*[http://www.oualline.com/vim-cook.html Vim Cookbook] &mdash; by Steve Oualline<br />
*[http://vim.runpaint.org/ Vim Recipes]<br />
*[http://www.fprintf.net/vimCheatSheet.html Vim Commands Cheat Sheet]<br />
*[http://en.wikibooks.org/wiki/Learning_the_vi_editor Wikibooks : Learning the vi editor]<br />
*[http://vim.wikia.com/wiki/Vim_Tips_Wiki vim tips wiki]<br />
*[http://www.openvim.com/tutorial.html Interactive Vim Tutorial]<br />
<br />
[[Category:Linux Command Line Tools]]<br />
[[Category:Technical and Specialized Skills]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Category:Travel_Log&diff=8275Category:Travel Log2023-05-17T22:42:11Z<p>Christoph: /* Miscellaneous (North America) */</p>
<hr />
<div>This category will be my, as yet, unorganised '''Travel Log''' to many places around the world. (Note: The following is very much an ''incomplete'' travel log.)<br />
<br />
== Auto ==<br />
<br />
===Berlin trip (2006)===<br />
* Monaco &rarr; Milano &rarr; Ljubljana &rarr; Rotterdam &rarr; Berlin &rarr; Copenhagen &rarr; Monaco: April 2006<br />
: [http://triptracker.net/trip/1165/ TripTracker]<br />
: 1-Apr-2006 (14h20): Monaco &rarr; Milano<br />
: 2-Apr-2006 (23h30): Milano &rarr; Ljubljana<br />
: 3-Apr-2006 &ndash; 5-Apr-2006: Slovenia (Ljubljana, Novo Mesto, Kranj, Postojna, Jesenice, etc.)<br />
: 5-Apr-2006 (12h30): |&larr; Austria (Villach)<br />
: 5-Apr-2006 (15h15): |&larr; Germany<br />
: 5-Apr-2006 (19h15): Stuttgart<br />
: 5-Apr-2006 (20h20): Karlsruhe<br />
: 5-Apr-2006 (23h30): Köln<br />
: 5-Apr-2006 (00h10): |&larr; The Netherlands<br />
: 5-Apr-2006 (02h00): Rotterdam<br />
: 7-Apr-2006 (12h00): |&rarr; Rotterdam<br />
: 7-Apr-2006 (14h45): |&larr; Germany<br />
: 7-Apr-2006 (17h00): Hannover<br />
: 7-Apr-2006 (18h30): Magdeburg<br />
: 7-Apr-2006 (20h00): Berlin<br />
: 8-Apr-2006 (15h30): |&rarr; Berlin<br />
: 8-Apr-2006 (18h00): Rostock<br />
: 8-Apr-2006 (19h30): Ferry (|&rarr; Germany from Rostock Harb.)<br />
: 8-Apr-2006 (21h15): Ferry (|&larr; Denmark at Gedser)<br />
: 8-Apr-2006 (23h20): København<br />
: 9-Apr-2006 (06h30): |&rarr; København<br />
: 9-Apr-2006 (09h00): Ferry (|&rarr; Denmark from Gedser)<br />
: 9-Apr-2006 (11h00): Ferry (|&larr; Germany at Rostock Harb.)<br />
: 9-Apr-2006 (13h30): |&larr; Berlin<br />
: 9-Apr-2006 (14h00): |&rarr; Berlin<br />
: 9-Apr-2006 (15h50): Dresden<br />
:10-Apr-2006 (00h45): |&larr; Slovenia<br />
:10-Apr-2006 (01h40): Ljubljana<br />
:10-Apr-2006 (02h40): Postojna<br />
:10-Apr-2006 (13h15): |&larr; Italy<br />
:10-Apr-2006 (15h00): Padova<br />
:10-Apr-2006 (15h40): Verona<br />
:10-Apr-2006 (18h50): Genova<br />
:10-Apr-2006 (20h35): |&larr; France<br />
:10-Apr-2006 (20h45): |&larr; Monaco<br />
<br />
===Canada trip (2001)===<br />
''Note: The total trip covered 11,893 km (7,390 miles).''<br />
*Corvallis, OR &rarr; Boston, MA &rarr; Quebec &rarr; Ontario &rarr; Manitoba &rarr; Saskatchewan &rarr; Alberta &rarr; British Columbia &rarr; Corvallis, OR<br />
** 01-Sep-2001 (??h??): |&rarr; Corvallis, OR<br />
** 06-Sep-2001 (15h45): |&larr; Massachusetts<br />
** 13-Sep-2001 (13h15): |&rarr; Westborough, MA<br />
** 13-Sep-2001 (17h46): Augusta, ME<br />
** 13-Sep-2001 (18h15): |&larr; CANADA (into Quebec)<br />
** 14-Sep-2001 (02h06): Grande Allee Est., Quebec<br />
** 14-Sep-2001 (15h01): Cap-Madeleine, PQ<br />
** 15-Sep-2001 (17h44): Thunder Bay, ON<br />
** 14-Sep-2001 (17h45): |&larr; Ontario<br />
** 14-Sep-2001 (20h03): Cobden, ON<br />
** 15-Sep-2001 (12h02): Sudbury, ON<br />
** 15-Sep-2001 (10h25): Wawa, ON<br />
** 15-Sep-2001 (22h01): Kenora, ON<br />
** 15-Sep-2001 (10h37): |&larr; Manitoba<br />
** 16-Sep-2001 (10h53): Brandon, MB<br />
** 16-Sep-2001 (12h50): |&larr; Saskatchewan<br />
** 16-Sep-2001 (16h09): Herbert, SK<br />
** 16-Sep-2001 (18h06): |&larr; Alberta<br />
** 16-Sep-2001 (23h00): |&larr; British Columbia<br />
** 17-Sep-2001 (00h30): |&larr; USA (into Idaho)<br />
** 17-Sep-2001 (03h36): Coeur d'Alene, ID<br />
** 17-Sep-2001 (05h30): |&larr; Oregon<br />
<br />
===Ireland trip (1999-2000)===<br />
* 26-Dec-1999 (??h??): Dublin, Ireland<br />
* 26-Dec-1999 (16h13): Lord Edward St., Dublin<br />
* 27-Dec-1999 (??h??): Kinlay House, Christchurch, 2-12 Lord Edward St., Dublin, Ireland<br />
* 2?-Dec-1999 (??h??): Kilkenny<br />
* 28-Dec-1999 (12h27): Patrick St., Cork<br />
* 28-Dec-1999 (17h12): Mallow, Co. Cork<br />
* 29-Dec-1999 (??h??): Co. Kerry<br />
* ??-Dec-1999 (??h??): Saratoga House (Bed & Breakfast), Muckross Road, Killarney, Ireland<br />
* 29-Dec-1999 (15h09): Chapel St., Limerick<br />
* 29-Dec-1999 (15h18): Eimear<br />
* 30-Dec-1999 (??h??): Ballybofey<br />
* 30-Dec-1999 (15h51): Greysteel<br />
* 30-Dec-1999 (??h??): O'Connell St., Sligo<br />
* 30-Dec-1999 (??h??): Petra, Galway<br />
* 30-Dec-1999 (??h??): Sligo<br />
* 30-Dec-1999 (??h??): The Linen House Backpackers Hostel, 18-20 Kent Street, Belfast, Ireland<br />
* 01-Jan-2000 (14h46): Arthur Sq., Belfast<br />
* 02-Jan-2000 (06h34): Dublin Airport<br />
<br />
===Miscellaneous (Europe)===<br />
* Budapest, Hungary &rarr; Dubrovnik, Croatia: June/July 2018 (round-trip)<br />
* ''The Cliffs of Møn'', DK: Oct-2005<br />
* Budapest, Hungary &rarr; Vienna, Austria &rarr; Salzburg, Austria &rarr; Chiemsee, Germany: Oct-1996 (round-trip)<br />
* Zagreb, Croatia &rarr; Ljubljana, Slovenia &rarr; Graz, Austria &rarr; Budapest, Hungary: Sep-1996<br />
* Zagreb, Croatia &rarr; Ljubljana, Slovenia: Sep-1996 (round-trip)<br />
* Budapest, Hungary &rarr; Zagreb, Croatia: Sep-1996<br />
* Budapest, Hungary &rarr; Vienna, Austria &rarr; Salzburg, Austria &rarr; Berchtesgaden, Germany &rarr; Innsbruck, Austria &rarr; Liechtenstein &rarr; Switzerland: Aug-1996 (round-trip)<br />
* Warsaw, Poland &rarr; Budapest, Hungary: September 1994<br />
* Budapest, Hungary &rarr; Slovakia (11-Nov-1993) &rarr; Warsaw, Poland: November 1993<br />
* Vienna, Austria &rarr; Budapest, Hungary: 28-Sep-1993<br />
<br />
===Miscellaneous (South America)===<br />
* Cuenca, Ecuador &rarr; Riobamba, Ecuador &rarr; Ambato, Ecuador &rarr; Quito, Ecuador: 1993 (round-trip)<br />
* Quito, Ecuador &#187; Ipiales, Colombia: 1993 (round-trip)<br />
* Guayaquil, Ecuador &rarr; Santo Domingo de Los Colorados, Ecuador &rarr; Quito, Ecuador: 1993<br />
* Guayaquil, Ecuador &rarr; Salinas, Ecuador: 1993 (round-trip)<br />
* Tumbes, Peru &rarr; Guayaquil, Ecuador: 21-Dec-1992<br />
<br />
===Miscellaneous (North America)===<br />
* Seattle, WA &#187; Cle Elum, WA &#187; Chelan, WA &#187; Republic, WA &#187; Leavenworth, WA &#187; Monroe, WA &#187; Seattle, WA: April 2023 (933 km/580 mi)<br />
* Seattle, WA &#187; Winthrop, WA &#187; Leavenworth, WA &#187; Issaquah, WA &#187; Seattle, WA: June 2022<br />
* Seattle, WA &#187; Winthrop, WA &#187; Tiger, WA &#187; Spokane, WA &#187; Seattle, WA: May 2022 (1,200 km/744 mi)<br />
* Seattle, WA &#187; Portland, OR &#187; Grants Pass, OR &#187; Crescent City, CA &#187; Redwood National Forest &#187; Newport, OR &#187; Astoria, OR &#187; Elma, WA &#187; Seattle, WA: November 2021 (1,881 km/1,169 mi)<br />
* Seattle, WA &#187; Mt Saint Helens &#187; Mt Adams &#187; Stonehenge Memorial &#187; Multnomah Falls &#187; Seattle, WA: September 2021 (914 km/568 mi)<br />
* Seattle, WA &#187; Walla Walla, WA &#187; Joseph, OR &#187; Lewiston, ID &#187; Grand Coulee, WA &#187; Seattle, WA: June 2021 (1,421 km/883 mi)<br />
* Seattle, WA &#187; Pendleton, OR &#187; Craters of the Moon National Monument & Preserve &#187; Idaho Falls, ID &#187; Jackson, WY &#187; Grand Teton National Park &#187; Yellowstone National Park &#187; Missoula, MT &#187; Coeur d'Alene, ID &#187; Seattle, WA: September 2020 (2,746 km/1,706 mi)<br />
* Seattle, WA &#187; Coeur d'Alene, ID &#187; Missoula, MT &#187; Glacier National Park, MT &#187; Seattle, WA: July 2019 (1,984 km/1,233 mi)<br />
* Seattle, WA &#187; Corvallis, OR: November 2018 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2017 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2016 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2015 (round-trip)<br />
* Texas &#187; Oklahoma &#187; Kansas &#187; Nebraska &#187; South Dakota &#187; Wyoming &#187; Montana &#187; Idaho &#187; Seattle, WA: September 2015 (4,000 km/2,486 mi)<br />
* Seattle, WA &#187; Oregon &#187; Idaho &#187; Utah &#187; Wyoming &#187; Colorado &#187; Kansas &#187; Oklahoma &#187; Texas: 11-16 May 2013<br />
* Seattle, WA &#187; Port Angeles, WA &#187; Hurricane Ridge, WA: 28-Dec-2012 (round-trip)<br />
* Seattle, WA &#187; Portland, OR: 4-Dec-2012 (round-trip)<br />
* Chicago, IL &#187; Milwaukee, WI &#187; Minneapolis, MN &#187; Fargo, ND &#187; Billings, MT &#187; Coeur d'Alene, ID &#187; Seattle, WA: 25-26 June 2012 (3,357 km/2,086 mi)<br />
* St. Louis, MO &#187; Chicago, IL: 31-Dec-2011<br />
* Chicago, IL &#187; St. Louis, MO: 5-Jul-2011<br />
* Milwaukee, WI &#187; Chicago, IL: 30-Jun-2011<br />
* Pittsburgh, PA &#187; New York City, NY: April 2005 (round-trip)<br />
* Pittsburgh, PA &#187; Bethlehem, PA &#187; Westborough, MA &#187; New York City, NY: December 2004 (round-trip)<br />
* Pittsburgh, PA &#187; Boston, MA: November 2004 (round-trip)<br />
* Corvallis, OR &#187; Salt Lake City, UT &#187; Houston, TX &#187; Atlanta, GA &#187; Pittsburgh, PA: September 2004<br />
* Corvallis, OR &#187; Boston, MA: 2001, 2002 (round-trip)<br />
* Corvallis, OR &#187; Vancouver, BC, Canada (round-trip)<br />
* Corvallis, OR &#187; Tijuana, Mexico: 7-Sep-1999 (round-trip)<br />
* Los Angeles, CA &#187; Corvallis, OR: January 1998<br />
* Houston, TX &#187; Milwaukee, WI &#187; Menominee, MI: May 1995 (round-trip)<br />
<br />
== Bus / Train / Ferry ==<br />
===Spain trip (2006)===<br />
* Monaco &#187; Cannes &#187; Marseille &#187; Montpellier St-Ro &#187; Barcelona: April 2006 (round-trip)<br />
** 24-Apr-06 18h35: |&rarr; Nice, France [SNCF train]<br />
** 24-Apr-06 19h00: Antibes, FR<br />
** 24-Apr-06 19h07: Cannes, FR<br />
** 24-Apr-06 19h30: B. sur-Mer, FR<br />
** 24-Apr-06 19h39: San Raphael-Valescure, FR<br />
** 24-Apr-06 20h14: Les Arcs-Drag., FR<br />
** 24-Apr-06 20h56: Toulon, FR<br />
** 24-Apr-06 21h35: Marseille, FR<br />
** 25-Apr-06 15h05: |&rarr; Marseille, FR<br />
** 25-Apr-06 16h16: Nîmes, FR<br />
** 25-Apr-06 17h21: Montpellier St-Ro, FR<br />
** 25-Apr-06 18h42: Béziers, FR<br />
** 25-Apr-06 19h35: Perpignan, FR<br />
** 25-Apr-06 20h15: Portbou, Spain (ES) [''border'']<br />
** 25-Apr-06 22h30: Barcelona, ES<br />
** 27-Apr-06 19h24: |&rarr; Barcelona, ES [Renfe train]<br />
** 27-Apr-06 22h05: Cerbere, FR [''border'']<br />
** 28-Apr-06 08h37: Nice, FR<br />
** 28-Apr-06 10h00: Monaco<br />
<br />
===Miscellaneous (Europe)===<br />
* Tallinn, Estonia &rarr; Helsinki, Finland: January 2020 (round-trip)<br />
* Lisbon, Portugal &rarr; Porto, Portugal: Nov-2016 (round-trip)<br />
* København, DK &#187; Berlin, D: 09-Apr-2006 [+Ferry]<br />
* Berlin, D &#187; København, DK: 08-Apr-2006 (15h15) [+Ferry]<br />
* Ljubljana, Slovenia &#187; Villach HBF, Austria: 18-Aug-1997<br />
* Stockholm C &#187; Oslo S: 15-Aug-1997 (SJ train)<br />
* Salzburg, Austria &#187; Ljubljana, Slovenia: 25-Aug-1997 (&#214;sterreichische Bundesbahnen train (&#214;BB))<br />
* Haslev, DK &#187; Næstved, DK: 24-Aug-1997 (DSB train)<br />
* København &#187; Stockholm C: 14-Aug-1997 (DSB train)<br />
* Oslo S &#187; Bergen: 16-Aug-1997<br />
* Næstved, DK &#187; Rødby Færge, DK: 24-Aug-1997<br />
* Salzburg HBF &#187; Villach HBF (via Schwarzach-St. Veit, Bad Gastein): 25-Aug-1997 (&#214;BB train)<br />
* Oslo S &#187; Trondheim: 18-Aug-1997<br />
* Grensen (Scandinavia): 16-Aug-1997<br />
* Abisko Turiststation - STF: 20-Aug-1997<br />
* Abisko Turiststation - STF: 21-Aug-1997<br />
* Germany: 24-Aug-1997 (DB train)<br />
* Stockholm S:T Eriksgatan: 15-Aug-1997<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: Jun-1997 (round-trip)<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: Mar-1997 (round-trip)<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: (28-Nov-1997/30-Nov-1997) (round-trip)<br />
* Budapest, Hungary &rarr; Ljubljana, Slovenia: 8-Nov-1996<br />
* Budapest, Hungary &rarr; Slovakia: 18-Aug-1995 (round-trip)<br />
* Budapest, Hungary &rarr; Vienna, Austria: 9-Feb-1995 (round-trip)<br />
* Moscow, Russia &rarr; Warsaw, Poland: Sep-1994<br />
* Moscow, Russia &rarr; Brest, Belarus: Aug-1994 (round-trip)<br />
* Moscow, Russia &rarr; Minsk, Belarus: Jul-1994 (round-trip)<br />
* Warsaw, Poland &#187; Moscow, Russia: Jun-1994<br />
* Warsaw, Poland &rarr; Vilnius, Lithuania &rarr; Riga, Latvia: (12-Jan-1994/??-Jan-1994) (round-trip)<br />
<br />
===Miscellaneous (South America)===<br />
* Arequipa, Peru &rarr; Lima, Peru: 1992<br />
* Arequipa, Peru &rarr; Iquique, Chile: (17-Jul-1992/20-Jul-1992) (round-trip)<br />
* Lima, Peru &rarr; Arequipa, Peru: 1992<br />
* Lima, Peru &rarr; La Paz, Bolivia: (19-May-1991/6-Jun-1991) (round-trip)<br />
* Lima, Peru &rarr; Quito, Ecuador: (29-Nov-1990/11-Dec-1990) (round-trip)<br />
* Lima, Peru &rarr; Quito, Ecuador: (6-Jul-1990/20-Jul-1990) (round-trip)<br />
<br />
==Flights==<br />
* Seattle, WA (SEA) ✈ Phoenix, AZ (PHX): March 2023 [RT]<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): February 2023 [RT]<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2022 [RT]<br />
* Seattle, WA (SEA) ✈ Milwaukee, WI (MKE): August 2022 [RT]<br />
* Kyiv, Ukraine (KBP) ✈ Frankfurt, Germany (FRA) ✈ Seattle, WA (SEA): December 2021<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD) ✈ Frankfurt, Germany (FRA) ✈ Kyiv, Ukraine (KBP): December 2021<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2021 [RT]<br />
* Memphis, TN (MEM) ✈ Atlanta, GA (ATL) ✈ Seattle, WA (SEA): June 2021<br />
* Seattle, WA (SEA) ✈ Salt Lake City, UT (SLC) ✈ Memphis, TN (MEM): June 2021<br />
* Seattle, WA (SEA) ✈ Milwaukee, WI (MKE): May 2021 [RT]<br />
* Tallinn, Estonia (TLL) ✈ Stockholm, Sweden (ARN) ✈ Newark, New Jersey (EWR) ✈ Seattle, WA (SEA): January 2020<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD) ✈ København, DK (CPH) ✈ Helsinki, Finland (HEL) ✈ Tallinn, Estonia (TLL): December 2019<br />
* Seattle, WA (SEA) ✈ San Francisco, CA (SFO): October 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Miami, FL (MIA): September 2019 [RT]<br />
* Seattle, WA (SEA) ✈ San Francisco, CA (SFO): September 2019 [RT]<br />
* Seattle, WA (SEA) ✈ San Jose, CA (SJC): August 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Denver, CO (DEN): May 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Charlotte, NC (CLT): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Santa Ana, CA (SNA): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ San Jose, CA (SJC): September 2018 [RT]<br />
* Budapest, Hungary (BUD) ✈ Brussels, Belgium (BRU) ✈ Newark, New Jersey (EWR) ✈ Seattle, WA (SEA): July 2018<br />
* Seattle, WA (SEA) ✈ Toronto, Canada (YYZ) ✈ Budapest, Hungary (BUD): June 2018<br />
* Seattle, WA (SEA) ✈ Reno, NV (RNO): May 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Reykjavík, Iceland (KEF): December 2017 [RT]<br />
* Seattle, WA (SEA) ✈ Kona, Hawaii (KOA): September 2017 [RT]<br />
* Seattle, WA (SEA) ✈ Salt Lake City, UT (SLC): August 2017 [RT]<br />
* Amsterdam, NL (AMS) ✈ Seattle, WA (SEA): November 2016<br />
* Lisbon, Portugal ✈ Amsterdam, NL (AMS): November 2016<br />
* Paris, FR (CDG) ✈ Lisbon, Portugal: November 2016<br />
* Seattle, WA (SEA) ✈ Paris, FR (CDG): November 2016<br />
* Seattle, WA (SEA) ✈ Portland, OR (PDX): November 2016 [RT]<br />
* Seattle, WA (SEA) ✈ Las Vegas, NV (LAS): June 2016 [RT]<br />
* Houston, TX (IAH) ✈ Seattle, WA (SEA): September 2015 [RT]<br />
* Houston, TX (IAH) ✈ San Francisco, CA (SFO): August 2015 [RT]<br />
* Houston, TX (IAH) ✈ Madison, WI (MSN): March 2015 [RT]<br />
* Houston, TX (IAH) ✈ Amsterdam, NL (AMS): March 2015 [RT]<br />
* Seattle, WA (SEA) ✈ Milwaukee (MKE): June 2011<br />
* Seattle, WA (SEA) ✈ Phoenix, AZ (PHX) ✈ Chicago, IL (ORD): October 2010 [RT]<br />
* Seattle, WA (SEA) ✈ Los Angeles, CA (LAX): December 2007 [RT]<br />
* København, DK (CPH) ✈ Seattle, WA (SEA): June 2006<br />
* Heathrow, UK ✈ København, DK (CPH): June 2006<br />
* Nice, FR ✈ Heathrow, UK: June 2006<br />
* København, DK (CPH) ✈ Nice, FR (NCE): February 2006<br />
* Washington Dulles ✈ København, DK: August 2005<br />
* Pittsburgh, PA (PIT) ✈ Washington Dulles: August 2005<br />
* Portland, OR (PDX) ✈ Pittsburgh, PA (PIT): Summer 2004 [RT]<br />
* Eugene, OR ✈ Houston, TX (IAH): February 2002 [RT]<br />
* Portland, OR (PDX) ✈ Boston, MA: December 2002 [RT]<br />
* Seattle, WA (SEA) ✈ Portland, OR (PDX): January 2000<br />
* Amsterdam, NL (AMS) ✈ Seattle, WA (SEA): January 2000<br />
* Dublin, Ireland ✈ Amsterdam, NL (AMS): January 2000<br />
* Amsterdam (AMS) ✈ Dublin, Ireland: December 1999<br />
* Seattle, WA (SEA) ✈ Amsterdam, NL (AMS): December 1999<br />
* Portland, OR (PDX) ✈ Seattle, WA (SEA): December 1999<br />
* Chicago (ORD) ✈ Los Angeles (LAX): December 1997<br />
* Green Bay, WI (GRB) ✈ Chicago (ORD): December 1997<br />
* Chicago (ORD) ✈ Green Bay, WI (GRB): December 1997<br />
* Rome, Italy (FCO) ✈ Chicago, IL (ORD): December 1997<br />
* Trieste, Italy (TRS) ✈ Rome, Italy (FCO): December 1997<br />
* Houston, TX (IAH) ✈ Budapest, Hungary (BUD): July 1996<br />
* Narita, Japan ✈ Los Angeles (LAX) ✈ Houston, TX: June 1996<br />
* Narita, Japan ✈ Los Angeles (LAX) ✈ Houston, TX: March 1996 [RT]<br />
* Narita, Japan ✈ Taipei, Taiwan: December 1995 [RT]<br />
* Los Angeles, CA (LAX) ✈ Narita, Japan: October 1995<br />
* Houston, TX (IAH) ✈ Los Angeles (LAX): October 1995<br />
* Budapest, Hungary (BUD) ✈ Houston, TX (IAH): September 1995<br />
* Budapest, Hungary (BUD) ✈ Houston, TX (IAH): May 1995 [RT]<br />
* Paris, FR (CDG) ✈ Vienna, Austria: September 1993<br />
* Quito, Ecuador ✈ Caracas, Venezuela (CCS) ✈ Paris, France: 1993<br />
* Lima, Peru ✈ Tumbes, Peru: December 1992<br />
* Boston, MA ✈ Miami, FL ✈ Lima, Peru: <br />
* Amsterdam, NL (AMS) ✈ Chicago, IL (ORD): <br />
* Boston, MA ✈ Amsterdam, NL (AMS):<br />
<br />
== Individual Places ==<br />
=== Ireland ===<br />
* Dublin<br />
** '''Dublin''' (Baile &Aacute;tha Cliath)<br />
* Kildare<br />
** Naas<br />
* Laois<br />
* Carlow<br />
** Carlow (Ceatharlach)<br />
** Royal Oak<br />
* Kilkenny<br />
** '''Kilkenny''' (Cill Chainnigh)<br />
** Callan<br />
* Tipperary<br />
** Glenbower<br />
** Clonmel (Cluain Meala)<br />
** Cahir<br />
** Burncourt<br />
* Cork<br />
** Fermoy<br />
** '''Cork''' (Corcaigh)<br />
** Fota<br />
** Cobh (An C&oacute;bh)<br />
** '''Blarney'''<br />
** Macroom<br />
** Ballyvourney<br />
* Kerry<br />
** ''Derrynasaggart Mts''<br />
** Poulgorm Br<br />
** '''Killarney''' (Cill Airne)<br />
** Farranfore<br />
* Limerick<br />
** Abbeyfeale<br />
** ''Mullaghareirk Mts''<br />
** Newcastle West<br />
** Croagh<br />
** '''Limerick''' (Luimneach)<br />
* Clare<br />
** Bunratty<br />
** Ennis (Inis)<br />
** Ennistymon<br />
** Liscannor<br />
** ''Cliffs of Moher''<br />
** Doolin<br />
** Lisdoonvarna<br />
** Ballyvaughan<br />
** Bealaclugga<br />
** Burren<br />
* Galway<br />
** Kinvarra<br />
** Ballinderreen<br />
** Oranmore<br />
** '''Galway''' (Gaillimh)<br />
** Claregalway<br />
** Tuam<br />
* Mayo<br />
** Claremorris<br />
** Cloonfallagh<br />
** Charlestown<br />
* Sligo<br />
** Curry<br />
** Tubbercurry<br />
** Collooney<br />
** '''Sligo''' (Sligeach)<br />
** ''Dartry Mts''<br />
* Leitrim<br />
* Donegal<br />
** Bundoran<br />
** Ballyshannon<br />
** Donegal (D&uacute;n na nGall)<br />
** Ballybofey<br />
** Clady<br />
* Tyrone<br />
** '''Strabane''' (Northern Ireland)<br />
* Londonderry<br />
** Derry (Londonderry)<br />
** Eglinton<br />
** Ballykelly<br />
** Limavady<br />
** Coleraine<br />
* Antrim<br />
** Derrykelghan<br />
** Moss-side<br />
** Ballycastle<br />
** ''Antrim Hills''<br />
** Ballintoy<br />
** ''Carrick-a-Rede Rope Bridge''<br />
** ''Giant's Causeway''<br />
** Craignamaddy<br />
** Ballymoney<br />
** Ballymena<br />
** Antrim<br />
** ''Lough Neagh'' (lake)<br />
** Dunadry<br />
** Newtownabbey<br />
** '''Belfast'''<br />
* Down<br />
** Lisburn<br />
** Banbridge<br />
* Armagh<br />
** Newry<br />
* Louth<br />
** Dundalk (D&uacute;n Dealgan)<br />
** Dunleen<br />
** Drogheda (Droichead &Aacute;tha)<br />
* Meath<br />
** Julianstown<br />
* Dublin<br />
** Balbriggan<br />
** Swords<br />
<br />
[[Category:World Travels]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Bash&diff=8274Bash2023-05-09T19:04:58Z<p>Christoph: /* Christoph's Additions */</p>
<hr />
<div>{{lowercase|title=bash}}<br />
<br />
'''Bash''' is a [[Linux]] command shell written for the GNU project. Its name is an acronym for '''''B'''ourne-'''a'''gain '''sh'''ell''&mdash;a pun on the Bourne shell (sh), which was an early, important Unix shell.<br />
<br />
Bash is the default shell on most Linux systems and was previously my favourite shell. After nearly 14 years of using the bash shell as my primary shell, I switched to [[zsh|Z shell]] (zsh). It is awesome and extremely powerful!<br />
<br />
see: [[Bash/scripts]] for examples<br />
<br />
== Bash builtins ==<br />
A shell builtin is a command or function, called from a shell, that is executed directly in the shell itself, rather than by loading and executing an external program.<br />
<br />
Shell builtins work significantly faster than external programs, because there is no program loading overhead. However, their code is inherently present in the shell, and thus modifying or updating them requires modifications to the shell. Therefore shell builtins are usually used for simple, almost trivial, functions, such as text output. Because of the nature of Linux, some functions of the operating system have to be implemented as shell builtins. The most notable example is cd, which changes the working directory of the shell. Because each executable program runs in a separate process, and working directories are specific to each process, loading cd as an external program would not affect the working directory of the shell that loaded it.<br />
<br />
bash, :, ., [, alias, bg, bind, break, builtin, cd, command, compgen,<br />
complete, continue, declare, dirs, disown, echo, enable, eval, exec,<br />
exit, export, fc, fg, getopts, hash, help, history, jobs, kill, let,<br />
local, logout, popd, printf, pushd, pwd, read, readonly, return, set,<br />
shift, shopt, source, suspend, test, times, trap, type, typeset,<br />
ulimit, umask, unalias, unset, wait<br />
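<br />
For example, the <tt>type</tt> builtin reports whether a name resolves to a builtin or to an external program (a quick check; the exact paths will vary by system):<br />
<br />
 $ type cd<br />
 cd is a shell builtin<br />
 $ type ls<br />
 ls is /bin/ls<br />
 $ command -V echo<br />
 echo is a shell builtin<br />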
<br />
== Bash shell shortcuts ==<br />
<br />
=== CTRL Key Bound ===<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="4" bgcolor="#EFEFEF" | '''Basic commands'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Command<br />
!Description<br />
|- align="left"<br />
|'''Ctrl + a''' || Jump to the start of the line<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + b''' || Move back a char<br />
|- align="left"<br />
|'''Ctrl + c''' || Terminate the command<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + d''' || Delete from under the cursor<br />
|- align="left"<br />
|'''Ctrl + e''' || Jump to the end of the line<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + f''' || Move forward a char<br />
|- align="left"<br />
|'''Ctrl + h''' || Backspace<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + k''' || Delete to EOL<br />
|- align="left"<br />
|'''Ctrl + l''' || Clear the screen<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + n''' || Next command line (useful for "scrolling" with Ctrl + p)<br />
|- align="left"<br />
|'''Ctrl + p''' || Previous command line<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + r''' || Search the history backwards<br />
|- align="left"<br />
|'''Ctrl + r''' (again) || While searching, press again to step to the next older match<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + u''' || Delete backward from cursor<br />
|- align="left"<br />
|'''Ctrl + xx''' || Toggle between the start of the line and the current cursor position<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + x @''' || Show possible hostname completions<br />
|- align="left"<br />
|'''Ctrl + w''' || Delete the word to the left of the cursor<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + z''' || Suspend / Stop the command<br />
|- align="left"<br />
|'''Ctrl + /''' || Undo last command-line edit<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
=== ALT Key Bound ===<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="4" bgcolor="#EFEFEF" | '''Basic commands'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Command<br />
!Description<br />
|- align="left"<br />
|'''Alt + <''' || Move to the first line in the history<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + >''' || Move to the last line in the history<br />
|- align="left"<br />
|'''Alt + ?''' || Show current completion list<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + *''' || Insert all possible completions<br />
|- align="left"<br />
|'''Alt + /''' || Attempt to complete filename<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + .''' || Yank last argument to previous command<br />
|- align="left"<br />
|'''Alt + b''' || Move backward a word<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + c''' || Capitalize the word<br />
|- align="left"<br />
|'''Alt + d''' || Delete word<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + f''' || Move forward a word<br />
|- align="left"<br />
|'''Alt + l''' || Make word lowercase<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + n''' || Search the history forwards non-incremental<br />
|- align="left"<br />
|'''Alt + p''' || Search the history backwards non-incremental<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + r''' || Revert the line (undo all changes made to it)<br />
|- align="left"<br />
|'''Alt + t''' || Transpose the current word with the previous one<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + u''' || Make word uppercase<br />
|- align="left"<br />
|'''Alt + back-space''' || Delete backward from cursor<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
=== Case Transformation ===<br />
<br />
; <code>Esc C</code> : Converts the character under the cursor to upper case.<br />
; <code>Esc U</code> : Converts the text from the cursor to the end of the word to uppercase.<br />
; <code>Esc L</code> : Converts the text from the cursor to the end of the word to lowercase.<br />
<br />
=== More Special Keybindings ===<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="4" bgcolor="#EFEFEF" | '''Basic commands'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Command<br />
!Description<br />
|- align="left"<br />
|'''$ 2T''' || All available commands (common)<br />
|--bgcolor="#eeeeee"<br />
|'''$ (string)2T''' || All available commands starting with (string)<br />
|- align="left"<br />
|'''$ /2T''' || Entire directory structure, including hidden entries<br />
|--bgcolor="#eeeeee"<br />
|'''$ ./2T''' || Only subdirectories of the current directory, including hidden ones<br />
|- align="left"<br />
|'''$ *2T''' || Only subdirectories of the current directory, excluding hidden ones<br />
|--bgcolor="#eeeeee"<br />
|'''$ ~2T''' || All users on the system, from "<tt>/etc/passwd</tt>"<br />
|- align="left"<br />
|'''$ $2T''' || All shell variables<br />
|--bgcolor="#eeeeee"<br />
|'''$ @2T''' || Entries from "<tt>/etc/hosts</tt>"<br />
|- align="left"<br />
|'''$ =2T''' || Output like <tt>ls</tt> or <tt>dir</tt><br />
|}<br />
<div style="float:center">''Note: Here "2T" means Press TAB twice''</div><br />
</div><br />
<br clear="all"/><br />
<br />
===Bash Bang (!) commands===<br />
<br />
Re-run all or part of a previous command.<br />
<br />
!! Run the last command again<br />
!foo Run the most recent command that starts with 'foo' (e.g. !ls)<br />
 !foo:p Print out the command that !foo would run<br />
 (and also add it to the command history)<br />
!$ Run the last word of the previous command (same as Alt + .)<br />
!$:p Print out the word that !$ would substitute<br />
 !* Run the previous command except for the first word<br />
 !*:p Print out the previous command except for the first word<br />
^foo^bar Run the previous command replacing foo with bar<br />
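<br />
A short interactive transcript (a sketch; history expansion echoes the expanded command before running it):<br />
<br />
 $ echo foo bar baz<br />
 foo bar baz<br />
 $ echo !$<br />
 echo baz<br />
 baz<br />
 $ ^baz^qux<br />
 echo qux<br />
 qux<br />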
<br />
== Bash syntax highlights ==<br />
Bash's command syntax is a superset of the Bourne shell's command syntax. The definitive specification of Bash's command syntax is the [http://www.gnu.org/software/bash/manual/bashref.html Bash Reference Manual] distributed by the GNU project. This section highlights some of Bash's unique syntax features.<br />
<br />
The vast majority of Bourne shell scripts can be executed without alteration by Bash, with the exception of those Bourne shell scripts that happen to reference a Bourne special variable or to use a Bourne builtin command. The Bash command syntax includes ideas drawn from the [[Korn shell]] (ksh) and the [[C shell]] (csh), such as command-line editing, command history, the directory stack, the <tt>$RANDOM</tt> and <tt>$PPID</tt> variables, and [[POSIX]] command substitution syntax: <tt>$(...)</tt>. When being used as an interactive command shell, Bash supports completion of partly typed-in program names, filenames, variable names, etc. when the user presses the TAB key.<br />
<br />
Bash syntax has many extensions that the Bourne shell lacks. Several of those extensions are enumerated here.<br />
<br />
===Integer mathematics===<br />
A major limitation of the Bourne shell is that it cannot perform integer calculations without spawning an external process. Bash can perform in-process integer calculations using the <tt>((...))</tt> command and the <tt>$[...]</tt> variable syntax (now deprecated in favour of <tt>$((...))</tt>), as follows:<br />
<br />
VAR=55 # Assign integer 55 to variable VAR.<br />
((VAR = VAR + 1)) # Add one to variable VAR. Note the absence of the '$' character.<br />
((++VAR)) # Another way to add one to VAR. Performs C-style pre-increment.<br />
((VAR++)) # Another way to add one to VAR. Performs C-style post-increment.<br />
echo $[VAR * 22] # Multiply VAR by 22 and substitute the result into the command.<br />
echo $((VAR * 22)) # Another way to do the above.<br />
<br />
The <tt>((...))</tt> command can also be used in conditional statements, because its [[exit status]] is 0 or 1 depending on whether the condition is true or false:<br />
<br />
if ((VAR == Y * 3 + X * 2)); then<br />
echo Yes<br />
fi<br />
<br />
((Z > 23)) && echo Yes<br />
<br />
The <tt>((...))</tt> command supports the following [[relational operator]]s: '<tt>==</tt>', '<tt>!=</tt>', '<tt>&gt;</tt>', '<tt>&lt;</tt>', '<tt>&gt;=</tt>', and '<tt>&lt;=</tt>'.<br />
<br />
Bash cannot perform in-process [[floating point]] calculations. The only Unix command shells capable of this are [[Korn Shell]] (1993 version) and [[zsh]] (starting at version 4.0).<br />
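<br />
A common workaround is to hand floating-point work to an external tool such as <tt>bc</tt> or <tt>awk</tt>; a minimal sketch:<br />
<br />
 $ echo "scale=4; 10 / 3" | bc<br />
 3.3333<br />
 $ awk 'BEGIN { printf "%.4f\n", 10 / 3 }'<br />
 3.3333<br />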
<br />
===I/O redirection===<br />
Bash has several I/O [[Redirection (Unix)|redirection]] syntaxes that the traditional Bourne shell lacks. Bash can redirect [[standard output]] and [[Standard streams|standard error]] at the same time using this syntax:<br />
<br />
command &> file<br />
<br />
which is simpler to type than the equivalent Bourne shell syntax, "<tt>command > file 2>&1</tt>". Bash, since version 2.05b, can redirect standard input from a string using the following syntax (sometimes called "here strings"):<br />
<br />
command <<< "string to be read as standard input"<br />
<br />
If the string contains [[whitespace]], it must be quoted. <br />
<br />
'''Example''':<br />
Redirect standard output to a file, write data, close file, reset stdout<br />
<br />
 # make file descriptor (FD) 6 a copy of stdout (FD 1)<br />
exec 6>&1<br />
# open file "test.data" for writing<br />
exec 1>test.data<br />
# produce some content<br />
echo "data:data:data"<br />
# close file "test.data"<br />
exec 1>&-<br />
# make stdout a copy of FD 6 (reset stdout)<br />
exec 1>&6<br />
# close FD6<br />
exec 6>&-<br />
<br />
Open and close files<br />
<br />
# open file test.data for reading<br />
exec 6<test.data<br />
# read until end of file<br />
while read -u 6 dta<br />
do<br />
echo "$dta" <br />
done<br />
# close file test.data<br />
exec 6<&-<br />
<br />
Catch output of external commands<br />
<br />
# execute 'find' and store results in VAR<br />
# search for filenames which end with the letter "h"<br />
VAR=$(find . -name "*h")<br />
<br />
====EOF====<br />
The <code>`cat <<EOF`</code> Bash syntax is very useful when one needs to work with multi-line strings in Bash (e.g., when passing a multi-line string to a variable, file, or a piped command).<br />
<br />
* Pass a multiline string to a variable:<br />
<br />
$ sql=$(cat <<EOF<br />
SELECT foo, bar FROM db<br />
WHERE foo='baz'<br />
EOF<br />
)<br />
<br />
The <code>$sql</code> variable now holds the newlines as well; you can check it with <code>`echo "$sql"`</code>:<br />
 SELECT foo, bar FROM db<br />
 WHERE foo='baz'<br />
<br />
* Pass a multiline string to a file:<br />
<br />
$ cat <<EOF > print.sh<br />
#!/bin/bash<br />
echo \$PWD<br />
echo $PWD<br />
EOF<br />
<br />
The print.sh file now contains (assuming the heredoc was written from <code>/home/user</code>):<br />
<br />
#!/bin/bash<br />
echo $PWD<br />
echo /home/user<br />
<br />
* Pass a multiline string to a command/pipe:<br />
<br />
$ cat <<EOF | grep 'b' | tee b.txt | grep 'r'<br />
foo<br />
bar<br />
baz<br />
EOF<br />
<br />
This creates the <code>b.txt</code> file containing both the <code>bar</code> and <code>baz</code> lines, but prints only <code>bar</code> (the only line matching both greps).<br />
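<br />
Two related heredoc variations are worth knowing: quoting the delimiter (<code><<'EOF'</code>) suppresses all expansion in the body, and <code><<-EOF</code> strips leading tab characters so the body can be indented. A minimal sketch of the quoted form:<br />
<br />
 $ cat <<'EOF'<br />
 $PWD is printed literally<br />
 EOF<br />
 $PWD is printed literally<br />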
<br />
===In-process regular expressions===<br />
Bash 3.0 supports in-process [[regular expression]] matching using the following syntax, reminiscent of [[Perl]]:<br />
<br />
<nowiki>[[ string =~ regex ]]</nowiki><br />
<br />
The regular expression syntax is the same as that documented by the regex(3) [[man page]]. The exit status of the above command is 0 if the regex matches the string, 1 if it does not match. Parenthesized subexpressions in the regular expression can be accessed using the shell variable <tt>BASH_REMATCH</tt>, as follows:<br />
<br />
if <nowiki>[[ abcfoobarbletch =~ 'foo(bar)bl(.*)' ]]</nowiki>; then<br />
echo The regex matches!<br />
echo $BASH_REMATCH -- outputs: foobarbletch<br />
echo ${BASH_REMATCH[1]} -- outputs: bar<br />
echo ${BASH_REMATCH[2]} -- outputs: etch<br />
fi<br />
<br />
This syntax gives performance superior to spawning a separate process to run a <tt>[[grep]]</tt> command, because the regular expression matching takes place within the Bash process. If the string contains whitespace, it should be quoted; note, however, that since Bash 3.2 a ''quoted'' regular expression is matched as a literal string, so keep the pattern unquoted or store it in a variable and match with <tt><nowiki>[[ $string =~ $pattern ]]</nowiki></tt>.<br />
<br />
===Backslash escapes===<br />
Words of the form <tt>$'string'</tt> are treated specially. The word expands to <tt>string</tt>, with backslash-escaped characters replaced as specified by the [[C programming language]]. Backslash escape sequences, if present, are decoded as follows:<br />
<br />
{| class="wikitable"<br />
|+ <big>Backslash Escapes</big><br />
|- <br />
! Backslash<br>Escape !! Expands To ...<br />
|-<br />
| align="center" | <tt>\a</tt> || An alert (bell) character<br />
|-<br />
| align="center" | <tt>\b</tt> || A backspace character<br />
|-<br />
| align="center" | <tt>\e</tt> || An escape character<br />
|-<br />
| align="center" | <tt>\f</tt> || A form feed character<br />
|-<br />
| align="center" | <tt>\n</tt> || A new line character<br />
|-<br />
| align="center" | <tt>\r</tt> || A carriage return character<br />
|-<br />
| align="center" | <tt>\t</tt> || A horizontal tab character<br />
|-<br />
| align="center" | <tt>\v</tt> || A vertical tab character<br />
|-<br />
| align="center" | <tt>\\</tt> || A backslash character<br />
|-<br />
| align="center" | <tt>\'</tt> || A single quote character<br />
|-<br />
| align="center" | <tt>\nnn</tt> || The eight-bit character whose value is the octal value nnn (one to three digits)<br />
|-<br />
| align="center" | <tt>\xHH</tt> || The eight-bit character whose value is the hexadecimal value HH (one or two hex digits)<br />
|-<br />
| align="center" | <tt>\cx</tt> || A control-X character<br />
|}<br />
<br />
The expanded result is single-quoted, as if the dollar sign had not been present.<br />
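<br />
For example:<br />
<br />
 $ echo $'col1\tcol2\nrow2'<br />
 col1    col2<br />
 row2<br />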
<br />
A double-quoted string preceded by a dollar sign (<tt>$"..."</tt>) will cause the string to be translated according to the current locale. If the current locale is C or POSIX, the dollar sign is ignored. If the string is translated and replaced, the replacement is double-quoted.<br />
<br />
The full list of the related, but distinct, prompt (<tt>PS1</tt>) backslash escape sequences is as follows:<br />
<pre><br />
# \a an ASCII bell character (07)<br />
# \d the date in "Weekday Month Date" format (e.g., "Tue May 26")<br />
# \D{format} the format is passed to strftime(3) and the result is inserted into the prompt string;<br />
# \e an ASCII escape character (033)<br />
# \h the hostname up to the first `.'<br />
# \H the hostname<br />
# \j the number of jobs currently managed by the shell<br />
# \l the basename of the shell's terminal device name<br />
# \n newline<br />
# \r carriage return<br />
# \s the name of the shell, the basename of $0 (the portion following the final slash)<br />
# \t the current time in 24-hour HH:MM:SS format<br />
# \T the current time in 12-hour HH:MM:SS format<br />
# \@ the current time in 12-hour am/pm format<br />
# \A the current time in 24-hour HH:MM format<br />
# \u the username of the current user<br />
# \v the version of bash (e.g., 2.00)<br />
# \V the release of bash, version + patchlevel (e.g., 2.00.0)<br />
# \w the current working directory<br />
# \W the basename of the current working directory<br />
# \! the history number of this command<br />
# \# the command number of this command<br />
# \$ if the effective UID is 0, a #, otherwise a $<br />
# \nnn the character corresponding to the octal number nnn<br />
# \\ a backslash<br />
# \[ begin a sequence of non-printing characters, which could be used to embed a terminal control sequence into the prompt<br />
# \] end a sequence of non-printing characters<br />
</pre><br />
<br />
===Variables===<br />
When variables are used they are referred to with the <code>$</code> symbol in front of them. There are several useful variables available in the shell program. Here are a few:<br />
<br />
*<code>$$</code> = The PID number of the process executing the shell.<br />
*<code>$?</code> = Exit status variable.<br />
*<code>$0</code> = The name of the command you used to call a program.<br />
*<code>$1</code> = The first argument on the command line.<br />
*<code>$2</code> = The second argument on the command line.<br />
*<code>$n</code> = The nth argument on the command line.<br />
*<code>$*</code> = All the arguments on the command line.<br />
*<code>$#</code> = The number of command line arguments. <br />
<br />
The "shift" command can be used to shift command line arguments to the left, i.e., <code>$1</code> becomes the value of <code>$2</code>, <code>$3 </code>shifts into <code>$2</code>, etc. The command, "shift 2" will shift 2 places meaning the new value of <code>$1</code> will be the old value of <code>$3</code> and so forth.<br />
<br />
* Print specific characters stored in a variable:<br />
# The syntax is ${variable:start:length}. Omitting "length" value gives rest of string.<br />
$ val="hello"<br />
$ echo ${val:0:1} # Print out the first character of $val ("h" in this example)<br />
<br />
===Tests===<br />
There is a builtin command provided by bash called <code>test</code>, which returns a true or false value depending on the result of the tested expression (see: [[wikipedia:test (Unix)]] for more details). Its syntax is:<br />
 test expression<br />
It can also be written in its equivalent bracket form:<br />
 [ expression ]<br />
<br />
The tests below are test conditions provided by the shell:<br />
*<code>-b file</code> = True if the file exists and is block special file.<br />
*<code>-c file</code> = True if the file exists and is character special file.<br />
*<code>-d file</code> = True if the file exists and is a directory.<br />
*<code>-e file</code> = True if the file exists.<br />
*<code>-f file</code> = True if the file exists and is a regular file.<br />
*<code>-g file</code> = True if the file exists and its set-group-id bit is set.<br />
*<code>-k file</code> = True if the file's "sticky" bit is set.<br />
*<code>-L file</code> = True if the file exists and is a symbolic link.<br />
*<code>-p file</code> = True if the file exists and is a named pipe.<br />
*<code>-r file</code> = True if the file exists and is readable.<br />
*<code>-s file</code> = True if the file exists and its size is greater than zero.<br />
*<code>-S file</code> = True if the file exists and is a socket.<br />
*<code>-t fd</code> = True if the file descriptor is opened on a terminal.<br />
*<code>-u file</code> = True if the file exists and its set-user-id bit is set.<br />
*<code>-w file</code> = True if the file exists and is writable.<br />
*<code>-x file</code> = True if the file exists and is executable.<br />
*<code>-O file</code> = True if the file exists and is owned by the effective user id.<br />
*<code>-G file</code> = True if the file exists and is owned by the effective group id.<br />
*<code>file1 -nt file2</code> = True if file1 is newer, by modification date, than file2.<br />
*<code>file1 -ot file2</code> = True if file1 is older than file2.<br />
*<code>file1 -ef file2</code> = True if file1 and file2 have the same device and inode numbers.<br />
*<code>-z string</code> = True if the length of the string is 0.<br />
*<code>-n string</code> = True if the length of the string is non-zero.<br />
*<code>string1 = string2</code> = True if the strings are equal.<br />
*<code>string1 != string2</code> = True if the strings are not equal.<br />
*<code>!expr</code> = True if the expr evaluates to false.<br />
*<code>expr1 -a expr2</code> = True if both expr1 and expr2 are true.<br />
*<code>expr1 -o expr2</code> = True if either expr1 or expr2 is true. <br />
<br />
For integer comparisons, the syntax is:<br />
arg1 OP arg2<br />
<br />
where OP is one of <code>-eq, -ne, -lt, -le, -gt, or -ge</code>. Arg1 and arg2 may be positive or negative integers or the special expression "<code>-l string</code>" which evaluates to the length of string.<br />
<br />
*Examples:<br />
if [ ! -e foo ]; then echo "NO FILE"; else cat foo; fi<br />
if [ -d "/home/bob" -a ! -d "/home/alice" ]; then echo "Bob exists, but not Alice"; fi<br />
<br />
===Colours in bash===<br />
Black 0;30 Dark Gray 1;30<br />
Blue 0;34 Light Blue 1;34<br />
Green 0;32 Light Green 1;32<br />
Cyan 0;36 Light Cyan 1;36<br />
Red 0;31 Light Red 1;31<br />
Purple 0;35 Light Purple 1;35<br />
Brown 0;33 Yellow 1;33<br />
Light Gray 0;37 White 1;37<br />
<br />
Here is an example borrowed from the Bash-Prompt-HOWTO:<br />
<br />
PS1="\[\033[1;34m\][\$(date +%H%M)][\u@\h:\w]$\[\033[0m\] " <br />
<br />
This turns the text blue, displays the time in brackets (very useful for not losing track of time while working), and displays the user name, host, and current directory enclosed in brackets. The "<code>\[\033[0m\]</code>" following the $ returns the colour to the previous foreground colour.<br />
<br />
*Another example:<br />
PS1="\[\033[1;30m\][\[\033[1;34m\]\u\[\033[1;30m\]@\[\033[0;35m\]\h\[\033[1;30m\]] \[\033[0;37m\]\W \[\033[1;30m\]\$\[\033[0m\] " <br />
yields:<br />
[user@host] directory $<br />
<br />
Break down:<br />
<br />
 \[\033[1;30m\] - Sets the colour for the characters that follow it.<br />
 Here 1;30 will set them to Dark Gray.<br />
 \u \h \W \$ - From the table above<br />
 \[\033[0m\] - Sets the colours back to how they were originally.<br />
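<br />
To keep long prompt strings readable, the escape sequences can be stored in variables first (a sketch using colour codes from the table above):<br />
<br />
 BLUE='\[\033[1;34m\]'<br />
 GRAY='\[\033[1;30m\]'<br />
 RESET='\[\033[0m\]'<br />
 # "\\\$" leaves a literal \$ in PS1, so root sees "#" and others see "$"<br />
 PS1="${GRAY}[${BLUE}\u${GRAY}@\h] \W \\\$${RESET} "<br />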
<br />
==Bash startup scripts==<br />
When Bash starts, it executes the commands in a variety of different scripts.<br />
<br />
When Bash is invoked as an interactive login shell, or as a non-interactive shell with the <tt>--login</tt> option, it first reads and executes commands from the file <tt>/etc/profile</tt>, if that file exists. After reading that file, it looks for <tt>~/.bash_profile</tt>, <tt>~/.bash_login</tt>, and <tt>~/.profile</tt>, in that order, and reads and executes commands from the first one that exists and is readable. The <tt>--noprofile</tt> option may be used when the shell is started to inhibit this behavior.<br />
<br />
When a login shell exits, Bash reads and executes commands from the file <tt>~/.bash_logout</tt>, if it exists.<br />
<br />
When an interactive shell that is not a login shell is started, Bash reads and executes commands from <tt>~/.bashrc</tt>, if that file exists. This may be inhibited by using the <tt>--norc</tt> option. The <tt>--rcfile file</tt> option will force Bash to read and execute commands from <tt>file</tt> instead of <tt>~/.bashrc</tt>.<br />
<br />
When Bash is started non-interactively, to run a shell script, for example, it looks for the variable <tt>BASH_ENV</tt> in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:<br />
<br />
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi<br />
<br />
but the value of the <tt>PATH</tt> variable is not used to search for the file name.<br />
<br />
If Bash is invoked with the name <tt>sh</tt>, it tries to mimic the startup behavior of historical versions of <tt>sh</tt> as closely as possible, while conforming to the [[POSIX]] standard as well. When invoked as an interactive login shell, or a non-interactive shell with the <tt>--login</tt> option, it first attempts to read and execute commands from <tt>/etc/profile</tt> and <tt>~/.profile</tt>, in that order. The <tt>--noprofile</tt> option may be used to inhibit this behavior. When invoked as an interactive shell with the name <tt>sh</tt>, Bash looks for the variable <tt>ENV</tt>, expands its value if it is defined, and uses the expanded value as the name of a file to read and execute. Since a shell invoked as <tt>sh</tt> does not attempt to read and execute commands from any other startup files, the <tt>--rcfile</tt> option has no effect. A non-interactive shell invoked with the name <tt>sh</tt> does not attempt to read any other startup files. When invoked as <tt>sh</tt>, Bash enters ''posix'' mode after the startup files are read.<br />
<br />
When Bash is started in posix mode, as with the <tt>--posix</tt> command line option, it follows the POSIX standard for startup files. In this mode, interactive shells expand the <tt>ENV</tt> variable and commands are read and executed from the file whose name is the expanded value. No other startup files are read.<br />
<br />
Bash attempts to determine when it is being run by the remote shell daemon, usually <tt>rshd</tt>. If Bash determines it is being run by <tt>rshd</tt>, it reads and executes commands from <tt>~/.bashrc</tt>, if that file exists and is readable. It will not do this if invoked as <tt>sh</tt>. The <tt>--norc</tt> option may be used to inhibit this behavior, and the <tt>--rcfile</tt> option may be used to force another file to be read, but <tt>rshd</tt> does not generally invoke the shell with those options or allow them to be specified.<br />
<br />
===Environment variables===<br />
<br />
;<code>$CDPATH</code> : does for the cd built-in what PATH does for executables. By setting this wisely, you can cut down on the number of key-strokes you enter per day.<br />
::Example<br />
$ export CDPATH=.:~:~/docs:~/src:~/src/ops/docs:/mnt:/usr/src/redhat:/usr/src/redhat/RPMS:/usr/src:/usr/lib:/usr/local:/software:/software/redhat<br />
::Using this, cd i386 would likely take you to /usr/src/redhat/RPMS/i386 on a Red Hat Linux system. Make sure that you do include . in the list or you'll find that you can't change to directories relative to your current one without prefixing them with ./ <br />
;<code>$HISTIGNORE</code> : Set this to avoid having consecutive duplicate commands and other not-so-useful information appended to the history list. This will cut down on hitting the up arrow endlessly to get to the command before the one you just entered twenty times. It will also avoid filling a large percentage of your history list with useless commands.<br />
::Example<br />
$ export HISTIGNORE="&:ls:ls *:mutt:[bf]g:exit"<br />
::Using this, consecutive duplicate commands, invocations of ls, executions of the mutt mail client without any additional parameters, plus calls to the bg, fg and exit built-ins will not be appended to the history list. <br />
;<code>$MAILPATH</code> : bash will warn you of new mail in any folder appended to MAILPATH. This is very handy if you use a tool like procmail to presort your e-mail into folders.<br />
::Try adding the following to your ~/.bash_profile to be notified when any new mail is deposited in any mailbox under ~/Mail.<br />
MAILPATH=/var/spool/mail/$USER<br />
for i in ~/Mail/[^.]*<br />
do<br />
MAILPATH=$MAILPATH:$i'?You have new mail in your ${_##*/} folder'<br />
done<br />
export MAILPATH<br />
unset i<br />
<br />
;<code>$TMOUT</code> : If you set this to a value greater than zero, bash will terminate after this number of seconds have elapsed if no input arrives. <br />
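::Example (log idle interactive shells out after 10 minutes):<br />
 $ export TMOUT=600<br />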
<br />
==shopt==<br />
Check your shell options by issuing the following command:<br />
shopt # see: help shopt<br />
A typical output should look something like the following:<br />
<pre><br />
cdable_vars off<br />
cdspell off<br />
checkhash off<br />
checkwinsize off<br />
cmdhist on<br />
dotglob off<br />
execfail off<br />
expand_aliases on<br />
extglob off<br />
histreedit off<br />
histappend off<br />
histverify off<br />
hostcomplete on<br />
huponexit off<br />
interactive_comments on<br />
lithist off<br />
login_shell on<br />
mailwarn off<br />
no_empty_cmd_completion off<br />
nocaseglob off<br />
nullglob off<br />
progcomp on<br />
promptvars on<br />
restricted_shell off<br />
shift_verbose off<br />
sourcepath on<br />
xpg_echo off<br />
</pre><br />
<br />
where each of the above mean (see <code>man bash</code> or <code>help set</code> for more info.):<br />
<br />
; cdable_vars : an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value is the directory to change to.<br />
; cdspell : minor errors in the spelling of a directory component in a cd command will be corrected. <br />
; checkhash : bash checks that a command found in the hash table exists before executing it. If it no longer exists, a normal path search is performed.<br />
; checkwinsize : bash checks the window size after each command and, if necessary, updates the values of LINES and COLUMNS.<br />
; cmdhist : bash attempts to save all lines of a multiple-line command in the same history entry. Allows re-editing of multi-line commands.<br />
; dotglob : bash includes filenames beginning with a `.' in the results of pathname expansion.<br />
; execfail : a non-interactive shell will not exit if it cannot execute the file specified as an argument to the exec builtin (an interactive shell does not exit on a failed exec in any case).<br />
; expand_aliases : aliases are expanded as described above under ALIASES. This option is enabled by default for interactive shells.<br />
; extglob : the extended pattern matching features described above under Pathname Expansion are enabled.<br />
; histreedit : if readline is being used, a user is given the opportunity to re-edit a failed history substitution.<br />
; histappend : the history list is appended to the file named by the value of the HISTFILE variable when the shell exits, rather than overwriting the file.<br />
; histverify : if readline is being used, the result of a history substitution is loaded into the editing buffer for further modification instead of being run immediately.<br />
; hostcomplete : if readline is being used, bash will attempt to perform hostname completion when a word containing a @ is being completed.<br />
; huponexit : bash will send SIGHUP to all jobs when an interactive login shell exits.<br />
; interactive_comments : allow a word beginning with # to cause that word and all remaining characters on that line to be ignored in an interactive shell.<br />
; lithist : if the cmdhist option is enabled, multi-line commands are saved to the history with embedded newlines rather than semicolon separators.<br />
; login_shell : shell sets this option if it is started as a login shell (see INVOCATION above). The value may not be changed.<br />
; mailwarn : if a file that bash is checking for mail has been accessed since it was last checked, the message ``The mail in mailfile has been read'' is displayed.<br />
; no_empty_cmd_completion : bash will not attempt to search the PATH for possible completions when completion is attempted on an empty line.<br />
; nocaseglob : bash matches filenames in a case-insensitive fashion when performing pathname expansion (see Pathname Expansion above).<br />
; nullglob : bash allows patterns which match no files (see Pathname Expansion above) to expand to a null string, rather than themselves.<br />
; progcomp : the programmable completion facilities (see Programmable Completion above) are enabled. This option is enabled by default.<br />
; promptvars : prompt strings undergo variable and parameter expansion after being expanded as described in PROMPTING above. <br />
; restricted_shell : the shell sets this option if it is started in restricted mode; the value may not be changed.<br />
; shift_verbose : the shift builtin prints an error message when the shift count exceeds the number of positional parameters.<br />
; sourcepath : the source (.) builtin uses the value of PATH to find the directory containing the file supplied as an argument.<br />
; xpg_echo : the echo builtin expands backslash-escape sequences by default.<br />
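<br />
Options are enabled with <code>shopt -s</code> and disabled with <code>shopt -u</code>; for example:<br />
<br />
 $ shopt -s histappend cdspell # enable two options<br />
 $ shopt -u dotglob # disable one<br />
 $ shopt histappend # query a single option<br />
 histappend      on<br />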
<br />
==Manual pages==<br />
<br />
A '''man page''' (short for '''manual page''') is a form of software documentation usually found on a Unix or Unix-like operating system. Topics covered include computer programs (including library and system calls), formal standards and conventions, and even abstract concepts. A user may invoke a man page by issuing the man command.<br />
<br />
The manual is generally split into eight numbered sections, organized as follows:<br />
{| class="wikitable"<br />
! Section<br />
! Description<br />
|-<br />
| 1<br />
| General commands<br />
|-<br />
| 2<br />
| System calls<br />
|-<br />
| 3<br />
| Library functions, covering, in particular, the C standard library<br />
|-<br />
| 4<br />
| Special files (usually devices, those found in <code>/dev</code>) and drivers<br />
|-<br />
| 5<br />
| File formats and conventions<br />
|-<br />
| 6<br />
| Games and screensavers<br />
|-<br />
| 7<br />
| Miscellaneous<br />
|-<br />
| 8<br />
| System administration commands and daemons<br />
|}<br />
<br clear="all"/><br />
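<br />
The section number disambiguates topics that appear in several sections; for example:<br />
<br />
 $ man 1 printf # the shell utility<br />
 $ man 3 printf # the C library function<br />
 $ whatis printf # list every section that documents "printf"<br />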
<br />
==Bash (Shellshock) vulnerability==<br />
* Run the following command from a Bash shell:<br />
$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"<br />
<br />
If you see the following output, you are '''VULNERABLE''':<br />
<br />
<div style="padding: 1em; margin: 10px; border: 2px solid #f00;"><br />
vulnerable<br />
bash: BASH_FUNC_x(): line 0: syntax error near unexpected token `)'<br />
bash: BASH_FUNC_x(): line 0: `BASH_FUNC_x() () { :;}; echo vulnerable'<br />
bash: error importing function definition for `BASH_FUNC_x'<br />
test<br />
</div><br />
<br />
If you see the following output, you are ''NOT VULNERABLE'':<br />
<div style="padding: 1em; margin: 10px; border: 2px solid #0f0;"><br />
bash: warning: x: ignoring function definition attempt<br />
bash: error importing function definition for `BASH_FUNC_x'<br />
test<br />
</div><br />
<br />
If you are vulnerable, make sure you update Bash to the latest version your Linux distribution has to offer. If you still see the same vulnerability after updating from a repository, you should probably download the latest [https://ftp.gnu.org/gnu/bash/ source code] of Bash and compile it yourself. Do '''''not''''' take this bug lightly!<br />
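<br />
On most distributions the update is a single package-manager command (a sketch; package and command names assumed to be the distribution defaults):<br />
<br />
 $ sudo apt-get update && sudo apt-get install --only-upgrade bash # Debian/Ubuntu<br />
 $ sudo yum update bash # CentOS/RHEL<br />
 $ echo $BASH_VERSION # confirm the running version afterwards<br />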
<br />
==Christoph's Additions==<br />
<pre><br />
PS1='\[\033[0;31m\][\u@christophchamp]\[\033[00m\]:\[\033[0;33m\]`pwd`\[\033[00m\]> '<br />
<br />
[christoph@christophchamp]:/home/christoph><br />
</pre><br />
<br />
<pre><br />
$ echo $BASH_VERSION<br />
4.4.20(1)-release<br />
<br />
$ echo $BASH_VERSION[0]<br />
4.4.20(1)-release[0]<br />
<br />
$ echo $OSTYPE<br />
linux-gnu<br />
</pre><br />
<br />
==References==<br />
* [[wikipedia:Bash]]<br />
* [[wikibooks:Bourne Shell Scripting]]<br />
<br />
==External links==<br />
*[http://www.gnu.org/software/bash/bash.html Bash home page]<br />
*[ftp://ftp.cwru.edu/pub/bash/FAQ Bash FAQ]<br />
*[http://groups.google.com/groups?dq=&lr=&ie=UTF-8&group=gnu.announce&selm=mailman.1865.1091019304.1960.info-gnu%40gnu.org Bash 3.0 Announcement]<br />
*[http://www.network-theory.co.uk/bash/manual/ The GNU Bash Reference Manual], ([http://www.network-theory.co.uk/docs/bashref/ HTML version]) by Chet Ramey and Brian Fox, ISBN 0954161777<br />
===Bash guides from the Linux Documentation Project===<br />
*[http://www.tldp.org/LDP/Bash-Beginners-Guide/html/ Bash Guide for Beginners]<br />
*[http://www.tldp.org/HOWTO/Bash-Prog-Intro-HOWTO.html BASH Programming - Introduction HOW-TO]<br />
*[http://en.tldp.org/LDP/abs/html/ Advanced Bash-Scripting Guide] &mdash; An in-depth exploration of the art of shell scripting<br />
*[http://tldp.org/LDP/GNU-Linux-Tools-Summary/html/text-manipulation-tools.html Text manipulation tools] &mdash; from GNU/Linux Command-Line Tools Summary<br />
===Other guides and tutorials===<br />
*[http://wooledge.org:8000/BashPitfalls Bash Pitfalls] / [http://wooledge.org:8000/BashFAQ BashFAQ]<br />
*[http://www.cyberciti.biz/nixcraft/linux/docs/uniqlinuxfeatures/lsst/ Linux Shell Scripting Tutorial - A Beginner's handbook]<br />
*[http://www.linux.ie/newusers/beginners-linux-guide/shells.php About Shells]<br />
*[http://hypexr.homelinux.org/bash_tutorial.html Beginners Bash Tutorial]<br />
*[http://deadman.org/bash.html Advancing in the Bash Shell tutorial]<br />
*[http://www.vias.org/linux-knowhow/bbg_intro_10.html Linux Know-How] including the Bash Guide for Beginners<br />
*[http://www.bashscripts.org/ BashScripts.org]<!--<br />
*[http://www-128.ibm.com/developerworks/library/l-bash.html Bash by example - Part 1]<br />
*[http://www.ibm.com/developerworks/library/l-bash2.html Bash by example - Part 2]<br />
*[http://www.ibm.com/developerworks/library/l-bash3.html Bash by example - Part 3]--><br />
*[http://www.faqs.org/docs/linux_intro/x7003.html Common features]<br />
*[http://www-128.ibm.com/developerworks/aix/library/au-satbash.html?ca=dgr-lnxw97bash Get the most out of bash]<br />
<br />
===Custom .bashrc or .bash_profile===<br />
*[http://static.askapache.com/askapache-bash-profile.txt askapache-bash-profile.txt]<br />
<br />
[[Category:Scripting languages]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Job_control&diff=8273Job control2023-04-21T04:42:20Z<p>Christoph: /* Process state codes */</p>
<hr />
<div>When using [[Linux]] via a terminal, a user will initially only have a single process running, their login shell. Most tasks (directory listing, editing files, etc.) can easily be accomplished by letting the program take control of the terminal and returning control to the shell when the program exits; however, sometimes the user will wish to carry out a lengthy task in the background while using the terminal for another purpose. '''Job control''' is a facility developed to make this possible, by allowing the user to start programs in the '''background''', send programs into the background, bring background programs into the '''foreground''', and start and stop running programs. Programs under the influence of a job control facility are referred to as '''jobs'''.<br />
<br />
== Implementation ==<br />
* Typically, the shell keeps a list of jobs in a '''job table'''. A job consists of all the members of a [[pipeline]].<br />
* A program can be started as a background task by appending <tt>&</tt><ref>This section uses [[Bash]] syntax; other shells offer similar functionality under other names.</ref> to the command line; its output is directed to the terminal (potentially interleaved with other programs' output) but it cannot read from the terminal input. <br />
* A task running in the foreground can be stopped by typing the suspend character (Ctrl+Z); this sends [[SIGTSTP]] to the process and returns control to the shell. <br />
* The process can be resumed as a background job with the <tt>bg</tt> [[Job control#Job Control Builtins|builtin]] or as the foreground job with <tt>fg</tt>; in either case the shell redirects I/O appropriately and sends [[SIGCONT]] to the process. <br />
* <tt>jobs</tt> will list the background jobs existing in the job table, along with their job number and job state (stopped or running).<br />
* The <tt>kill</tt> builtin (''not'' <code>/bin/kill</code>) can signal processes by job number as well as by process ID.<br />
* <tt>disown</tt> can be used to remove jobs from the job table, converting them from jobs into daemons so that they continue executing when the user logs out.<br />
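<br />
A typical interactive session exercising these facilities (a sketch; job and process IDs will differ):<br />
<br />
 $ sleep 100 & # start a background job<br />
 [1] 4201<br />
 $ sleep 200 # start a foreground job ...<br />
 ^Z # ... then suspend it<br />
 [2]+  Stopped                 sleep 200<br />
 $ jobs<br />
 [1]-  Running                 sleep 100 &<br />
 [2]+  Stopped                 sleep 200<br />
 $ bg %2 # resume job 2 in the background<br />
 $ fg %1 # bring job 1 into the foreground<br />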
<br />
=== Job control basics ===<br />
<br />
Job control refers to the ability to selectively stop (suspend) the execution of processes and continue (resume) their execution at a later point. A user typically employs this facility via an interactive interface supplied jointly by the system's terminal driver and Bash.<br />
<br />
The shell associates a job with each pipeline. It keeps a table of currently executing jobs, which may be listed with the jobs command. When Bash starts a job asynchronously, it prints a line that looks like:<br />
<br />
[1] 25647<br />
<br />
indicating that this job is job number 1 and that the process ID of the last process in the pipeline associated with this job is <code>25647</code>. All of the processes in a single pipeline are members of the same job. Bash uses the job abstraction as the basis for job control.<br />
<br />
To facilitate the implementation of the user interface to job control, the operating system maintains the notion of a current terminal process group ID. Members of this process group (processes whose process group ID is equal to the current terminal process group ID) receive keyboard-generated signals such as <code>SIGINT</code>. These processes are said to be in the foreground. Background processes are those whose process group ID differs from the terminal's; such processes are immune to keyboard-generated signals. Only foreground processes are allowed to read from or write to the terminal. Background processes which attempt to read from (write to) the terminal are sent a <code>SIGTTIN</code> (<code>SIGTTOU</code>) signal by the terminal driver, which, unless caught, suspends the process.<br />
<br />
If the operating system on which Bash is running supports job control, Bash contains facilities to use it. Typing the suspend character (typically '<tt>^Z</tt>', Control-Z) while a process is running causes that process to be stopped and returns control to Bash. Typing the delayed suspend character (typically '<tt>^Y</tt>', Control-Y) causes the process to be stopped when it attempts to read input from the terminal, and control to be returned to Bash. The user then manipulates the state of this job, using the <tt>bg</tt> command to continue it in the background, the <tt>fg</tt> command to continue it in the foreground, or the kill command to kill it. A '<tt>^Z</tt>' takes effect immediately, and has the additional side effect of causing pending output and typeahead to be discarded.<br />
<br />
There are a number of ways to refer to a job in the shell. The character '<code>%</code>' introduces a job name.<br />
<br />
Job number n may be referred to as '<code>%n</code>'. The symbols '<code>%%</code>' and '<code>%+</code>' refer to the shell's notion of the current job, which is the last job stopped while it was in the foreground or started in the background. The previous job may be referenced using '<code>%-</code>'. In output pertaining to jobs (eg, the output of the jobs command), the current job is always flagged with a '<code>+</code>', and the previous job with a '<code>-</code>'.<br />
<br />
A job may also be referred to using a prefix of the name used to start it, or using a substring that appears in its command line. For example, '<code>%ce</code>' refers to a stopped ce job. Using '<code>%?ce</code>', on the other hand, refers to any job containing the string '<code>ce</code>' in its command line. If the prefix or substring matches more than one job, Bash reports an error.<br />
<br />
Simply naming a job can be used to bring it into the foreground: '<code>%1</code>' is a synonym for '<code>fg %1</code>', bringing job 1 from the background into the foreground. Similarly, '<code>%1 &</code>' resumes job 1 in the background, equivalent to '<code>bg %1</code>'.<br />
<br />
The shell learns immediately whenever a job changes state. Normally, Bash waits until it is about to print a prompt before reporting changes in a job's status so as to not interrupt any other output. If the '<code>-b</code>' option to the set builtin is enabled, Bash reports such changes immediately (see: "The Set Builtin"). Any trap on <code>SIGCHLD</code> is executed for each child process that exits.<br />
<br />
If an attempt is made to exit Bash while jobs are stopped, the shell prints a message warning that there are stopped jobs. The jobs command may then be used to inspect their status. If a second attempt to exit is made without an intervening command, Bash does not print another warning, and the stopped jobs are terminated. <br />
<br />
=== Job control builtins ===<br />
<br />
* '''bg'''<br />
:: <tt>bg [jobspec]</tt><br />
:: Resume the suspended job <code>jobspec</code> in the background, as if it had been started with '<code>&</code>'. If <code>jobspec</code> is not supplied, the current job is used. The return status is zero unless it is run when job control is not enabled, or, when run with job control enabled, if <code>jobspec</code> was not found or <code>jobspec</code> specifies a job that was started without job control.<br />
<br />
* '''fg'''<br />
:: <tt>fg [jobspec]</tt><br />
:: Resume the job <code>jobspec</code> in the foreground and make it the current job. If jobspec is not supplied, the current job is used. The return status is that of the command placed into the foreground, or non-zero if run when job control is disabled or, when run with job control enabled, <code>jobspec</code> does not specify a valid job or <code>jobspec</code> specifies a job that was started without job control.<br />
<br />
* '''jobs'''<br />
:: <tt>jobs [-lnprs] [jobspec]</tt><br />
:: <tt>jobs -x command [arguments]</tt><br />
:: The first form lists the active jobs. The options have the following meanings:<br />
::: <tt>-l</tt><br />
:::: List process IDs in addition to the normal information.<br />
::: <tt>-n</tt><br />
:::: Display information only about jobs that have changed status since the user was last notified of their status.<br />
::: <tt>-p</tt><br />
:::: List only the process ID of the job's process group leader.<br />
::: <tt>-r</tt><br />
:::: Restrict output to running jobs.<br />
::: <tt>-s</tt><br />
:::: Restrict output to stopped jobs. <br />
:: If <code>jobspec</code> is given, output is restricted to information about that job. If <code>jobspec</code> is not supplied, the status of all jobs is listed.<br />
:: If the '<code>-x</code>' option is supplied, jobs replaces any <code>jobspec</code> found in command or arguments with the corresponding process group ID, and executes command, passing it arguments, returning its exit status.<br />
<br />
* '''kill'''<br />
:: <tt>kill [-s sigspec] [-n signum] [-sigspec] jobspec or pid</tt><br />
:: <tt>kill -l [exit_status]</tt><br />
:: Send a signal specified by sigspec or signum to the process named by job specification <code>jobspec</code> or process ID <code>pid</code>. <code>sigspec</code> is either a signal name such as <code>SIGINT</code> (with or without the <code>SIG</code> prefix) or a signal number; <code>signum</code> is a signal number. If <code>sigspec</code> and <code>signum</code> are not present, <code>SIGTERM</code> is used. The '<code>-l</code>' option lists the signal names. If any arguments are supplied when '<code>-l</code>' is given, the names of the signals corresponding to the arguments are listed, and the return status is zero. <code>exit_status</code> is a number specifying a signal number or the exit status of a process terminated by a signal. The return status is zero if at least one signal was successfully sent, or non-zero if an error occurs or an invalid option is encountered.<br />
<br />
* '''wait'''<br />
:: <tt>wait [jobspec or pid]</tt><br />
:: Wait until the child process specified by process ID <code>pid</code> or job specification <code>jobspec</code> exits and return the exit status of the last command waited for. If a <code>jobspec</code> is given, all processes in the job are waited for. If no arguments are given, all currently active child processes are waited for, and the return status is zero. If neither <code>jobspec</code> nor <code>pid</code> specifies an active child process of the shell, the return status is <code>127</code>.<br />
<br />
* '''disown'''<br />
:: <tt>disown [-ar] [-h] [jobspec ...]</tt><br />
:: Without options, each jobspec is removed from the table of active jobs. If the '<code>-h</code>' option is given, the job is not removed from the table, but is marked so that <code>SIGHUP</code> is not sent to the job if the shell receives a <code>SIGHUP</code>. If <code>jobspec</code> is not present, and neither the '<code>-a</code>' nor '<code>-r</code>' option is supplied, the current job is used. If no <code>jobspec</code> is supplied, the '<code>-a</code>' option means to remove or mark all jobs; the '<code>-r</code>' option without a <code>jobspec</code> argument restricts operation to running jobs.<br />
<br />
* '''suspend'''<br />
:: <tt>suspend [-f]</tt><br />
:: Suspend the execution of this shell until it receives a <code>SIGCONT</code> signal. The '<code>-f</code>' option means to suspend even if the shell is a login shell.<br />
<br />
* '''set'''<br />
:: (see: [http://cnswww.cns.cwru.edu/~chet/bash/bashref.html#SEC59 Bash Reference Manual])<br />
<br />
When job control is not active, the <tt>kill</tt> and <tt>wait</tt> builtins do not accept <code>jobspec</code> arguments. They must be supplied process IDs.<br />
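<br />
A common scripting pattern is to launch several background jobs and then <tt>wait</tt> for all of them to finish (a minimal sketch; the host names are hypothetical):<br />
<br />
 #!/bin/bash<br />
 for host in alpha beta gamma; do # hypothetical host names<br />
     ping -c1 "$host" > "/tmp/$host.log" 2>&1 &<br />
 done<br />
 wait # block until every background child has exited<br />
 echo "all checks finished"<br />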
<br />
=== Job control variables ===<br />
<br />
; <tt>auto_resume</tt> : This variable controls how the shell interacts with the user and job control. If this variable exists then single word simple commands without redirections are treated as candidates for resumption of an existing job. There is no ambiguity allowed; if there is more than one job beginning with the string typed, then the most recently accessed job will be selected. The name of a stopped job, in this context, is the command line used to start it. If this variable is set to the value '<code>exact</code>', the string supplied must match the name of a stopped job exactly; if set to '<code>substring</code>', the string supplied needs to match a substring of the name of a stopped job. The '<code>substring</code>' value provides functionality analogous to the '<code>%?</code>' job ID (see [[Job control#Job Control Basics|Job Control Basics]]). If set to any other value, the supplied string must be a prefix of a stopped job's name; this provides functionality analogous to the '<code>%</code>' job ID.<br />
<br />
==Process state codes==<br />
''Note: See <code>man ps</code> for complete list; under "PROCESS STATE CODES" section.''<br />
;R : runnable<br />
;S : sleeping (interruptible sleep; usually waiting for an event such as input to complete, and not using CPU time)<br />
;D : uninterruptible sleep (usually waiting on disk or network I/O; cannot be killed, and often requires a reboot to clear)<br />
;T : process is stopped (send a SIGCONT to start the process again)<br />
;Z : process is defunct (a "zombie"; either kill the parent or reboot)<br />
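<br />
The state code appears in the STAT column of <code>ps</code>; for example (output will vary by system):<br />
<br />
 $ ps -eo pid,stat,comm | head -4<br />
   PID STAT COMMAND<br />
     1 Ss   systemd<br />
     2 S    kthreadd<br />
     3 S    ksoftirqd/0<br />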
<br />
==Signals==<br />
''Note: Look in <code>/usr/include/asm/signal.h</code> for a complete list of signals, or execute <code>kill -l</code>.''<br />
*<code> 2 SIGINT</code> &mdash; Interrupt from the keyboard (Ctrl+C)<br />
*<code> 7 SIGBUS</code> &mdash; Bus error<br />
*<code> 9 SIGKILL</code> &mdash; Terminate a process. Cannot be caught, blocked, or ignored.<br />
*<code>11 SIGSEGV</code> &mdash; Segmentation Fault.<br />
*<code>15 SIGTERM</code> &mdash; Terminate a process in an orderly fashion.<br />
*<code>17 SIGCHLD</code> &mdash; Child process terminated<br />
*<code>18 SIGCONT</code> &mdash; Continue executing after a stop<br />
*<code>19 SIGSTOP</code> &mdash; Stop executing<br />
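<br />
Scripts can react to catchable signals with the <code>trap</code> builtin (a minimal sketch; note that SIGKILL and SIGSTOP cannot be trapped):<br />
<br />
 #!/bin/bash<br />
 cleanup() { echo "caught SIGINT/SIGTERM; cleaning up"; exit 1; }<br />
 trap cleanup SIGINT SIGTERM<br />
 while true; do sleep 1; done<br />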
<br />
== See also ==<br />
*[[nohup]]<br />
*[[ps]]<br />
<br />
== References and notes ==<br />
<references/><br />
<br />
== External links ==<br />
* [http://cnswww.cns.cwru.edu/~chet/bash/bashref.html#SEC87 Job control in Bash]<br />
<br />
[[Category:Linux Command Line Tools]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Curl&diff=8272Curl2023-04-21T04:37:00Z<p>Christoph: /* Miscellaneous examples */</p>
<hr />
<div>'''cURL''' is a [[:Category:Linux Command Line Tools|command line tool]] for transferring files with URL syntax, supporting FTP, FTPS, HTTP, HTTPS, TFTP, Telnet, DICT, FILE and LDAP. cURL supports HTTPS certificates, HTTP POST, HTTP PUT, FTP uploading, Kerberos, HTTP form-based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM and Negotiate for HTTP and kerberos4 for FTP), file transfer resume, HTTP proxy tunnelling, and many other features. cURL is open source/free software distributed under MIT License.<br />
<br />
The main purpose and use for cURL is to automate unattended file transfers or sequences of operations. It is for example a good tool for simulating a user's actions in a web browser.<br />
<br />
Libcurl is the corresponding library/API that users may incorporate into their programs; cURL acts as a stand-alone wrapper to the libcurl library. libcurl is being used to provide URL transfer capabilities to numerous applications, Open Source as well as many commercial ones.<br />
<br />
==Common options==<br />
;<code>-o file</code>: save the output in the specified file<br />
;<code>-O</code>: save the output under the remote file's name (repeat per URL to download multiple files)<br />
;<code>-i</code>: include the response headers in the output<br />
;<code>-I</code>: fetch only the header information (HTTP HEAD)<br />
;<code>-v</code>: verbose output (shows the full exchange, including the TLS handshake)<br />
;<code>-k</code>: ignore invalid or self-signed certificates<br />
;<code>-C offset</code>: resume the file transfer (use <code>-C -</code> to have curl work out the offset)<br />
;<code>-f</code>: fail silently on server errors<br />
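<br />
The options combine as expected; for example, a quiet, fail-fast download that follows redirects, and a headers-only request to a host with a bad certificate (the URLs are placeholders):<br />
<br />
 $ curl -fsSL -o output.html <nowiki>https://www.example.com/</nowiki><br />
 $ curl -k -I <nowiki>https://self-signed.example.com/</nowiki><br />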
<br />
==Simple usage==<br />
<br />
* Get the main page from firefox's web-server:<br />
<br />
$ curl <nowiki>http://www.firefox.com/</nowiki><br />
<br />
* Get the README file from the user's home directory at funet's ftp-server:<br />
<br />
$ curl <nowiki>ftp://ftp.funet.fi/README</nowiki><br />
<br />
* Get a web page from a server using port 8000:<br />
<br />
$ curl <nowiki>http://www.weirdserver.com:8000/</nowiki><br />
<br />
* Get a list of a directory of an FTP site:<br />
<br />
$ curl <nowiki>ftp://cool.haxx.se/</nowiki><br />
<br />
* Get a gopher document from a gopher server:<br />
<br />
$ curl <nowiki>gopher://gopher.funet.fi</nowiki><br />
#~OR~<br />
$ curl <nowiki>gopher://gopher.quux.org:70</nowiki><br />
<br />
* Get the definition of curl from a dictionary:<br />
<br />
$ curl <nowiki>dict://dict.org/m:curl</nowiki><br />
<br />
* Fetch two documents at once:<br />
<br />
$ curl <nowiki>ftp://cool.haxx.se/ http://www.weirdserver.com:8000/</nowiki><br />
<br />
* Follow [[Rewrite engine|301 redirects]]:<br />
$ curl -I <nowiki>http://xtof.ch/skills</nowiki><br />
HTTP/1.1 301 Moved Permanently<br />
Date: Wed, 15 Apr 2015 21:24:28 GMT<br />
Server: Apache/2.2.15 (CentOS)<br />
Location: <nowiki>http://wiki.christophchamp.com/index.php/Technical_and_Specialized_Skills</nowiki><br />
Connection: close<br />
Content-Type: text/html; charset=iso-8859-1<br />
<br />
# xtof.ch/skills redirects to <nowiki>http://wiki.christophchamp.com/index.php/Technical_and_Specialized_Skills</nowiki><br />
# So, use "-L" to follow that redirect:<br />
$ curl -L <nowiki>http://xtof.ch/skills</nowiki><br />
<br />
==Miscellaneous examples==<br />
<br />
* Check on the amount of time it takes to load a website (lookup/connect/transfer times):<br />
$ for i in $(seq 1 3); do<br />
curl -so /dev/null www.example.com \<br />
-w "time_namelookup: %{time_namelookup}\<br />
\ttime_connect: %{time_connect}\<br />
\ttime_starttransfer: %{time_starttransfer}\<br />
\ttime_total: %{time_total}\n";<br />
done<br />
time_namelookup: 0.004 time_connect: 0.005 time_starttransfer: 0.854 time_total: 0.964<br />
time_namelookup: 0.004 time_connect: 0.005 time_starttransfer: 0.575 time_total: 0.617<br />
time_namelookup: 0.004 time_connect: 0.005 time_starttransfer: 0.550 time_total: 0.555<br />
<br />
* Retries:<br />
 $ curl -4 --retry 25 --retry-delay 20 --retry-connrefused <nowiki>https://www.example.com/</nowiki><br />
<br />
* Share files via cURL:<br />
<pre><br />
$ curl -F "file=@foo.jpg" 0x0.st<br />
https://0x0.st/ou6C.jpg<br />
</pre><br />
<br />
== Download to a file ==<br />
<br />
* Get a web page and store in a local file:<br />
<br />
$ curl -o thatpage.html <nowiki>http://www.example.com/</nowiki><br />
<br />
* Get a web page and store in a local file, make the local file get the name of the remote document (if no file name part is specified in the URL, this will fail):<br />
<br />
$ curl -O <nowiki>http://www.example.com/index.html</nowiki><br />
<br />
* Fetch two files and store them with their remote names:<br />
<br />
$ curl -O www.haxx.se/index.html -O curl.haxx.se/download.html<br />
<br />
==Using cURL for fast downloads==<br />
Suppose you want to download the Ubuntu 14.04.3 LTS (Trusty Tahr; 64-bit) ISO from the following three [https://launchpad.net/ubuntu/+cdmirrors mirrors]:<br />
<br />
 # report the ISO size in MiB from the first mirror:<br />
 $ curl -sI <nowiki>http://mirror.pnl.gov/releases/14.04/ubuntu-14.04.3-desktop-amd64.iso</nowiki> |\<br />
 awk '/^Content-Length/{iso_size=$2/1024^2; print iso_size}'<br />
<br />
$ url1=<nowiki>http://mirror.pnl.gov/releases/14.04/ubuntu-14.04.3-desktop-amd64.iso</nowiki><br />
$ url2=<nowiki>http://mirror.scalabledns.com/ubuntu-releases/14.04.3/ubuntu-14.04.3-desktop-amd64.iso</nowiki><br />
$ url3=<nowiki>http://mirrors.rit.edu/ubuntu-releases/14.04.3/ubuntu-14.04.3-desktop-amd64.iso</nowiki><br />
<br />
Get the total size (in bytes) of the ISO:<br />
$ ISOURL=<nowiki>http://mirror.pnl.gov/releases/14.04/ubuntu-14.04.3-desktop-amd64.iso</nowiki><br />
$ iso_size=$(curl -sI ${ISOURL} | awk '/^Content-Length/{print $2}')<br />
<br />
The total size of the ISO is 1054867456 bytes (~1.0GB). Using cURL's "<code>--range</code>" option, we can download that ISO in 3 parts from the above 3 different mirrors simultaneously with the following commands (do not forget the "<code>&</code>" at the end so each download is backgrounded):<br />
<br />
$ curl -r 0-499999999 -o ubuntu-14.04.3-desktop-amd64.iso.part1 $url1 & # 1st 500MB<br />
$ curl -r 500000000-999999999 -o ubuntu-14.04.3-desktop-amd64.iso.part2 $url2 & # 2nd 500MB<br />
$ curl -r 1000000000- -o ubuntu-14.04.3-desktop-amd64.iso.part3 $url3 & # remaining bytes<br />
<br />
After all three parts have downloaded, <code>`cat`</code> them all together into a single ISO:<br />
$ cat ubuntu-14.04.3-desktop-amd64.iso.part? > ubuntu-14.04.3-desktop-amd64.iso<br />
<br />
Finally, check the integrity of the ISO using the [http://mirror.pnl.gov/releases/14.04/MD5SUMS MD5SUM] for the original ISO:<br />
$ wget -c <nowiki>http://mirror.pnl.gov/releases/14.04/MD5SUMS</nowiki><br />
$ grep ubuntu-14.04.3-desktop-amd64.iso MD5SUMS<br />
$ md5sum ubuntu-14.04.3-desktop-amd64.iso<br />
<br />
The two values should be ''identical''. Et voilà! You have downloaded that ISO (potentially) much faster than downloading it as one single ISO.<br />
<br />
Note: You could automate the process in a [http://pastie.org/284370#1 script]. You would use the <code>${iso_size}</code> from above together with the following lines:<br />
$ blocksize=$(expr 1024 \* 512)<br />
 $ curl -\# -r $sum-$(($sum+$blocksize-1)) -o ubuntu-14.04.3-desktop-amd64.iso.part${num} $url1 &  # byte ranges are inclusive, hence the "-1"<br />
<br />
The "<code>-\#</code>" is to switch from the regular meter to a progress "bar".<br />
<br />
==Write out variables==<br />
<br />
With curl "write-out" variables, one can make curl display information on STDOUT after a completed transfer. The format is a string that may contain plain text mixed with any number of variables. The format can be specified as a literal "string", or you can have curl read the format from a file with "<code>@filename</code>" and to tell curl to read the format from STDIN you write "<code>@-</code>".<br />
<br />
The variables present in the output format will be substituted by the value or text that curl thinks fit, as described below. All variables are specified as <code>%{variable_name}</code>; to output a literal "<code>%</code>", write it as "<code>%%</code>". You can output a newline with "<code>\n</code>", a carriage return with "<code>\r</code>", or a tab with "<code>\t</code>".<br />
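<br />
* For example (a minimal sketch), the same format string can be given inline, read from a file, or read from STDIN:<br />
<br />
 $ echo 'http_code: %{http_code}\n' > fmt.txt<br />
 $ curl -sw @fmt.txt -o /dev/null <nowiki>http://www.example.com/</nowiki><br />
 #~OR~ read the format from STDIN:<br />
 $ echo 'http_code: %{http_code}\n' | curl -sw @- -o /dev/null <nowiki>http://www.example.com/</nowiki><br />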
<br />
As an example of how to use write-out variables, consider my personal URL shortener (or TinyURL) website:<br />
$ curl -I <nowiki>http://www.xtof.ch</nowiki><br />
<pre><br />
HTTP/1.1 200 OK<br />
Date: Wed, 17 Feb 2016 01:39:21 GMT<br />
Server: Apache/2.2.15 (CentOS)<br />
Last-Modified: Sun, 17 May 2015 21:22:51 GMT<br />
ETag: "2a073-d2-5164dadfec0c0"<br />
Accept-Ranges: bytes<br />
Content-Length: 210<br />
X-CLI: Website by Christoph Champ<br />
X-Owner-URL: www.christophchamp.com<br />
X-Wiki-URL: http://xtof.ch/wiki<br />
Connection: close<br />
Content-Type: text/html; charset=UTF-8<br />
</pre><br />
<br />
<pre><br />
$ read -r -d '' WRITE_OUT_VARS <<'EOF'<br />
content_type: %{content_type}<br />
http_code: %{http_code}<br />
http_connect: %{http_connect}<br />
local_ip: %{local_ip}<br />
local_port: %{local_port}<br />
num_connects: %{num_connects}<br />
num_redirects: %{num_redirects}<br />
redirect_url: %{redirect_url}<br />
remote_ip: %{remote_ip}<br />
remote_port: %{remote_port}<br />
size_download: %{size_download}<br />
size_header: %{size_header}<br />
size_upload: %{size_upload}<br />
speed_download: %{speed_download}<br />
speed_upload: %{speed_upload}<br />
ssl_verify_result: %{ssl_verify_result}<br />
time_connect: %{time_connect}<br />
time_namelookup: %{time_namelookup}<br />
time_redirect: %{time_redirect}<br />
time_starttransfer: %{time_starttransfer}<br />
time_total: %{time_total}<br />
url_effective: %{url_effective}<br />
EOF<br />
</pre><br />
<br />
If I pass a TinyURL to my website, I get the following:<br />
<br />
$ curl -sw "${WRITE_OUT_VARS}\n" xtof.ch/cv -o /dev/null<br />
<pre><br />
content_type: text/html; charset=iso-8859-1<br />
http_code: 301<br />
http_connect: 000<br />
local_ip: 10.x.x.x<br />
local_port: 56646<br />
num_connects: 1<br />
num_redirects: 0<br />
redirect_url: http://wiki.christophchamp.com/index.php/Curriculum_Vitae<br />
remote_ip: 67.207.152.20<br />
remote_port: 80<br />
size_download: 338<br />
size_header: 257<br />
size_upload: 0<br />
speed_download: 3238.000<br />
speed_upload: 0.000<br />
ssl_verify_result: 0<br />
time_connect: 0.055<br />
time_namelookup: 0.004<br />
time_redirect: 0.000<br />
time_starttransfer: 0.104<br />
time_total: 0.104<br />
url_effective: HTTP://xtof.ch/cv<br />
</pre><br />
<br />
If I tell curl to follow the redirect URL (i.e., with "<code>-L</code>"), I get the following:<br />
<br />
$ curl -sLw "${WRITE_OUT_VARS}\n" xtof.ch/cv -o /dev/null<br />
<pre><br />
content_type: text/html; charset=UTF-8<br />
http_code: 200<br />
http_connect: 000<br />
local_ip: 10.x.x.x<br />
local_port: 54964<br />
num_connects: 2<br />
num_redirects: 2<br />
redirect_url: <br />
remote_ip: 45.56.73.83<br />
remote_port: 80<br />
size_download: 66120<br />
size_header: 1132<br />
size_upload: 0<br />
speed_download: 91268.000<br />
speed_upload: 0.000<br />
ssl_verify_result: 0<br />
time_connect: 0.000<br />
time_namelookup: 0.000<br />
time_redirect: 0.466<br />
time_starttransfer: 0.167<br />
time_total: 0.724<br />
url_effective: http://wiki.christophchamp.com/index.php/Curriculum_Vitae<br />
</pre><br />
<br />
Note how the "<code>redirect_url</code>", "<code>url_effective</code>", "<code>remote_ip</code>", "<code>num_redirects</code>", etc. have changed.<br />
<br />
==See also==<br />
*[[Curl/manual|cURL manual]] &mdash; by the Haxx Team<br />
*[[Rackspace API]] &mdash; contains lots of examples of how to use cURL with a RESTful API<br />
*[[wget]]<br />
*[[wput]]<br />
*[[rsync]]<br />
*[[axel]]<br />
*[http://prozilla.genesys.ro/ prozilla]<br />
<br />
==External links==<br />
*[http://curl.haxx.se/ cURL website]<br />
*[http://curl.haxx.se/docs/manpage.html cURL manpage]<br />
*[http://us3.php.net/curl PHP using cURL method]<br />
<br />
[[Category:Linux Command Line Tools]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Bash&diff=8271Bash2023-04-19T23:01:17Z<p>Christoph: /* shopt */</p>
<hr />
<div>{{lowercase|title=bash}}<br />
<br />
'''Bash''' is a [[Linux]] command shell written for the GNU project. Its name is an acronym for '''''B'''ourne-'''a'''gain '''sh'''ell''&mdash;a pun on the Bourne shell (sh), which was an early, important Unix shell.<br />
<br />
Bash is the default shell on most Linux systems and was previously my favourite shell. After nearly 14 years of using the bash shell as my primary shell, I switched to [[zsh|Z shell]] (zsh). It is awesome and extremely powerful!<br />
<br />
See [[Bash/scripts]] for examples.<br />
<br />
== Bash builtins ==<br />
A shell builtin is a command or a function, called from a shell, that is executed directly in the shell itself, instead of an external executable program which the shell would load and execute.<br />
<br />
Shell builtins work significantly faster than external programs, because there is no program-loading overhead. However, their code is inherently present in the shell, so modifying or updating them requires modifying the shell itself. Therefore, shell builtins are usually used for simple, almost trivial, functions, such as text output. Because of the nature of Linux, some functions of the operating system have to be implemented as shell builtins. The most notable example is <code>cd</code>, which changes the working directory of the shell. Because each executable program runs in a separate process, and working directories are specific to each process, loading <code>cd</code> as an external program would not affect the working directory of the shell that loaded it.<br />
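<br />
A quick way to check whether a given command is a builtin or an external program is the <code>type</code> builtin (a sketch; the exact output and paths vary by system):<br />
<br />
 $ type cd<br />
 cd is a shell builtin<br />
 $ type -a echo<br />
 echo is a shell builtin<br />
 echo is /bin/echo<br />
<br />
Bash's builtins include:<br />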
<br />
bash, :, ., [, alias, bg, bind, break, builtin, cd, command, compgen,<br />
complete, continue, declare, dirs, disown, echo, enable, eval, exec,<br />
exit, export, fc, fg, getopts, hash, help, history, jobs, kill, let,<br />
local, logout, popd, printf, pushd, pwd, read, readonly, return, set,<br />
shift, shopt, source, suspend, test, times, trap, type, typeset,<br />
ulimit, umask, unalias, unset, wait<br />
<br />
== Bash shell shortcuts ==<br />
<br />
=== CTRL Key Bound ===<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="4" bgcolor="#EFEFEF" | '''Basic commands'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Command<br />
!Description<br />
|- align="left"<br />
|'''Ctrl + a''' || Jump to the start of the line<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + b''' || Move back a char<br />
|- align="left"<br />
|'''Ctrl + c''' || Terminate the command<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + d''' || Delete from under the cursor<br />
|- align="left"<br />
|'''Ctrl + e''' || Jump to the end of the line<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + f''' || Move forward a char<br />
|- align="left"<br />
|'''Ctrl + h''' || Backspace<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + k''' || Delete to EOL<br />
|- align="left"<br />
|'''Ctrl + l''' || Clear the screen<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + n''' || Next command line (useful for "scrolling" with Ctrl + p)<br />
|- align="left"<br />
|'''Ctrl + p''' || Previous command line<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + r''' || Search the history backwards<br />
|- align="left"<br />
|'''Ctrl + R''' || Search the history backwards (multiple occurrences)<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + u''' || Delete backward from cursor<br />
|- align="left"<br />
|'''Ctrl + xx''' || Move between EOL and current cursor position<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + x @''' || Show possible hostname completions<br />
|- align="left"<br />
|'''Ctrl + w''' || deletes the token left of the cursor<br />
|--bgcolor="#eeeeee"<br />
|'''Ctrl + z''' || Suspend / Stop the command<br />
|- align="left"<br />
|'''Ctrl + /''' || Undo last command-line edit<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
=== ALT Key Bound ===<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="4" bgcolor="#EFEFEF" | '''Basic commands'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Command<br />
!Description<br />
|- align="left"<br />
|'''Alt + <''' || Move to the first line in the history<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + >''' || Move to the last line in the history<br />
|- align="left"<br />
|'''Alt + ?''' || Show current completion list<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + *''' || Insert all possible completions<br />
|- align="left"<br />
|'''Alt + /''' || Attempt to complete filename<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + .''' || Yank last argument to previous command<br />
|- align="left"<br />
|'''Alt + b''' || Move backward<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + c''' || Capitalize the word<br />
|- align="left"<br />
|'''Alt + d''' || Delete word<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + f''' || Move forward<br />
|- align="left"<br />
|'''Alt + l''' || Make word lowercase<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + n''' || Search the history forwards non-incremental<br />
|- align="left"<br />
|'''Alt + p''' || Search the history backwards non-incremental<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + r''' || Recall command<br />
|- align="left"<br />
|'''Alt + t''' || Move words around<br />
|--bgcolor="#eeeeee"<br />
|'''Alt + u''' || Make word uppercase<br />
|- align="left"<br />
|'''Alt + back-space''' || Delete backward from cursor<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
=== Case Transformation ===<br />
<br />
; <code>Esc C</code> : Converts the character under the cursor to upper case.<br />
; <code>Esc U</code> : Converts the text from the cursor to the end of the word to uppercase.<br />
; <code>Esc L</code> : Converts the text from the cursor to the end of the word to lowercase.<br />
<br />
=== More Special Keybindings ===<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="4" bgcolor="#EFEFEF" | '''Basic commands'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Command<br />
!Description<br />
|- align="left"<br />
|'''$ 2T''' || All available commands (common)<br />
|--bgcolor="#eeeeee"<br />
|'''$ (string)2T''' || All available commands starting with (string)<br />
|- align="left"<br />
|'''$ /2T''' || Entire directory structure, including hidden ones<br />
|--bgcolor="#eeeeee"<br />
|'''$ ./2T''' || Only subdirectories of the current directory, including hidden ones<br />
|- align="left"<br />
|'''$ *2T''' || Only subdirectories of the current directory, without hidden ones<br />
|--bgcolor="#eeeeee"<br />
|'''$ ~2T''' || All present users on the system, from "<tt>/etc/passwd</tt>"<br />
|- align="left"<br />
|'''$ $2T''' || All system variables<br />
|--bgcolor="#eeeeee"<br />
|'''$ @2T''' || Entries from "<tt>/etc/hosts</tt>"<br />
|- align="left"<br />
|'''$ =2T''' || Output like <tt>ls</tt> or <tt>dir</tt><br />
|}<br />
<div style="float:center">''Note: Here "2T" means Press TAB twice''</div><br />
</div><br />
<br clear="all"/><br />
<br />
===Bash Bang (!) commands===<br />
<br />
Re-run all or part of a previous command.<br />
<br />
 !!         Run the last command again<br />
 !foo       Run the most recent command that starts with 'foo' (e.g., !ls)<br />
 !foo:p     Print the command that !foo would run, and add it to the<br />
            command history<br />
 !$         The last word of the previous command (same as Alt + .)<br />
 !$:p       Print the word that !$ would substitute<br />
 !*         All words of the previous command except the first (i.e.,<br />
            its arguments)<br />
 !*:p       Print the words that !* would substitute<br />
 ^foo^bar   Run the previous command, replacing foo with bar<br />
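<br />
A short example session (a sketch; the expansions are shown in the comments):<br />
<br />
 $ ls /tmp<br />
 $ !!          # runs "ls /tmp" again<br />
 $ echo !$     # expands to "echo /tmp"<br />
 $ ^tmp^var    # runs "echo /var"<br />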
<br />
== Bash syntax highlights ==<br />
Bash's command syntax is a superset of the Bourne shell's command syntax. The definitive specification of Bash's command syntax is the [http://www.gnu.org/software/bash/manual/bashref.html Bash Reference Manual] distributed by the GNU project. This section highlights some of Bash's unique syntax features.<br />
<br />
The vast majority of Bourne shell scripts can be executed without alteration by Bash, with the exception of those Bourne shell scripts that happen to reference a Bourne special variable or to use a Bourne builtin command. The Bash command syntax includes ideas drawn from the [[Korn shell]] (ksh) and the [[C shell]] (csh), such as command-line editing, command history, the directory stack, the <tt>$RANDOM</tt> and <tt>$PPID</tt> variables, and [[POSIX]] command substitution syntax: <tt>$(...)</tt>. When being used as an interactive command shell, Bash supports completion of partly typed-in program names, filenames, variable names, etc. when the user presses the TAB key.<br />
<br />
Bash syntax has many extensions that the Bourne shell lacks. Several of those extensions are enumerated here.<br />
<br />
===Integer mathematics===<br />
A major limitation of the Bourne shell is that it cannot perform integer calculations without spawning an external process. Bash can perform in-process integer calculations using the <tt>((...))</tt> command and the <tt>$[...]</tt> variable syntax, as follows:<br />
<br />
VAR=55 # Assign integer 55 to variable VAR.<br />
((VAR = VAR + 1)) # Add one to variable VAR. Note the absence of the '$' character.<br />
((++VAR)) # Another way to add one to VAR. Performs C-style pre-increment.<br />
((VAR++)) # Another way to add one to VAR. Performs C-style post-increment.<br />
echo $[VAR * 22] # Multiply VAR by 22 and substitute the result into the command.<br />
echo $((VAR * 22)) # Another way to do the above.<br />
<br />
The <tt>((...))</tt> command can also be used in conditional statements, because its [[exit status]] is 0 or 1 depending on whether the condition is true or false:<br />
<br />
if ((VAR == Y * 3 + X * 2)); then<br />
echo Yes<br />
fi<br />
<br />
((Z > 23)) && echo Yes<br />
<br />
The <tt>((...))</tt> command supports the following [[relational operator]]s: '<tt>==</tt>', '<tt>!=</tt>', '<tt>&gt;</tt>', '<tt>&lt;</tt>', '<tt>&gt;=</tt>', and '<tt>&lt;=</tt>'.<br />
<br />
Bash cannot perform in-process [[floating point]] calculations. The only Unix command shells capable of this are [[Korn Shell]] (1993 version) and [[zsh]] (starting at version 4.0).<br />
<br />
===I/O redirection===<br />
Bash has several I/O [[Redirection (Unix)|redirection]] syntaxes that the traditional Bourne shell lacks. Bash can redirect [[standard output]] and [[Standard streams|standard error]] at the same time using this syntax:<br />
<br />
command &> file<br />
<br />
which is simpler to type than the equivalent Bourne shell syntax, "<tt>command > file 2>&1</tt>". Bash, since version 2.05b, can redirect standard input from a string using the following syntax (sometimes called "here strings"):<br />
<br />
command <<< "string to be read as standard input"<br />
<br />
If the string contains [[whitespace]], it must be quoted. <br />
<br />
'''Example''':<br />
Redirect standard output to a file, write data, close file, reset stdout<br />
<br />
 # make file descriptor (FD) 6 a copy of stdout (FD 1)<br />
exec 6>&1<br />
# open file "test.data" for writing<br />
exec 1>test.data<br />
# produce some content<br />
echo "data:data:data"<br />
# close file "test.data"<br />
exec 1>&-<br />
# make stdout a copy of FD 6 (reset stdout)<br />
exec 1>&6<br />
# close FD6<br />
exec 6>&-<br />
<br />
Open and close files<br />
<br />
# open file test.data for reading<br />
exec 6<test.data<br />
# read until end of file<br />
while read -u 6 dta<br />
do<br />
echo "$dta" <br />
done<br />
# close file test.data<br />
exec 6<&-<br />
<br />
Catch output of external commands<br />
<br />
# execute 'find' and store results in VAR<br />
# search for filenames which end with the letter "h"<br />
VAR=$(find . -name "*h")<br />
<br />
====EOF====<br />
The <code>`cat <<EOF`</code> Bash syntax is very useful when one needs to work with multi-line strings in Bash (e.g., when passing a multi-line string to a variable, file, or a piped command).<br />
<br />
* Pass a multiline string to a variable:<br />
<br />
$ sql=$(cat <<EOF<br />
SELECT foo, bar FROM db<br />
WHERE foo='baz'<br />
EOF<br />
)<br />
<br />
The <code>$sql</code> variable now holds the newlines as well; you can check this with <code>`echo -e "$sql"`</code>:<br />
 SELECT foo, bar FROM db<br />
 WHERE foo='baz'<br />
<br />
* Pass a multiline string to a file:<br />
<br />
$ cat <<EOF > print.sh<br />
#!/bin/bash<br />
echo \$PWD<br />
echo $PWD<br />
EOF<br />
<br />
The print.sh file now contains:<br />
<br />
#!/bin/bash<br />
echo $PWD<br />
echo /home/user<br />
<br />
* Pass a multiline string to a command/pipe:<br />
<br />
$ cat <<EOF | grep 'b' | tee b.txt | grep 'r'<br />
foo<br />
bar<br />
baz<br />
EOF<br />
<br />
This creates the <code>b.txt</code> file with both the <code>bar</code> and <code>baz</code> lines, but prints only <code>bar</code>.<br />
<br />
===In-process regular expressions===<br />
Bash 3.0 supports in-process [[regular expression]] matching using the following syntax, reminiscent of [[Perl]]:<br />
<br />
<nowiki>[[ string =~ regex ]]</nowiki><br />
<br />
The regular expression syntax is the same as that documented by the regex(3) [[man page]]. The exit status of the above command is 0 if the regex matches the string, 1 if it does not match. Parenthesized subexpressions in the regular expression can be accessed using the shell variable <tt>BASH_REMATCH</tt>, as follows:<br />
<br />
if <nowiki>[[ abcfoobarbletch =~ 'foo(bar)bl(.*)' ]]</nowiki>; then<br />
echo The regex matches!<br />
echo $BASH_REMATCH -- outputs: foobarbletch<br />
echo ${BASH_REMATCH[1]} -- outputs: bar<br />
echo ${BASH_REMATCH[2]} -- outputs: etch<br />
fi<br />
<br />
This syntax gives performance superior to spawning a separate process to run a <tt>[[grep]]</tt> command, because the regular expression matching takes place within the Bash process. If the regular expression or the string contain whitespace or shell [[metacharacter]]s (such as '<tt>*</tt>' or '<tt>?</tt>'), they should be quoted. (Note: since Bash 3.2, quoting the ''regex'' makes it match as a literal string; put the pattern in a variable, or leave it unquoted, to keep it a regular expression.)<br />
<br />
===Backslash escapes===<br />
Words of the form <tt>$'string'</tt> are treated specially. The word expands to <tt>string</tt>, with backslash-escaped characters replaced as specified by the [[C programming language]]. Backslash escape sequences, if present, are decoded as follows:<br />
<br />
{| class="wikitable"<br />
|+ <big>Backslash Escapes</big><br />
|- <br />
! Backslash<br>Escape !! Expands To ...<br />
|-<br />
| align="center" | <tt>\a</tt> || An alert (bell) character<br />
|-<br />
| align="center" | <tt>\b</tt> || A backspace character<br />
|-<br />
| align="center" | <tt>\e</tt> || An escape character<br />
|-<br />
| align="center" | <tt>\f</tt> || A form feed character<br />
|-<br />
| align="center" | <tt>\n</tt> || A new line character<br />
|-<br />
| align="center" | <tt>\r</tt> || A carriage return character<br />
|-<br />
| align="center" | <tt>\t</tt> || A horizontal tab character<br />
|-<br />
| align="center" | <tt>\v</tt> || A vertical tab character<br />
|-<br />
| align="center" | <tt>\\</tt> || A backslash character<br />
|-<br />
| align="center" | <tt>\'</tt> || A single quote character<br />
|-<br />
| align="center" | <tt>\nnn</tt> || The eight-bit character whose value is the octal value nnn (one to three digits)<br />
|-<br />
| align="center" | <tt>\xHH</tt> || The eight-bit character whose value is the hexadecimal value HH (one or two hex digits)<br />
|-<br />
| align="center" | <tt>\cx</tt> || A control-X character<br />
|}<br />
<br />
The expanded result is single-quoted, as if the dollar sign had not been present.<br />
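<br />
For example (a minimal sketch):<br />
<br />
 $ echo $'Tab:\there\nNewline here'<br />
 Tab:    here<br />
 Newline here<br />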
<br />
A double-quoted string preceded by a dollar sign (<tt>$"..."</tt>) will cause the string to be translated according to the current locale. If the current locale is C or POSIX, the dollar sign is ignored. If the string is translated and replaced, the replacement is double-quoted.<br />
<br />
For reference, the full list of prompt (<tt>PS1</tt>) escape sequences, as used in the prompt examples below, is as follows:<br />
<pre><br />
# \a an ASCII bell character (07)<br />
# \d the date in "Weekday Month Date" format (e.g., "Tue May 26")<br />
# \D{format} the format is passed to strftime(3) and the result is inserted into the prompt string;<br />
# \e an ASCII escape character (033)<br />
# \h the hostname up to the first `.'<br />
# \H the hostname<br />
# \j the number of jobs currently managed by the shell<br />
# \l the basename of the shell's terminal device name<br />
# \n newline<br />
# \r carriage return<br />
# \s the name of the shell, the basename of $0 (the portion following the final slash)<br />
# \t the current time in 24-hour HH:MM:SS format<br />
# \T the current time in 12-hour HH:MM:SS format<br />
# \@ the current time in 12-hour am/pm format<br />
# \A the current time in 24-hour HH:MM format<br />
# \u the username of the current user<br />
# \v the version of bash (e.g., 2.00)<br />
# \V the release of bash, version + patchlevel (e.g., 2.00.0)<br />
# \w the current working directory<br />
# \W the basename of the current working directory<br />
# \! the history number of this command<br />
# \# the command number of this command<br />
# \$ if the effective UID is 0, a #, otherwise a $<br />
# \nnn the character corresponding to the octal number nnn<br />
# \\ a backslash<br />
# \[ begin a sequence of non-printing characters, which could be used to embed a terminal control sequence into the prompt<br />
# \] end a sequence of non-printing characters<br />
</pre><br />
<br />
===Variables===<br />
When variables are used they are referred to with the <code>$</code> symbol in front of them. There are several useful variables available in the shell program. Here are a few:<br />
<br />
*<code>$$</code> = The PID number of the process executing the shell.<br />
*<code>$?</code> = Exit status variable.<br />
*<code>$0</code> = The name of the command you used to call a program.<br />
*<code>$1</code> = The first argument on the command line.<br />
*<code>$2</code> = The second argument on the command line.<br />
*<code>$n</code> = The nth argument on the command line.<br />
*<code>$*</code> = All the arguments on the command line.<br />
*<code>$#</code> = The number of command line arguments. <br />
<br />
The "shift" command can be used to shift command line arguments to the left, i.e., <code>$1</code> becomes the value of <code>$2</code>, <code>$3 </code>shifts into <code>$2</code>, etc. The command, "shift 2" will shift 2 places meaning the new value of <code>$1</code> will be the old value of <code>$3</code> and so forth.<br />
<br />
* Print specific characters stored in a variable:<br />
# The syntax is ${variable:start:length}. Omitting "length" value gives rest of string.<br />
$ val="hello"<br />
$ echo ${val:0:1} # Print out the first character of $val ("h" in this example)<br />
<br />
===Tests===<br />
There is a builtin provided by bash called test, which returns a true or false value depending on the result of the tested expression (see: [[wikipedia:test (Unix)]] for more details). Its syntax is:<br />
 test expression<br />
It can also be written as follows:<br />
 [ expression ]<br />
<br />
The tests below are test conditions provided by the shell:<br />
*<code>-b file</code> = True if the file exists and is block special file.<br />
*<code>-c file</code> = True if the file exists and is character special file.<br />
*<code>-d file</code> = True if the file exists and is a directory.<br />
*<code>-e file</code> = True if the file exists.<br />
*<code>-f file</code> = True if the file exists and is a regular file.<br />
*<code>-g file</code> = True if the file exists and the set-group-id bit is set.<br />
*<code>-k file</code> = True if the file's "sticky" bit is set.<br />
*<code>-L file</code> = True if the file exists and is a symbolic link.<br />
*<code>-p file</code> = True if the file exists and is a named pipe.<br />
*<code>-r file</code> = True if the file exists and is readable.<br />
*<code>-s file</code> = True if the file exists and its size is greater than zero.<br />
*<code>-S file</code> = True if the file exists and is a socket.<br />
*<code>-t fd</code> = True if the file descriptor is opened on a terminal.<br />
*<code>-u file</code> = True if the file exists and its set-user-id bit is set.<br />
*<code>-w file</code> = True if the file exists and is writable.<br />
*<code>-x file</code> = True if the file exists and is executable.<br />
*<code>-O file</code> = True if the file exists and is owned by the effective user id.<br />
*<code>-G file</code> = True if the file exists and is owned by the effective group id.<br />
*<code>file1 -nt file2</code> = True if file1 is newer, by modification date, than file2.<br />
*<code>file1 -ot file2</code> = True if file1 is older than file2.<br />
*<code>file1 -ef file2</code> = True if file1 and file2 have the same device and inode numbers.<br />
*<code>-z string</code> = True if the length of the string is 0.<br />
*<code>-n string</code> = True if the length of the string is non-zero.<br />
*<code>string1 = string2</code> = True if the strings are equal.<br />
*<code>string1 != string2</code> = True if the strings are not equal.<br />
*<code>!expr</code> = True if the expr evaluates to false.<br />
*<code>expr1 -a expr2</code> = True if both expr1 and expr2 are true.<br />
*<code>expr1 -o expr2</code> = True if either expr1 or expr2 is true. <br />
<br />
For integer comparisons, the syntax is:<br />
arg1 OP arg2<br />
<br />
where OP is one of <code>-eq, -ne, -lt, -le, -gt, or -ge</code>. Arg1 and arg2 may be positive or negative integers or the special expression "<code>-l string</code>" which evaluates to the length of string.<br />
<br />
*Examples:<br />
if [ ! -e foo ]; then echo "NO FILE"; else cat foo; fi<br />
if [ -d "/home/bob" -a ! -d "/home/alice" ]; then echo "Bob exists, but not Alice"; fi<br />
<br />
===Colours in bash===<br />
Black 0;30 Dark Gray 1;30<br />
Blue 0;34 Light Blue 1;34<br />
Green 0;32 Light Green 1;32<br />
Cyan 0;36 Light Cyan 1;36<br />
Red 0;31 Light Red 1;31<br />
Purple 0;35 Light Purple 1;35<br />
Brown 0;33 Yellow 1;33<br />
Light Gray 0;37 White 1;37<br />
<br />
Here is an example borrowed from the Bash-Prompt-HOWTO:<br />
<br />
PS1="\[\033[1;34m\][\$(date +%H%M)][\u@\h:\w]$\[\033[0m\] " <br />
<br />
This turns the text blue, displays the time in brackets (very useful for not losing track of time while working), and displays the user name, host, and current directory enclosed in brackets. The "<code>\[\033[0m\]</code>" following the $ returns the colour to the previous foreground colour.<br />
<br />
*Another example:<br />
PS1="\[\033[1;30m\][\[\033[1;34m\]\u\[\033[1;30m\]@\[\033[0;35m\]\h\[\033[1;30m\]] \[\033[0;37m\]\W \[\033[1;30m\]\$\[\033[0m\] " <br />
yields:<br />
[user@host] directory $<br />
<br />
Break down:<br />
<br />
\[\033[1;30m\] - Sets the color for the characters that follow it.<br />
Here 1;30 will set them to Dark Gray.<br />
\u \h \W \$ - From the table above<br />
\[\033[0m\] - Sets the colours back to how they were originally.<br />
<br />
==Bash startup scripts==<br />
When Bash starts, it executes the commands in a variety of different scripts.<br />
<br />
When Bash is invoked as an interactive login shell, or as a non-interactive shell with the <tt>--login</tt> option, it first reads and executes commands from the file <tt>/etc/profile</tt>, if that file exists. After reading that file, it looks for <tt>~/.bash_profile</tt>, <tt>~/.bash_login</tt>, and <tt>~/.profile</tt>, in that order, and reads and executes commands from the first one that exists and is readable. The <tt>--noprofile</tt> option may be used when the shell is started to inhibit this behavior.<br />
<br />
When a login shell exits, Bash reads and executes commands from the file <tt>~/.bash_logout</tt>, if it exists.<br />
<br />
When an interactive shell that is not a login shell is started, Bash reads and executes commands from <tt>~/.bashrc</tt>, if that file exists. This may be inhibited by using the <tt>--norc</tt> option. The <tt>--rcfile file</tt> option will force Bash to read and execute commands from <tt>file</tt> instead of <tt>~/.bashrc</tt>.<br />
<br />
When Bash is started non-interactively, to run a shell script, for example, it looks for the variable <tt>BASH_ENV</tt> in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:<br />
<br />
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi<br />
<br />
but the value of the <tt>PATH</tt> variable is not used to search for the file name.<br />
<br />
If Bash is invoked with the name <tt>sh</tt>, it tries to mimic the startup behavior of historical versions of <tt>sh</tt> as closely as possible, while conforming to the [[POSIX]] standard as well. When invoked as an interactive login shell, or a non-interactive shell with the <tt>--login</tt> option, it first attempts to read and execute commands from <tt>/etc/profile</tt> and <tt>~/.profile</tt>, in that order. The <tt>--noprofile</tt> option may be used to inhibit this behavior. When invoked as an interactive shell with the name <tt>sh</tt>, Bash looks for the variable <tt>ENV</tt>, expands its value if it is defined, and uses the expanded value as the name of a file to read and execute. Since a shell invoked as <tt>sh</tt> does not attempt to read and execute commands from any other startup files, the <tt>--rcfile</tt> option has no effect. A non-interactive shell invoked with the name <tt>sh</tt> does not attempt to read any other startup files. When invoked as <tt>sh</tt>, Bash enters ''posix'' mode after the startup files are read.<br />
<br />
When Bash is started in posix mode, as with the <tt>--posix</tt> command line option, it follows the POSIX standard for startup files. In this mode, interactive shells expand the <tt>ENV</tt> variable and commands are read and executed from the file whose name is the expanded value. No other startup files are read.<br />
<br />
Bash attempts to determine when it is being run by the remote shell daemon, usually <tt>rshd</tt>. If Bash determines it is being run by <tt>rshd</tt>, it reads and executes commands from <tt>~/.bashrc</tt>, if that file exists and is readable. It will not do this if invoked as <tt>sh</tt>. The <tt>--norc</tt> option may be used to inhibit this behavior, and the <tt>--rcfile</tt> option may be used to force another file to be read, but <tt>rshd</tt> does not generally invoke the shell with those options or allow them to be specified.<br />
<br />
===Environment variables===<br />
<br />
;<code>$CDPATH</code> : does for the cd built-in what PATH does for executables. By setting this wisely, you can cut down on the number of key-strokes you enter per day.<br />
::Example<br />
$ export CDPATH=.:~:~/docs:~/src:~/src/ops/docs:/mnt:/usr/src/redhat:/usr/src/redhat/RPMS:/usr/src:/usr/lib:/usr/local:/software:/software/redhat<br />
::Using this, cd i386 would likely take you to /usr/src/redhat/RPMS/i386 on a Red Hat Linux system. Make sure that you do include . in the list or you'll find that you can't change to directories relative to your current one without prefixing them with ./ <br />
;<code>$HISTIGNORE</code> : Set this to avoid having consecutive duplicate commands and other not-so-useful information appended to the history list. This will cut down on hitting the up arrow endlessly to get to the command before the one you just entered twenty times. It will also avoid filling a large percentage of your history list with useless commands.<br />
::Example<br />
$ export HISTIGNORE="&:ls:ls *:mutt:[bf]g:exit"<br />
::Using this, consecutive duplicate commands, invocations of ls, executions of the mutt mail client without any additional parameters, plus calls to the bg, fg and exit built-ins will not be appended to the history list. <br />
;<code>$MAILPATH</code> : bash will warn you of new mail in any folder appended to MAILPATH. This is very handy if you use a tool like procmail to presort your e-mail into folders.<br />
::Try adding the following to your ~/.bash_profile to be notified when any new mail is deposited in any mailbox under ~/Mail.<br />
MAILPATH=/var/spool/mail/$USER<br />
for i in ~/Mail/[^.]*<br />
do<br />
MAILPATH=$MAILPATH:$i'?You have new mail in your ${_##*/} folder'<br />
done<br />
export MAILPATH<br />
unset i<br />
<br />
;<code>$TMOUT</code> : If you set this to a value greater than zero, bash will terminate after this number of seconds have elapsed if no input arrives. <br />
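::Example (for instance, ten minutes)<br />
 $ export TMOUT=600<br />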
<br />
==shopt==<br />
Check your shell options by issuing the following command:<br />
shopt # see: help shopt<br />
A typical output should look something like the following:<br />
<pre><br />
cdable_vars off<br />
cdspell off<br />
checkhash off<br />
checkwinsize off<br />
cmdhist on<br />
dotglob off<br />
execfail off<br />
expand_aliases on<br />
extglob off<br />
histreedit off<br />
histappend off<br />
histverify off<br />
hostcomplete on<br />
huponexit off<br />
interactive_comments on<br />
lithist off<br />
login_shell on<br />
mailwarn off<br />
no_empty_cmd_completion off<br />
nocaseglob off<br />
nullglob off<br />
progcomp on<br />
promptvars on<br />
restricted_shell off<br />
shift_verbose off<br />
sourcepath on<br />
xpg_echo off<br />
</pre><br />
<br />
where each of the above mean (see <code>man bash</code> or <code>help set</code> for more info.):<br />
<br />
; cdable_vars : an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value is the directory to change to.<br />
; cdspell : minor errors in the spelling of a directory component in a cd command will be corrected. <br />
; checkhash : bash checks that a command found in the hash table exists before trying to execute it. If it no longer exists, a normal path search is performed.<br />
; checkwinsize : bash checks the window size after each command and, if necessary, updates the values of LINES and COLUMNS.<br />
; cmdhist : bash attempts to save all lines of a multiple-line command in the same history entry. Allows re-editing of multi-line commands.<br />
; dotglob : bash includes filenames beginning with a `.' in the results of pathname expansion.<br />
; execfail : a non-interactive shell will not exit if it cannot execute the file specified as an argument to the exec builtin command (an interactive shell does not exit in that case regardless).<br />
; expand_aliases : aliases are expanded as described above under ALIASES. This option is enabled by default for interactive shells.<br />
; extglob : the extended pattern matching features described above under Pathname Expansion are enabled.<br />
; histappend : the history list is appended to the file named by the value of the HISTFILE variable when the shell exits, rather than overwriting the file.<br />
; hostcomplete : if readline is being used, bash will attempt to perform hostname completion when a word containing a @ is being completed.<br />
; huponexit : bash will send SIGHUP to all jobs when an interactive login shell exits.<br />
; interactive_comments : allow a word beginning with # to cause that word and all remaining characters on that line to be ignored in an interactive shell<br />
; lithist : if the cmdhist option is enabled, multi-line commands are saved to the history with embedded newlines rather than semicolon separators.<br />
; login_shell : shell sets this option if it is started as a login shell (see INVOCATION above). The value may not be changed.<br />
; mailwarn : if a file that bash is checking for mail has been accessed since the last time it was checked, the message ``The mail in mailfile has been read'' is displayed.<br />
; no_empty_cmd_completion : bash will not attempt to search the PATH for possible completions when completion is attempted on an empty line.<br />
; nocaseglob : bash matches filenames in a case-insensitive fashion when performing pathname expansion (see Pathname Expansion above).<br />
; nullglob : bash allows patterns which match no files (see Pathname Expansion above) to expand to a null string, rather than themselves.<br />
; progcomp : the programmable completion facilities (see Programmable Completion above) are enabled. This option is enabled by default.<br />
; promptvars : prompt strings undergo variable and parameter expansion after being expanded as described in PROMPTING above. <br />
; shift_verbose : the shift builtin prints an error message when the shift count exceeds the number of positional parameters.<br />
; sourcepath : the source (.) builtin uses the value of PATH to find the directory containing the file supplied as an argument.<br />
; xpg_echo : the echo builtin expands backslash-escape sequences by default.<br />
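<br />
Options are enabled with <code>shopt -s</code> and disabled with <code>shopt -u</code>; for example:<br />
 shopt -s histappend cdspell  # turn options on<br />
 shopt -u cdspell             # turn an option off<br />
 shopt histappend             # query a single option's state<br />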
<br />
==Manual pages==<br />
<br />
A '''man page''' (short for '''manual page''') is a form of software documentation usually found on a Unix or Unix-like operating system. Topics covered include computer programs (including library and system calls), formal standards and conventions, and even abstract concepts. A user may invoke a man page by issuing the man command.<br />
<br />
The manual is generally split into eight numbered sections, organized as follows:<br />
{| class="wikitable"<br />
! Section<br />
! Description<br />
|-<br />
| 1<br />
| General commands<br />
|-<br />
| 2<br />
| System calls<br />
|-<br />
| 3<br />
| Library functions, covering, in particular, the C standard library<br />
|-<br />
| 4<br />
| Special files (usually devices, those found in <code>/dev</code>) and drivers<br />
|-<br />
| 5<br />
| File formats and conventions<br />
|-<br />
| 6<br />
| Games and screensavers<br />
|-<br />
| 7<br />
| Miscellaneous<br />
|-<br />
| 8<br />
| System administration commands and daemons<br />
|}<br />
<br clear="all"/><br />
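<br />
To request a page from a specific section, give the section number first; for example:<br />
 $ man 1 printf  # the shell command<br />
 $ man 3 printf  # the C library function<br />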
<br />
==Bash (Shellshock) vulnerability==<br />
* Run the following command from a Bash shell:<br />
$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"<br />
<br />
If you see the following output, you are '''VULNERABLE''':<br />
<br />
<div style="padding: 1em; margin: 10px; border: 2px solid #f00;"><br />
vulnerable<br />
bash: BASH_FUNC_x(): line 0: syntax error near unexpected token `)'<br />
bash: BASH_FUNC_x(): line 0: `BASH_FUNC_x() () { :;}; echo vulnerable'<br />
bash: error importing function definition for `BASH_FUNC_x'<br />
test<br />
</div><br />
<br />
If you see the following output, you are ''NOT VULNERABLE'':<br />
<div style="padding: 1em; margin: 10px; border: 2px solid #0f0;"><br />
bash: warning: x: ignoring function definition attempt<br />
bash: error importing function definition for `BASH_FUNC_x'<br />
test<br />
</div><br />
<br />
If you are vulnerable, make sure you update Bash to the latest version your Linux distribution has to offer. If you still see the same vulnerability after updating from a repository, you should probably download the latest [https://ftp.gnu.org/gnu/bash/ source code] of Bash and compile it on your own. Do '''''not''''' take this bug lightly!<br />
<br />
==Christoph's Additions==<br />
<pre><br />
PS1='\[\033[0;31m\][\u@christophchamp]\[\033[00m\]:\[\033[0;33m\]`pwd`\[\033[00m\]> '<br />
<br />
[christoph@christophchamp]:/home/christoph><br />
</pre><br />
<br />
==References==<br />
* [[wikipedia:Bash]]<br />
* [[wikibooks:Bourne Shell Scripting]]<br />
<br />
==External links==<br />
*[http://www.gnu.org/software/bash/bash.html Bash home page]<br />
*[ftp://ftp.cwru.edu/pub/bash/FAQ Bash FAQ]<br />
*[http://groups.google.com/groups?dq=&lr=&ie=UTF-8&group=gnu.announce&selm=mailman.1865.1091019304.1960.info-gnu%40gnu.org Bash 3.0 Announcement]<br />
*[http://www.network-theory.co.uk/bash/manual/ The GNU Bash Reference Manual], ([http://www.network-theory.co.uk/docs/bashref/ HTML version]) by Chet Ramey and Brian Fox, ISBN 0954161777<br />
===Bash guides from the Linux Documentation Project===<br />
*[http://www.tldp.org/LDP/Bash-Beginners-Guide/html/ Bash Guide for Beginners]<br />
*[http://www.tldp.org/HOWTO/Bash-Prog-Intro-HOWTO.html BASH Programming - Introduction HOW-TO]<br />
*[http://en.tldp.org/LDP/abs/html/ Advanced Bash-Scripting Guide] &mdash; An in-depth exploration of the art of shell scripting<br />
*[http://tldp.org/LDP/GNU-Linux-Tools-Summary/html/text-manipulation-tools.html Text manipulation tools] &mdash; from GNU/Linux Command-Line Tools Summary<br />
===Other guides and tutorials===<br />
*[http://wooledge.org:8000/BashPitfalls Bash Pitfalls] / [http://wooledge.org:8000/BashFAQ BashFAQ]<br />
*[http://www.cyberciti.biz/nixcraft/linux/docs/uniqlinuxfeatures/lsst/ Linux Shell Scripting Tutorial - A Beginner's handbook]<br />
*[http://www.linux.ie/newusers/beginners-linux-guide/shells.php About Shells]<br />
*[http://hypexr.homelinux.org/bash_tutorial.html Beginners Bash Tutorial]<br />
*[http://deadman.org/bash.html Advancing in the Bash Shell tutorial]<br />
*[http://www.vias.org/linux-knowhow/bbg_intro_10.html Linux Know-How] including the Bash Guide for Beginners<br />
*[http://www.bashscripts.org/ BashScripts.org]<!--<br />
*[http://www-128.ibm.com/developerworks/library/l-bash.html Bash by example - Part 1]<br />
*[http://www.ibm.com/developerworks/library/l-bash2.html Bash by example - Part 2]<br />
*[http://www.ibm.com/developerworks/library/l-bash3.html Bash by example - Part 3]--><br />
*[http://www.faqs.org/docs/linux_intro/x7003.html Common features]<br />
*[http://www-128.ibm.com/developerworks/aix/library/au-satbash.html?ca=dgr-lnxw97bash Get the most out of bash]<br />
<br />
===Custom .bashrc or .bash_profile===<br />
*[http://static.askapache.com/askapache-bash-profile.txt askapache-bash-profile.txt]<br />
<br />
[[Category:Scripting languages]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=RPM_Package_Manager&diff=8270RPM Package Manager2023-04-19T22:36:03Z<p>Christoph: /* RPM vs. APT */</p>
<hr />
<div>'''RPM Package Manager''' (originally Red Hat Package Manager, abbreviated '''RPM''') is a package management system.<br />
<br />
==Usage==<br />
NOTE: RPM file names normally have the following format:<br />
<name>-<version>-<release>.<arch>.rpm<br />
<br />
===Examples===<br />
*Install/upgrade packages verbosely:<br />
rpm -Uvh package_name(s)<br />
*Install/upgrade packages verbosely ''ignoring'' dependencies and/or conflicts. (''Note: Only do this if you really know what you are doing; the warnings are there for a reason!'')<br />
 rpm -Uvh --force package_name(s)<br />
*Return information about a package that is already installed:<br />
rpm -qi package_name<br />
*Return information about a package that is ''not'' necessarily installed:<br />
rpm -qpi package_name<br />
*Return the ChangeLog of the installed package:<br />
rpm -q --changelog package_name<br />
*List the absolute paths of all files installed from a given package:<br />
rpm -ql package_name<br />
*Return <code>glibc-version</code> (if installed):<br />
rpm -qa |grep glibc<br />
*Returns <code>glibc-version</code>:<br />
rpm -q --whatprovides /lib/libc.so.6<br />
*Example of what packages [[Python]] requires:<br />
rpm -q --whatrequires python<br />
lib64xml2-python-2.6.27-3mdv2007.1<br />
tkinter-2.5-4mdv2007.1<br />
python-imaging-1.1.4-11mdv2007.1<br />
python-numpy-1.0.1-2mdv2007.1<br />
python-numeric-24.2-4mdv2007.1<br />
lib64python2.5-devel-2.5-4.1mdv2007.1<br />
lib64python2.5-devel-2.5-4.1mdv2007.1<br />
*Get a list of all packages currently installed:<br />
rpm -qa --queryformat='%{NAME} %{ARCH}\n' | sort | uniq > pkgs.txt<br />
<br />
You can alter the output from a query using "querytags". To find out which querytags are available, execute the following command:<br />
rpm --querytags<br />
You can then display selected information on the package(s) in question. For example, if you only wish to display the names of all packages having "auto" in them (i.e., not version, release, arch, etc.), execute the following command:<br />
rpm -qa --qf '%{NAME}\n'|grep auto<br />
This is useful if you want to compare installed packages on two machines with different versions (and/or architectures).<br />
<br />
Source code may also be distributed in RPM packages. Such package labels do not have an architecture part and replace it with "src". E.g.:<br />
libgnomeuimm2.0-2.0.0-3.src.rpm<br />
<br />
An SRPM is an RPM package with source code. Unlike a [[Tar|tarball]] (or an RPM), an SRPM package can be automatically compiled and installed, following instructions in the .spec file included in the SRPM.<br />
<br />
==Recompile with -fPIC==<br />
see: [http://www.gentoo.org/proj/en/base/amd64/howtos/index.xml?part=1&chap=3 HOWTO fix -fPIC errors] for some background.<br />
Some 64-bit packages require the source to be compiled with the <code>-fPIC</code> option. For this example, I will be using LAPACK (64-bit version).<br />
<br />
*Step #1: Download the source rpm (e.g., <code>lapack.src.rpm</code>)<br />
*Step #2: Install/unpack the package<br />
rpm -i lapack.src.rpm<br />
*Step #3: Add the <code>-fPIC</code> flag to the .SPEC file<br />
vi /usr/src/rpm/SPECS/lapack.spec<br />
%define optflags ... -fPIC -DPIC ...<br />
*Step #4: Check that the option was compiled into the library:<br />
nm /usr/lib64/liblapack.a|more<br />
<br />
That should take care of errors that look something like this:<br />
/usr/lib64/liblapack.a: relocation R_X86_64_32 against `a local symbol' can not <br />
be used when making a shared object; recompile with -fPIC<br />
<br />
==RPM vs. APT==<br />
Show the equivalent commands in RedHat-based vs. Debian-based distros.<br />
<pre><br />
Feature rpm deb<br />
-----------------------------------------------------------------------------------------------------------------<br />
View all installed packages rpm -qa dpkg -l, dpkg-query -Wf '${Package}\n'<br />
dpkg --get-selections<br />
View files in an installed package rpm -ql ${PKG} dpkg -L ${PKG}<br />
View files in a package file rpm -qlp ./${PKG}.rpm dpkg -c ./${PKG}.deb<br />
View package info, installed package rpm -qi ${PKG} (1) apt-cache show ${PKG}<br />
dpkg -s ${PKG}<br />
View package info, package file rpm -qip ./${PKG}.rpm (1) dpkg -I ./${PKG}.deb<br />
View pre/post install shell scripts rpm -q --scripts ${PKG} cat /var/lib/dpkg/info/${PKG}.{pre,post}{inst,rm}<br />
View changelog for a package file rpm -qp --changelog ./${PKG}.rpm dpkg-deb --fsys-tarfile ${PKG}.deb |\<br />
tar -O -xvf - ./usr/share/doc/${PKG}/changelog.gz | gunzip<br />
Install a package file rpm -ivh ./${PKG}.rpm dpkg -i<br />
Uninstall a package rpm -e ${PKG} apt-get remove/purge ${PKG}<br />
dpkg -r/dpkg -P<br />
Upgrade a package from a file rpm -Uvh ./${PKG}.rpm dpkg -i ${PKG}.deb<br />
Find which package owns a file rpm -qif /path/to/file dpkg -S /path/to/file<br />
Find which package provides a file rpm -q --whatprovides /path/to/file dpkg-query -S /path/to/file<br />
List dependencies of a package rpm -q --requires ${PKG} apt-cache depends package<br />
List dependencies of a package file rpm -qp --requires ./${PKG}.rpm (shown in package's info)<br />
List reverse dependencies of package apt-cache rdepends ${PKG}<br />
Verify installed package files rpm -qV debsums<br />
against MD5 checksums<br />
Query database for data rpm --queryformat dpkg-query -s ${PKG}<br />
List files in package dpkg-query -L ${PKG}<br />
Find which package provides a         yum provides htpasswd                  apt-file search htpasswd<br />
command<br />
</pre><br />
<br />
; Find what package a file belongs to in Debian/Ubuntu<br />
<br />
<pre><br />
$ apt-file search filename<br />
#~OR~<br />
$ apt-file search /path/to/file<br />
</pre><br />
<br />
* Install and update <code>apt-file</code>:<br />
<pre><br />
$ sudo apt install -y apt-file<br />
$ sudo apt-file update<br />
</pre><br />
<br />
* Prevent package(s) from being upgraded:<br />
<pre><br />
$ sudo apt-mark hold docker-ce docker-ce-cli<br />
$ sudo apt-mark showhold<br />
docker-ce<br />
docker-ce-cli<br />
</pre><br />
<br />
* Reconfigure the console (font, character set, etc.):<br />
<pre><br />
$ sudo dpkg-reconfigure console-setup<br />
</pre><br />
<br />
* Get current OS architecture:<br />
<pre><br />
$ dpkg --print-architecture<br />
amd64<br />
</pre><br />
<br />
See also:<br />
*[http://www.jpsdomain.org/linux/apt.html APT and RPM Packager Lookup Tables]<br />
*[https://help.ubuntu.com/community/SwitchingToUbuntu/FromLinux/RedHatEnterpriseLinuxAndFedora SwitchingToUbuntu/FromLinux/RedHatEnterpriseLinuxAndFedora]<br />
<br />
==See also==<br />
*[[rpmbuild]]<br />
*[[urpmi]]<br />
*[[wikipedia:Yellow dog Updater, Modified|yum]]<br />
*[[wikipedia:Yet Another Setup Tool|YaST]]<br />
<br />
==External links==<br />
*[http://fedora.redhat.com/docs/drafts/rpm-guide-en/index.html Red Hat RPM Guide] from the Fedora project.<br />
*Fox, Pennington, Red Hat (2003): Fedora Project Developer's Guide: [http://fedora.redhat.com/participate/developers-guide/ch-rpm-building.html Chapter 4. Building RPM Packages]<br />
*[http://www.rpm.org/ RPM Package Manager homepage]<br />
*[http://www.redhatmagazine.com/2007/02/08/the-story-of-rpm/ The story of RPM] by Matt Frye in [http://www.redhatmagazine.com/ Red Hat Magazine]<br />
*[http://www.hut.fi/~tkarvine/rpm-build-as-user.html RPM Building as a User]<br />
*Bailey, Edward C. (2000): [http://www.rpm.org/max-rpm/ Maximum RPM], an outdated but popular rpm reference<br />
*Bailey, Edward C. (2000): [http://rpm.org/max-rpm-snapshot/ Maximum RPM], actualized Maximum RPM edition<br />
*[http://developer.novell.com/wiki/index.php?title=SUSE_Package_Conventions SUSE Package Conventions]<br />
*[http://www.linuxbase.org/spec/refspecs/LSB_1.3.0/gLSB/gLSB/swinstall.html Package File Format - Linux Standards Base]<br />
*[http://lwn.net/Articles/214255/ RPM -- plans, goals, etc. ] &mdash; Fedora announcement about RPM.<br />
*[http://wiki.rpm.org RPM.org's wiki]<br />
*[[wikipedia:RPM Package Manager]]<br />
<br />
[[Category:Linux Command Line Tools]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Kubernetes&diff=8269Kubernetes2023-04-14T16:55:18Z<p>Christoph: /* Release history */</p>
<hr />
<div>'''Kubernetes''' (also known by its numeronym '''k8s''') is an open source container cluster manager. Kubernetes' primary goal is to provide a platform for automating deployment, scaling, and operations of application containers across a cluster of hosts. Kubernetes was released by Google in July 2015.<br />
<br />
* Get the latest stable release of k8s with:<br />
$ curl -sSL <nowiki>https://dl.k8s.io/release/stable.txt</nowiki><br />
<br />
==Release history==<br />
<br />
NOTE: There is no such thing as Kubernetes Long-Term-Support (LTS). There is a new "minor" release ''roughly'' every 3 months (note: changed to ''roughly'' every 4 months in 2020).<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="3" bgcolor="#EFEFEF" | '''Kubernetes release history'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Release<br />
!Date<br />
!Cadence (days)<br />
|- align="left"<br />
|1.0 || 2015-07-10 ||align="right"|<br />
|--bgcolor="#eeeeee"<br />
|1.1 || 2015-11-09 ||align="right"| 122<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.2.md 1.2] || 2016-03-16 ||align="right"| 128<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.3.md 1.3] || 2016-07-01 ||align="right"| 107<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.4.md 1.4] || 2016-09-26 ||align="right"| 87<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.5.md 1.5] || 2016-12-12 ||align="right"| 77<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.6.md 1.6] || 2017-03-28 ||align="right"| 106<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.7.md 1.7] || 2017-06-30 ||align="right"| 94<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.8.md 1.8] || 2017-09-28 ||align="right"| 90<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.9.md 1.9] || 2017-12-15 ||align="right"| 78<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.10.md 1.10] || 2018-03-26 ||align="right"| 101<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.11.md 1.11] || 2018-06-27 ||align="right"| 93<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.12.md 1.12] || 2018-09-27 ||align="right"| 92<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.13.md 1.13] || 2018-12-03 ||align="right"| 67<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.14.md 1.14] || 2019-03-25 ||align="right"| 112<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md 1.15] || 2019-06-17 ||align="right"| 84<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md 1.16] || 2019-09-18 ||align="right"| 93<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md 1.17] || 2019-12-09 ||align="right"| 82<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md 1.18] || 2020-03-25 ||align="right"| 107<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md 1.19] || 2020-08-26 ||align="right"| 154<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md 1.20] || 2020-12-08 ||align="right"| 104<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md 1.21] || 2021-04-08 ||align="right"| 121<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md 1.22] || 2021-08-04 ||align="right"| 118<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md 1.23] || 2021-12-07 ||align="right"| 125<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md 1.24] || 2022-05-03 ||align="right"| 147<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md 1.25] || 2022-08-23 ||align="right"| 112<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md 1.26] || 2023-01-18 ||align="right"| 148<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md 1.27] || 2023-04-11 ||align="right"| 83<br />
|}<br />
</div><br />
<br clear="all"/><br />
See: [https://gravitational.com/blog/kubernetes-release-cycle The full-time job of keeping up with Kubernetes]<br />
<br />
==Providers and installers==<br />
<br />
* Vanilla Kubernetes<br />
* AWS:<br />
** Managed: EKS<br />
** Kops<br />
** Kube-AWS<br />
** Kismatic<br />
** Kubicorn<br />
** Stack Point Cloud<br />
* Google:<br />
** Managed: GKE<br />
** [[Kubernetes/the-hard-way|Kubernetes the Hard Way]]<br />
** Stack Point Cloud<br />
** Typhoon<br />
* Azure AKS<br />
* Ubuntu UKS<br />
* VMware PKS<br />
* [[Rancher|Rancher RKE]]<br />
* CoreOS Tectonic<br />
<br />
==Design overview==<br />
Kubernetes is built through the definition of a set of components (building blocks or "primitives") which, when used collectively, provide a method for the deployment, maintenance, and scalability of container-based application clusters.<br />
<br />
These "primitives" are designed to be ''loosely coupled'' (i.e., where little to no knowledge of the other component definitions is needed to use) as well as easily extensible through an API. Both the internal components of Kubernetes as well as the extensions and containers make use of this API.<br />
<br />
==Components==<br />
The building blocks of Kubernetes are the following (note that these are also referred to as Kubernetes "Objects" or "API Primitives"):<br />
<br />
;Cluster : A cluster is a set of machines (physical or virtual) on which your applications are managed and run. All machines are managed as a cluster (or set of clusters, depending on the topology used).<br />
;Nodes (minions) : You can think of these as "container clients". These are the individual hosts (physical or virtual) that Docker is installed on and that host the various containers within your managed cluster.<br />
: Each node will run etcd (a distributed key-value store, used by Kubernetes for exchanging messages and reporting on cluster status) as well as the Kubernetes Proxy.<br />
;Pods : A pod consists of one or more containers. Those containers are guaranteed (by the cluster controller) to be located on the same host machine (aka "co-located") in order to facilitate sharing of resources. For example, it makes sense to have database processes and data containers as close as possible; ideally, they should be in the same pod.<br />
: Pods "work together", as in a multi-tiered application configuration. Each set of pods that define and implement a service (e.g., MySQL or Apache) are defined by the label selector (see below).<br />
: Pods are assigned unique IPs within each cluster. These allow an application to use ports without having to worry about conflicting port utilization.<br />
: Pods can contain definitions of disk volumes or shares, and then provide access from those to all the members (containers) within the pod.<br />
: Finally, pod management is done through the API or delegated to a controller.<br />
;Labels : Clients can attach key-value pairs to any object in the system (e.g., Pods or Nodes). These become the labels that identify them in the configuration and management of them. The key-value pairs can be used to filter, organize, and perform mass operations on a set of resources.<br />
;Selectors : Label Selectors represent queries that are made against those labels. They resolve to the corresponding matching objects. A Selector expression matches labels to filter certain resources. For example, you may want to search for all pods that belong to a certain service, or find all containers that have a specific tier Label value, such as "database". Labels and Selectors are inherently two sides of the same coin. You can use Labels to classify resources and use Selectors to find them and use them for certain actions.<br />
: These two items are the primary way grouping is done in Kubernetes, and they determine which components a given operation applies to.<br />
;Controllers : These are used in the management of your cluster. Controllers are the mechanism by which your desired configuration state is enforced.<br />
: Controllers manage a set of pods and, depending on the desired configuration state, may engage other controllers to handle the replication and scaling (via a Replication Controller) of a given number of containers and pods across the cluster. A controller is also responsible for replacing any container in a pod that fails (based on the desired state of the cluster).<br />
: Replication Controllers (RC) are a subset of Controllers and are an abstraction used to manage pod lifecycles. One of the key uses of RCs is to maintain a certain number of running Pods (e.g., for scaling or ensuring that at least one Pod is running at all times, etc.). It is considered a "best practice" to use RCs to define Pod lifecycles, rather than creating Pods directly.<br />
: Other controllers that can be engaged include a ''DaemonSet Controller'' (enforces a 1-to-1 ratio of pods to Worker Nodes) and a ''Job Controller'' (that runs pods to "completion", such as in batch jobs).<br />
: The set of pods a given controller manages is determined by the label selectors that are part of its definition.<br />
;Replica Sets: These define how many replicas of each Pod will be running. They also monitor and ensure the required number of Pods are running, replacing Pods that die. Replica Sets can act as replacements for Replication Controllers.<br />
;Services : A Service is an abstraction on top of Pods, which provides a single IP address and DNS name by which the Pods can be accessed. This load balancing configuration is much easier to manage and helps scale Pods seamlessly.<br />
: Kubernetes can then provide service discovery and handle routing with the static IP for each pod as well as load balancing (round-robin based) connections to that service among the pods that match the label selector indicated.<br />
: By default, a service is only exposed inside the cluster, but it can also be exposed outside the cluster, as needed.<br />
;Volumes : A Volume is a directory with data, which is accessible to a container. The volume's lifetime is tied to the Pod that encloses it.<br />
;Name : A name by which a resource is identified.<br />
;Namespace : A Namespace provides additional qualification to a resource name. This is especially helpful when multiple teams/projects are using the same cluster and there is a potential for name collision. You can think of a Namespace as a virtual wall between multiple clusters.<br />
;Annotations : An Annotation is a Label, but with much larger data capacity. Typically, this data is not readable by humans and is not easy to filter through. Annotation is useful only for storing data that may not be searched, but is required by the resource (e.g., storing strong keys, etc.).<br />
;Control Plane : The collection of components (e.g., the API Server, scheduler, and controller manager) that make global decisions about the cluster and enforce the desired state.<br />
;API : The REST API through which all internal and external interaction with the cluster takes place (see the kube-apiserver component below).<br />
<br />
===Pods===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/ Pod]'' is the smallest and simplest Kubernetes object. It is the unit of deployment in Kubernetes, which represents a single instance of the application. A Pod is a logical collection of one or more containers, which:<br />
<br />
* are scheduled together on the same host;<br />
* share the same network namespace; and<br />
* mount the same external storage (Volumes).<br />
<br />
Pods are ephemeral in nature and do not have the capability to self-heal. That is why we use them with controllers, which can handle a Pod's replication, fault tolerance, self-healing, etc. Examples of controllers are ''Deployments'', ''ReplicaSets'', ''ReplicationControllers'', etc. We attach the Pod's specification to other objects using Pod Templates (see below).<br />
<br />
===Labels===<br />
Labels are key-value pairs that can be attached to any Kubernetes object (e.g. ''Pods''). Labels are used to organize and select a subset of objects, based on the requirements in place. Many objects can have the same label(s). Labels do not provide uniqueness to objects. <br />
<br />
===Label Selectors===<br />
With Label Selectors, we can select a subset of objects. Kubernetes supports two types of Selectors:<br />
<br />
;Equality-Based Selectors : Equality-Based Selectors allow filtering of objects based on label keys and values. With this type of Selector, we can use the <code>=</code>, <code>==</code>, or <code>!=</code> operators. For example, with <code>env==dev</code>, we are selecting the objects where the "<code>env</code>" label is set to "<code>dev</code>".<br />
;Set-Based Selectors : Set-Based Selectors allow filtering of objects based on a set of values. With this type of Selector, we can use the <code>in</code>, <code>notin</code>, and <code>exists</code> operators. For example, with <code>env in (dev,qa)</code>, we are selecting objects where the "<code>env</code>" label is set to "<code>dev</code>" or "<code>qa</code>".<br />
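<br />
Both types of Selectors can be used directly at the command line via kubectl's <code>-l</code>/<code>--selector</code> option; for example:<br />
$ kubectl get pods -l env==dev          # equality-based<br />
$ kubectl get pods -l 'env in (dev,qa)' # set-based<br />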
<br />
===Replication Controllers===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/ ReplicationController]'' (rc) is a controller that is part of the Master Node's Controller Manager. It makes sure the specified number of replicas for a Pod is running at any given point in time. If there are more Pods than the desired count, the ReplicationController kills the extra Pods, and, if there are fewer Pods, it creates more Pods to match the desired count. Generally, we do not deploy a Pod independently, as it would not be able to restart itself if something goes wrong. We always use controllers like ReplicationController to create and manage Pods.<br />
<br />
===Replica Sets===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/ ReplicaSet]'' (rs) is the next-generation ReplicationController. ReplicaSets support both equality- and set-based Selectors, whereas ReplicationControllers only support equality-based Selectors. As of January 2018, this is the only difference.<br />
<br />
As an example, say we create a ReplicaSet with "desired replicas = 3" (so that "<code>current==desired</code>"). Any time "<code>current!=desired</code>" (e.g., one of the Pods dies), the ReplicaSet detects that the current state no longer matches the desired state and creates one more Pod, thus ensuring that the current state matches the desired state.<br />
<br />
ReplicaSets can be used independently, but they are mostly used by Deployments to orchestrate the Pod creation, deletion, and updates. A Deployment automatically creates the ReplicaSets, and we do not have to worry about managing them.<br />
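<br />
The following is a minimal ReplicaSet definition (a sketch, assuming Kubernetes v1.9+, where the <code>apps/v1</code> API is stable; the name and image are placeholders):<br />
<pre><br />
---<br />
apiVersion: apps/v1<br />
kind: ReplicaSet<br />
metadata:<br />
  name: nginx-rs<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />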
<br />
===Deployments===<br />
''[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Deployment]'' objects provide declarative updates to Pods and ReplicaSets. The DeploymentController is part of the Master Node's Controller Manager, and it makes sure that the current state always matches the desired state.<br />
<br />
As an example, let's say we have a Deployment which creates a "ReplicaSet A". ReplicaSet A then creates 3 Pods. In each Pod, one of the containers uses the <code>nginx:1.7.9</code> image.<br />
<br />
Now, in the Deployment, we change the Pod's template and update the image for the Nginx container from <code>nginx:1.7.9</code> to <code>nginx:1.9.1</code>. As we have modified the Pod's template, a new "ReplicaSet B" gets created. This process is referred to as a "Deployment rollout". (A rollout is only triggered when we update the Pod's template for a Deployment. Operations like scaling the Deployment do not trigger a rollout.) Once ReplicaSet B is ready, the Deployment starts pointing to it.<br />
<br />
On top of ReplicaSets, Deployments provide features like Deployment recording, with which, if something goes wrong, we can roll back to a previously known state.<br />
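<br />
Rollouts can be inspected and rolled back from the command line; for example (assuming a Deployment named <code>nginx-deployment</code>):<br />
$ kubectl rollout status deployment/nginx-deployment<br />
$ kubectl rollout history deployment/nginx-deployment<br />
$ kubectl rollout undo deployment/nginx-deployment  # roll back to the previous revision<br />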
<br />
===Namespaces===<br />
If we have numerous users whom we would like to organize into teams/projects, we can partition the Kubernetes cluster into sub-clusters using ''[https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ Namespaces]''. The names of the resources/objects created inside a Namespace are unique, but not across Namespaces.<br />
<br />
To list all the Namespaces, we can run the following command:<br />
$ kubectl get namespaces<br />
NAME STATUS AGE<br />
default Active 2h<br />
kube-public Active 2h<br />
kube-system Active 2h<br />
<br />
Generally, Kubernetes creates two default namespaces: <code>kube-system</code> and <code>default</code>. The <code>kube-system</code> namespace contains the objects created by the Kubernetes system. The <code>default</code> namespace contains the objects which do not belong to any other Namespace. By default, we connect to the <code>default</code> Namespace. <code>kube-public</code> is a special namespace, which is readable by all users and used for special purposes, like bootstrapping a cluster.<br />
<br />
Using ''[https://kubernetes.io/docs/concepts/policy/resource-quotas/ Resource Quotas]'', we can divide the cluster resources within Namespaces.<br />
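<br />
As a sketch, a ResourceQuota limiting a hypothetical <code>my-ns</code> Namespace might look like:<br />
<pre><br />
---<br />
apiVersion: v1<br />
kind: ResourceQuota<br />
metadata:<br />
  name: compute-quota<br />
  namespace: my-ns<br />
spec:<br />
  hard:<br />
    pods: "10"<br />
    requests.cpu: "4"<br />
    requests.memory: 8Gi<br />
</pre><br />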
<br />
===Component services===<br />
The component services running on a standard master/worker node(s) Kubernetes setup are as follows:<br />
* Kubernetes Master node(s)<br />
*; kube-apiserver : Exposes Kubernetes APIs<br />
*; kube-controller-manager : Runs controllers to handle nodes, endpoints, etc.<br />
*; kube-scheduler : Watches for new pods and assigns them nodes<br />
*; etcd : Distributed key-value store<br />
*; DNS : [optional] DNS for Kubernetes services<br />
* Worker node(s)<br />
*; kubelet : Manages pods on a node, volumes, secrets, creating new containers, health checks, etc.<br />
*; kube-proxy : Maintains network rules, port forwarding, etc.<br />
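<br />
On the Master Node, the health of the scheduler, controller manager, and etcd can be checked with the following command (note: the exact output format differs between Kubernetes versions):<br />
$ kubectl get componentstatuses<br />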
<br />
==Set up a Kubernetes cluster==<br />
<br />
<div style="margin: 10px; padding: 5px; border: 2px solid red;">'''IMPORTANT''': The following is how to setup Kubernetes 1.2 that is, as of January 2018, a very old version. I will update this article with how to setup k8s using a much newer version (v1.9) when I have time.<br />
</div><br />
<br />
In this section, I will show you how to setup a Kubernetes cluster with etcd and Docker. The cluster will consist of 1 master node and 3 worker nodes.<br />
<br />
===Set up the VMs===<br />
<br />
For this demo, I will be creating 4 VMs via [[Vagrant]] (with VirtualBox).<br />
<br />
* Create Vagrant demo environment:<br />
$ mkdir -p $HOME/dev/kubernetes && cd $_<br />
<br />
* Create Vagrantfile with the following contents:<br />
<pre><br />
# -*- mode: ruby -*-<br />
# vi: set ft=ruby :<br />
<br />
require 'yaml'<br />
VAGRANTFILE_API_VERSION = "2"<br />
<br />
$common_script = <<COMMON_SCRIPT<br />
# Set verbose<br />
set -v<br />
# Set exit on error<br />
set -e<br />
echo -e "$(date) [INFO] Starting modified Vagrant..."<br />
sudo yum update -y<br />
# Timestamp provision<br />
date > /etc/vagrant_provisioned_at<br />
COMMON_SCRIPT<br />
<br />
unless defined? CONFIG<br />
  configuration_file = File.join(File.dirname(__FILE__), 'vagrant_config.yml')<br />
  CONFIG = YAML.load(File.open(configuration_file, File::RDONLY).read)<br />
end<br />
<br />
CONFIG['box'] = {} unless CONFIG.key?('box')<br />
<br />
def modifyvm_network(node)<br />
  node.vm.provider "virtualbox" do |vbox|<br />
    vbox.customize ["modifyvm", :id, "--nicpromisc1", "allow-all"]<br />
    #vbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]<br />
    vbox.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]<br />
  end<br />
end<br />
<br />
def modifyvm_resources(node, memory, cpus)<br />
  node.vm.provider "virtualbox" do |vbox|<br />
    vbox.customize ["modifyvm", :id, "--memory", memory]<br />
    vbox.customize ["modifyvm", :id, "--cpus", cpus]<br />
  end<br />
end<br />
<br />
## START: Actual Vagrant process<br />
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|<br />
<br />
  config.vm.box = CONFIG['box']['name']<br />
<br />
  # Uncomment the following line if you wish to be able to pass files from<br />
  # your local filesystem directly into the vagrant VM:<br />
  #config.vm.synced_folder "data", "/vagrant"<br />
<br />
  ## VM: k8s master #############################################################<br />
  config.vm.define "master" do |node|<br />
    node.vm.hostname = "k8s.master.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    #node.vm.network "forwarded_port", guest: 80, host: 8080<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['master']<br />
<br />
    # Uncomment the following if you wish to define CPU/memory:<br />
    #node.vm.provider "virtualbox" do |vbox|<br />
    #  vbox.customize ["modifyvm", :id, "--memory", "4096"]<br />
    #  vbox.customize ["modifyvm", :id, "--cpus", "2"]<br />
    #end<br />
    #modifyvm_resources(node, "4096", "2")<br />
  end<br />
  ## VM: k8s minion1 ############################################################<br />
  config.vm.define "minion1" do |node|<br />
    node.vm.hostname = "k8s.minion1.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion1']<br />
  end<br />
  ## VM: k8s minion2 ############################################################<br />
  config.vm.define "minion2" do |node|<br />
    node.vm.hostname = "k8s.minion2.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion2']<br />
  end<br />
  ## VM: k8s minion3 ############################################################<br />
  config.vm.define "minion3" do |node|<br />
    node.vm.hostname = "k8s.minion3.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion3']<br />
  end<br />
  ###############################################################################<br />
<br />
end<br />
</pre><br />
<br />
The above Vagrantfile uses the following configuration file:<br />
$ cat vagrant_config.yml<br />
<pre><br />
---<br />
box:<br />
  name: centos/7<br />
  storage_controller: 'SATA Controller'<br />
debug: false<br />
development: false<br />
network:<br />
  dns1: 8.8.8.8<br />
  dns2: 8.8.4.4<br />
  internal:<br />
    network: 192.168.200.0/24<br />
  external:<br />
    start: 192.168.100.100<br />
    end: 192.168.100.200<br />
    network: 192.168.100.0/24<br />
    bridge: wlan0<br />
    netmask: 255.255.255.0<br />
    broadcast: 192.168.100.255<br />
host_groups:<br />
  master: 192.168.200.100<br />
  minion1: 192.168.200.101<br />
  minion2: 192.168.200.102<br />
  minion3: 192.168.200.103<br />
</pre><br />
<br />
* In the Vagrant Kubernetes directory (i.e., <code>$HOME/dev/kubernetes</code>), run the following command:<br />
$ vagrant up<br />
<br />
===Set up the hosts===<br />
''Note: Run the following commands/steps on all hosts (master and minions).''<br />
<br />
* Log into the k8s master host:<br />
$ vagrant ssh master<br />
<br />
* Add all of the Kubernetes cluster hosts to <code>/etc/hosts</code>:<br />
$ cat << EOF >> /etc/hosts<br />
192.168.200.100 k8s.master.dev<br />
192.168.200.101 k8s.minion1.dev<br />
192.168.200.102 k8s.minion2.dev<br />
192.168.200.103 k8s.minion3.dev<br />
EOF<br />
<br />
* Install, enable, and start NTP:<br />
$ yum install -y ntp<br />
$ systemctl enable ntpd && systemctl start ntpd<br />
$ timedatectl<br />
<br />
* Disable any [[iptables|firewall rules]] (for now; we will add the rules back later):<br />
$ systemctl stop firewalld && systemctl disable firewalld<br />
$ systemctl stop iptables<br />
<br />
* Disable [[SELinux]] (for now; we will turn it on again later):<br />
$ setenforce 0<br />
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/sysconfig/selinux<br />
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config<br />
$ sestatus<br />
<br />
* Add the Docker repo and update yum:<br />
$ cat << EOF > /etc/yum.repos.d/virt7-docker-common-release.repo<br />
[virt7-docker-common-release]<br />
name=virt7-docker-common-release<br />
baseurl=<nowiki>http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/</nowiki><br />
gpgcheck=0<br />
EOF<br />
$ yum update<br />
<br />
* Install Docker, Kubernetes, and etcd:<br />
$ yum install -y --enablerepo=virt7-docker-common-release kubernetes docker etcd<br />
<br />
===Install and configure master controller===<br />
''Note: Run the following commands on only the master host.''<br />
<br />
* Edit <code>/etc/kubernetes/config</code> and add (or make changes to) the following lines:<br />
KUBE_MASTER="--master=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://k8s.master.dev:2379</nowiki>"<br />
<br />
* Edit <code>/etc/etcd/etcd.conf</code> and add (or make changes to) the following lines:<br />
[member]<br />
ETCD_LISTEN_CLIENT_URLS="<nowiki>http://0.0.0.0:2379</nowiki>"<br />
[cluster]<br />
ETCD_ADVERTISE_CLIENT_URLS="<nowiki>http://0.0.0.0:2379</nowiki>"<br />
<br />
* Edit <code>/etc/kubernetes/apiserver</code> and add (or make changes to) the following lines:<br />
<pre><br />
# The address on the local server to listen to.<br />
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"<br />
KUBE_API_ADDRESS="--address=0.0.0.0"<br />
<br />
# The port on the local server to listen on.<br />
KUBE_API_PORT="--port=8080"<br />
<br />
# Port minions listen on<br />
KUBELET_PORT="--kubelet-port=10250"<br />
<br />
# Comma separated list of nodes in the etcd cluster<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://127.0.0.1:2379</nowiki>"<br />
<br />
# Address range to use for services<br />
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"<br />
<br />
# default admission control policies<br />
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"<br />
<br />
# Add your own!<br />
KUBE_API_ARGS=""<br />
</pre><br />
<br />
* Enable and start the following etcd and Kubernetes services:<br />
<br />
$ for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do<br />
systemctl restart $SERVICE<br />
systemctl enable $SERVICE<br />
systemctl status $SERVICE <br />
done<br />
<br />
* Check on the status of the above services (the following command should report 4 running services):<br />
$ systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler | grep "(running)" | wc -l # => 4<br />
<br />
* Check on the status of the Kubernetes API server:<br />
$ kubectl cluster-info<br />
Kubernetes master is running at <nowiki>http://localhost:8080</nowiki><br />
$ curl <nowiki>http://localhost:8080/version</nowiki><br />
#~OR~<br />
$ curl <nowiki>http://k8s.master.dev:8080/version</nowiki><br />
<pre><br />
{<br />
  "major": "1",<br />
  "minor": "2",<br />
  "gitVersion": "v1.2.0",<br />
  "gitCommit": "ec7364b6e3b155e78086018aa644057edbe196e5",<br />
  "gitTreeState": "clean"<br />
}<br />
</pre><br />
<br />
* Get a list of Kubernetes API paths:<br />
$ curl <nowiki>http://k8s.master.dev:8080/paths</nowiki><br />
<pre><br />
{<br />
  "paths": [<br />
    "/api",<br />
    "/api/v1",<br />
    "/apis",<br />
    "/apis/autoscaling",<br />
    "/apis/autoscaling/v1",<br />
    "/apis/batch",<br />
    "/apis/batch/v1",<br />
    "/apis/extensions",<br />
    "/apis/extensions/v1beta1",<br />
    "/healthz",<br />
    "/healthz/ping",<br />
    "/logs/",<br />
    "/metrics",<br />
    "/resetMetrics",<br />
    "/swagger-ui/",<br />
    "/swaggerapi/",<br />
    "/ui/",<br />
    "/version"<br />
  ]<br />
}<br />
</pre><br />
<br />
* List all available paths (key-value stores) known to etcd:<br />
$ etcdctl ls / --recursive<br />
<br />
The master controller in a Kubernetes cluster must have the following services running to function as the master host in the cluster:<br />
* ntpd<br />
* etcd<br />
* kube-controller-manager<br />
* kube-apiserver<br />
* kube-scheduler<br />
<br />
Note: The Docker daemon should not be running on the master host.<br />
<br />
===Install and configure the minions===<br />
''Note: Run the following commands/steps on all minion hosts.''<br />
<br />
* Log into the k8s minion hosts:<br />
$ vagrant ssh minion1 # do the same for minion2 and minion3<br />
<br />
* Edit <code>/etc/kubernetes/config</code> and add (or make changes to) the following lines:<br />
KUBE_MASTER="--master=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://k8s.master.dev:2379</nowiki>"<br />
<br />
* Edit <code>/etc/kubernetes/kubelet</code> and add (or make changes to) the following lines:<br />
<pre><br />
###<br />
# kubernetes kubelet (minion) config<br />
<br />
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)<br />
KUBELET_ADDRESS="--address=0.0.0.0"<br />
<br />
# The port for the info server to serve on<br />
KUBELET_PORT="--port=10250"<br />
<br />
# You may leave this blank to use the actual hostname<br />
KUBELET_HOSTNAME="--hostname-override=k8s.minion1.dev" # ***CHANGE TO CORRECT MINION HOSTNAME***<br />
<br />
# location of the api-server<br />
KUBELET_API_SERVER="--api-servers=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
<br />
# pod infrastructure container<br />
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"<br />
<br />
# Add your own!<br />
KUBELET_ARGS=""<br />
</pre><br />
<br />
* Enable and start the following services:<br />
$ for SERVICE in kube-proxy kubelet docker; do<br />
systemctl restart $SERVICE<br />
systemctl enable $SERVICE<br />
systemctl status $SERVICE<br />
done<br />
<br />
* Test that Docker is running and can start containers:<br />
$ docker info<br />
$ docker pull hello-world<br />
$ docker run hello-world<br />
<br />
Each minion in a Kubernetes cluster must have the following services running to function as a member of the cluster (i.e., a "Ready" node):<br />
* ntpd<br />
* kubelet<br />
* kube-proxy<br />
* docker<br />
<br />
===Kubectl: Exploring our environment===<br />
''Note: Run all of the following commands on the master host.''<br />
<br />
* Get a list of nodes with <code>kubectl</code>:<br />
$ kubectl get nodes<br />
<pre><br />
NAME              STATUS    AGE<br />
k8s.minion1.dev   Ready     20m<br />
k8s.minion2.dev   Ready     12m<br />
k8s.minion3.dev   Ready     12m<br />
</pre><br />
<br />
* Describe nodes with <code>kubectl</code>:<br />
<br />
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'<br />
$ kubectl get nodes -o jsonpath='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' | tr ';' "\n"<br />
<pre><br />
k8s.minion1.dev:OutOfDisk=False<br />
Ready=True<br />
k8s.minion2.dev:OutOfDisk=False<br />
Ready=True<br />
k8s.minion3.dev:OutOfDisk=False<br />
Ready=True<br />
</pre><br />
<br />
* Get the man page for <code>kubectl</code>:<br />
$ man kubectl-get<br />
<br />
==Working with our Kubernetes cluster==<br />
<br />
''Note: The following section will be working from within the Kubernetes cluster we created above.''<br />
<br />
===Create and deploy pod definitions===<br />
<br />
* Turn off the Kubernetes services on nodes 2 and 3:<br />
minion{2,3}$ systemctl stop kubelet kube-proxy<br />
<br />
master$ kubectl get nodes<br />
<pre><br />
NAME              STATUS     AGE<br />
k8s.minion1.dev   Ready      1h<br />
k8s.minion2.dev   NotReady   37m<br />
k8s.minion3.dev   NotReady   39m<br />
</pre><br />
<br />
* Check for any k8s Pods (there should be none):<br />
master$ kubectl get pods<br />
<br />
* Create a builds directory for our Pods:<br />
master$ mkdir builds && cd $_<br />
<br />
* Create a Pod running Nginx inside a Docker container:<br />
<pre><br />
master$ kubectl create -f - <<EOF<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx:1.7.9<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
* Check on Pod creation status:<br />
master$ kubectl get pods<br />
<pre><br />
NAME      READY     STATUS              RESTARTS   AGE<br />
nginx     0/1       ContainerCreating   0          2s<br />
</pre><br />
master$ kubectl get pods<br />
<pre><br />
NAME      READY     STATUS    RESTARTS   AGE<br />
nginx     1/1       Running   0          3m<br />
</pre><br />
<br />
minion1$ docker ps<br />
<pre><br />
CONTAINER ID   IMAGE         COMMAND                  CREATED         STATUS         PORTS   NAMES<br />
a718c6c0355d   nginx:1.7.9   "nginx -g 'daemon off"   3 minutes ago   Up 3 minutes           k8s_nginx.4580025_nginx_default_699e...<br />
</pre><br />
<br />
master$ kubectl describe pod nginx<br />
<br />
master$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1<br />
busybox$ wget -qO- 172.17.0.2<br />
master$ kubectl delete pod busybox<br />
master$ kubectl delete pod nginx<br />
<br />
* Port forwarding:<br />
master$ kubectl create -f nginx.yml # see above for YAML<br />
master$ kubectl port-forward nginx :80 &<br />
I1020 23:12:29.478742 23394 portforward.go:213] Forwarding from [::1]:40065 -> 80<br />
master$ curl -I localhost:40065<br />
<br />
===Tags, labels, and selectors===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-pod-label.yml<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx<br />
  labels:<br />
    app: nginx<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx:1.7.9<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create -f nginx-pod-label.yml<br />
master$ kubectl get pods -l app=nginx<br />
master$ kubectl describe pods -l app=nginx<br />
<br />
* Add labels or overwrite existing ones:<br />
master$ kubectl label pods nginx new-label=mynginx<br />
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'<br />
new-label=mynginx<br />
master$ kubectl label pods nginx new-label=foo --overwrite<br />
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'<br />
new-label=foo<br />
<br />
===Deployments===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-dev.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-dev<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-dev<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-dev<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-prod.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-prod<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-prod<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-prod<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create --validate -f nginx-deployment-dev.yml<br />
master$ kubectl create --validate -f nginx-deployment-prod.yml<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME                                     READY     STATUS    RESTARTS   AGE<br />
nginx-deployment-dev-104434401-jiiic     1/1       Running   0          5m<br />
nginx-deployment-prod-3051195443-hj9b1   1/1       Running   0          12m<br />
</pre><br />
<br />
master$ kubectl describe deployments -l app=nginx-deployment-dev<br />
<pre><br />
Name:                   nginx-deployment-dev<br />
Namespace:              default<br />
CreationTimestamp:      Thu, 20 Oct 2016 23:48:46 +0000<br />
Labels:                 app=nginx-deployment-dev<br />
Selector:               app=nginx-deployment-dev<br />
Replicas:               1 updated | 1 total | 1 available | 0 unavailable<br />
StrategyType:           RollingUpdate<br />
MinReadySeconds:        0<br />
RollingUpdateStrategy:  1 max unavailable, 1 max surge<br />
OldReplicaSets:         <none><br />
NewReplicaSet:          nginx-deployment-dev-2568522567 (1/1 replicas created)<br />
...<br />
</pre><br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE<br />
nginx-deployment-prod   1         1         1            1           44s<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-dev-update.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-dev<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-dev<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-dev<br />
        image: nginx:1.8 # ***CHANGED***<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
master$ kubectl apply -f nginx-deployment-dev-update.yml<br />
master$ kubectl get pods -l app=nginx-deployment-dev<br />
<pre><br />
NAME                                   READY     STATUS              RESTARTS   AGE<br />
nginx-deployment-dev-104434401-jiiic   0/1       ContainerCreating   0          27s<br />
</pre><br />
master$ kubectl get pods -l app=nginx-deployment-dev<br />
<pre><br />
NAME                                   READY     STATUS    RESTARTS   AGE<br />
nginx-deployment-dev-104434401-jiiic   1/1       Running   0          6m<br />
</pre><br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment nginx-deployment-dev<br />
master$ kubectl delete deployment nginx-deployment-prod<br />
<br />
===Multi-Pod (container) replication controller===<br />
<br />
* Start the other two nodes (the ones we previously stopped):<br />
minion2$ systemctl start kubelet kube-proxy<br />
minion3$ systemctl start kubelet kube-proxy<br />
master$ kubectl get nodes<br />
<pre><br />
NAME              STATUS    AGE<br />
k8s.minion1.dev   Ready     2h<br />
k8s.minion2.dev   Ready     2h<br />
k8s.minion3.dev   Ready     2h<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-multi-node.yml<br />
---<br />
apiVersion: v1<br />
kind: ReplicationController<br />
metadata:<br />
  name: nginx-www<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    app: nginx<br />
  template:<br />
    metadata:<br />
      name: nginx<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create -f nginx-multi-node.yml<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME              READY     STATUS              RESTARTS   AGE<br />
nginx-www-2evxu   0/1       ContainerCreating   0          10s<br />
nginx-www-416ct   0/1       ContainerCreating   0          10s<br />
nginx-www-ax41w   0/1       ContainerCreating   0          10s<br />
</pre><br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME              READY     STATUS    RESTARTS   AGE<br />
nginx-www-2evxu   1/1       Running   0          1m<br />
nginx-www-416ct   1/1       Running   0          1m<br />
nginx-www-ax41w   1/1       Running   0          1m<br />
</pre><br />
<br />
master$ kubectl describe pods | awk '/^Node/{print $2}'<br />
<pre><br />
k8s.minion2.dev/192.168.200.102<br />
k8s.minion1.dev/192.168.200.101<br />
k8s.minion3.dev/192.168.200.103<br />
</pre><br />
<br />
minion1$ docker ps # 1 nginx container running<br />
minion2$ docker ps # 1 nginx container running<br />
minion3$ docker ps # 1 nginx container running<br />
minion3$ docker ps --format "<nowiki>{{.Image}}</nowiki>"<br />
<pre><br />
nginx<br />
gcr.io/google_containers/pause:2.0<br />
</pre><br />
<br />
master$ kubectl describe replicationcontroller<br />
<pre><br />
Name:         nginx-www<br />
Namespace:    default<br />
Image(s):     nginx<br />
Selector:     app=nginx<br />
Labels:       app=nginx<br />
Replicas:     3 current / 3 desired<br />
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed<br />
...<br />
</pre><br />
<br />
* Attempt to delete one of the three pods:<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME              READY     STATUS    RESTARTS   AGE<br />
nginx-www-2evxu   1/1       Running   0          11m<br />
nginx-www-416ct   1/1       Running   0          11m<br />
nginx-www-ax41w   1/1       Running   0          11m<br />
</pre><br />
master$ kubectl delete pod nginx-www-2evxu<br />
master$ kubectl get pods<br />
<pre><br />
NAME              READY     STATUS    RESTARTS   AGE<br />
nginx-www-3cck4   1/1       Running   0          12s<br />
nginx-www-416ct   1/1       Running   0          11m<br />
nginx-www-ax41w   1/1       Running   0          11m<br />
</pre><br />
<br />
A new pod (<code>nginx-www-3cck4</code>) automatically started up. This is because the expected state, as defined in our YAML file, is for there to be 3 pods running at all times. Thus, if one or more of the pods goes down, a new pod (or pods) will automatically start up to bring the cluster back to the expected state.<br />
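<br />
The desired replica count of a live ReplicationController can also be changed at the CLI; for example, to scale our <code>nginx-www</code> controller from 3 to 5 pods:<br />
master$ kubectl scale replicationcontroller nginx-www --replicas=5<br />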
<br />
* To force-delete all pods:<br />
master$ kubectl delete replicationcontroller nginx-www<br />
master$ kubectl get pods # nothing<br />
<br />
===Create and deploy service definitions===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-service.yml<br />
---<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: nginx-service<br />
spec:<br />
  ports:<br />
  - port: 8000<br />
    targetPort: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: nginx<br />
EOF<br />
</pre><br />
<br />
master$ kubectl get services<br />
<pre><br />
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE<br />
kubernetes   10.254.0.1   <none>        443/TCP   3h<br />
</pre><br />
master$ kubectl create -f nginx-service.yml<br />
<br />
master$ kubectl get services<br />
<pre><br />
NAME            CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE<br />
kubernetes      10.254.0.1       <none>        443/TCP    3h<br />
nginx-service   10.254.110.127   <none>        8000/TCP   10s<br />
</pre><br />
<br />
master$ kubectl run busybox --generator=run-pod/v1 --image=busybox --restart=Never --tty -i<br />
busybox$ wget -qO- 10.254.110.127:8000 # works<br />
<br />
* Cleanup:<br />
master$ kubectl delete pod busybox<br />
master$ kubectl delete service nginx-service<br />
master$ kubectl get pods<br />
<pre><br />
NAME              READY     STATUS    RESTARTS   AGE<br />
nginx-www-jh2e9   1/1       Running   0          13m<br />
nginx-www-jir2g   1/1       Running   0          13m<br />
nginx-www-w91uw   1/1       Running   0          13m<br />
</pre><br />
master$ kubectl delete replicationcontroller nginx-www<br />
master$ kubectl get pods # nothing<br />
<br />
===Creating temporary Pods at the CLI===<br />
<br />
* Make sure we have no Pods running:<br />
master$ kubectl get pods<br />
<br />
* Create temporary deployment pod:<br />
master$ kubectl run mysample --image=foobar/apache<br />
master$ kubectl get pods<br />
<pre><br />
NAME                        READY     STATUS              RESTARTS   AGE<br />
mysample-1424711890-fhtxb   0/1       ContainerCreating   0          1s<br />
</pre><br />
master$ kubectl get deployment <br />
<pre><br />
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE<br />
mysample   1         1         1            0           7s<br />
</pre><br />
<br />
* Create a temporary deployment pod (where we know it will fail):<br />
master$ kubectl run myexample --image=christophchamp/ubuntu_sysadmin<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                         READY     STATUS             RESTARTS   AGE       NODE<br />
myexample-3534121234-mpr35   0/1       CrashLoopBackOff   12         39m       k8s.minion3.dev<br />
mysample-2812764540-74c5h    1/1       Running            0          41m       k8s.minion2.dev<br />
</pre><br />
<br />
* Check on why the "myexample" pod is in status "CrashLoopBackOff":<br />
master$ kubectl describe pods/myexample-3534121234-mpr35<br />
master$ kubectl describe deployments/mysample<br />
master$ kubectl describe pods/mysample-2812764540-74c5h | awk '/^Node/{print $2}'<br />
k8s.minion2.dev/192.168.200.102<br />
<br />
master$ kubectl delete deployment mysample<br />
<br />
* Run multiple replicas of the same pod:<br />
master$ kubectl run myreplicas --image=latest123/apache --replicas=2 --labels=app=myapache,version=1.0.0<br />
master$ kubectl describe deployment myreplicas <br />
<pre><br />
Name:                   myreplicas<br />
Namespace:              default<br />
CreationTimestamp:      Fri, 21 Oct 2016 19:10:30 +0000<br />
Labels:                 app=myapache,version=1.0.0<br />
Selector:               app=myapache,version=1.0.0<br />
Replicas:               2 updated | 2 total | 1 available | 1 unavailable<br />
StrategyType:           RollingUpdate<br />
MinReadySeconds:        0<br />
RollingUpdateStrategy:  1 max unavailable, 1 max surge<br />
OldReplicaSets:         <none><br />
NewReplicaSet:          myreplicas-2209834598 (2/2 replicas created)<br />
...<br />
</pre><br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                          READY     STATUS    RESTARTS   AGE       NODE<br />
myreplicas-2209834598-5iyer   1/1       Running   0          1m        k8s.minion1.dev<br />
myreplicas-2209834598-cslst   1/1       Running   0          1m        k8s.minion2.dev<br />
</pre><br />
<br />
master$ kubectl describe pods -l version=1.0.0<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment myreplicas<br />
<br />
===Interacting with Pod containers===<br />
<br />
* Create example Apache pod definition file:<br />
<pre><br />
master$ cat << EOF > apache.yml<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: apache<br />
spec:<br />
  containers:<br />
  - name: apache<br />
    image: latest123/apache<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
master$ kubectl create -f apache.yml<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME      READY     STATUS    RESTARTS   AGE       NODE<br />
apache    1/1       Running   0          12m       k8s.minion3.dev<br />
</pre><br />
<br />
* Test pod and make some basic configuration changes:<br />
master$ kubectl exec apache date<br />
master$ kubectl exec apache -i -t -- cat /var/www/html/index.html # default apache HTML<br />
master$ kubectl exec apache -i -t -- /bin/bash<br />
container$ export TERM=xterm<br />
container$ echo "xtof test" > /var/www/html/index.html<br />
minion3$ curl 172.17.0.2<br />
xtof test<br />
container$ exit<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME      READY     STATUS    RESTARTS   AGE       NODE<br />
apache    1/1       Running   0          12m       k8s.minion3.dev<br />
</pre><br />
Pod/container is still running even after we exited (as expected).<br />
<br />
* Cleanup:<br />
master$ kubectl delete pod apache<br />
<br />
===Logs===<br />
<br />
* Start our example Apache pod to use for checking Kubernetes logging features:<br />
master$ kubectl create -f apache.yml <br />
master$ kubectl get pods<br />
<pre><br />
NAME      READY     STATUS    RESTARTS   AGE<br />
apache    1/1       Running   0          9s<br />
</pre><br />
master$ kubectl logs apache<br />
<pre><br />
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message<br />
</pre><br />
master$ kubectl logs --tail=10 apache<br />
master$ kubectl logs --since=24h apache # or 10s, 2m, etc.<br />
master$ kubectl logs -f apache # follow the logs<br />
master$ kubectl logs -f -c apache apache # where -c specifies the container name<br />
<br />
* Cleanup:<br />
master$ kubectl delete pod apache<br />
<br />
===Autoscaling and scaling Pods===<br />
<br />
master$ kubectl run myautoscale --image=latest123/apache --port=80 --labels=app=myautoscale<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                           READY     STATUS    RESTARTS   AGE       NODE<br />
myautoscale-3243017378-kq4z7   1/1       Running   0          47s       k8s.minion3.dev<br />
</pre><br />
<br />
* Create an autoscale definition:<br />
master$ kubectl autoscale deployment myautoscale --min=2 --max=6 --cpu-percent=80<br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE<br />
myautoscale   2         2         2            2           4m<br />
</pre><br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                           READY     STATUS    RESTARTS   AGE       NODE<br />
myautoscale-3243017378-kq4z7   1/1       Running   0          3m        k8s.minion3.dev<br />
myautoscale-3243017378-r2f3d   1/1       Running   0          4s        k8s.minion2.dev<br />
</pre><br />
<br />
* Scale up an already autoscaled deployment:<br />
master$ kubectl scale --current-replicas=2 --replicas=4 deployment/myautoscale<br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE<br />
myautoscale   4         4         4            4           8m<br />
</pre><br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                           READY     STATUS    RESTARTS   AGE       NODE<br />
myautoscale-3243017378-2rxhp   1/1       Running   0          8s        k8s.minion1.dev<br />
myautoscale-3243017378-kq4z7   1/1       Running   0          7m        k8s.minion3.dev<br />
myautoscale-3243017378-ozxs8   1/1       Running   0          8s        k8s.minion3.dev<br />
myautoscale-3243017378-r2f3d   1/1       Running   0          4m        k8s.minion2.dev<br />
</pre><br />
<br />
* Scale down:<br />
master$ kubectl scale --current-replicas=4 --replicas=2 deployment/myautoscale<br />
<br />
Note: You cannot scale down past the original minimum number of pods/containers specified in the original autoscale deployment (i.e., min=2 in our example).<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment myautoscale<br />
<br />
===Failure and recovery===<br />
<br />
master$ kubectl run myrecovery --image=latest123/apache --port=80 --replicas=2 --labels=app=myrecovery<br />
master$ kubectl get deployments<br />
<pre><br />
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE<br />
myrecovery   2         2         2            2           6s<br />
</pre><br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                         READY     STATUS    RESTARTS   AGE       NODE<br />
myrecovery-563119102-5xu8f   1/1       Running   0          12s       k8s.minion1.dev<br />
myrecovery-563119102-zw6wp   1/1       Running   0          12s       k8s.minion2.dev<br />
</pre><br />
<br />
* Now stop Kubernetes- and Docker-related services on one of the minions/nodes (so we have a total of 2 nodes online):<br />
minion1$ systemctl stop docker kubelet kube-proxy<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                         READY     STATUS    RESTARTS   AGE       NODE<br />
myrecovery-563119102-qyi04   1/1       Running   0          7m        k8s.minion3.dev<br />
myrecovery-563119102-zw6wp   1/1       Running   0          14m       k8s.minion2.dev<br />
</pre><br />
The Pod that was running on minion1 was rescheduled onto minion3.<br />
<br />
* Now stop Kubernetes- and Docker-related services on one of the remaining online minions/nodes (so we have a total of 1 node online):<br />
minion2$ systemctl stop docker kubelet kube-proxy<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                         READY     STATUS    RESTARTS   AGE       NODE<br />
myrecovery-563119102-b5tim   1/1       Running   0          2m        k8s.minion3.dev<br />
myrecovery-563119102-qyi04   1/1       Running   0          17m       k8s.minion3.dev<br />
</pre><br />
Both Pods are now running on minion3, the only available node.<br />
<br />
* Start up Kubernetes- and Docker-related services again on minion1 and delete one of the Pods:<br />
minion1$ systemctl start docker kubelet kube-proxy<br />
master$ kubectl delete pod myrecovery-563119102-b5tim<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                         READY     STATUS    RESTARTS   AGE       NODE<br />
myrecovery-563119102-8unzg   1/1       Running   0          1m        k8s.minion1.dev<br />
myrecovery-563119102-qyi04   1/1       Running   0          20m       k8s.minion3.dev<br />
</pre><br />
Pods are now running on separate nodes.<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployments/myrecovery<br />
<br />
==Minikube==<br />
[https://github.com/kubernetes/minikube Minikube] is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.<br />
<br />
* Install Minikube:<br />
$ curl -Lo minikube <nowiki>https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64</nowiki> \<br />
&& chmod +x minikube && sudo mv minikube /usr/local/bin/<br />
<br />
* Install kubectl:<br />
$ curl -Lo kubectl <nowiki>https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl</nowiki> \<br />
&& chmod +x kubectl && sudo mv kubectl /usr/local/bin/<br />
<br />
* Test the install:<br />
$ minikube start<br />
#~OR~<br />
$ minikube start --memory 4096 # give it 4GB of RAM<br />
$ minikube status<br />
$ minikube dashboard<br />
$ kubectl config view<br />
$ kubectl cluster-info<br />
<br />
NOTE: If you have an old version of minikube installed, you should probably do the following before upgrading to a much newer version:<br />
$ minikube delete --all --purge<br />
<br />
Get the details on the CLI options for kubectl [https://kubernetes.io/docs/reference/kubectl/overview/ here].<br />
<br />
Using the <code>kubectl proxy</code> command, kubectl authenticates with the API Server on the Master Node and makes the dashboard available on <nowiki>http://localhost:8001/ui</nowiki>:<br />
<br />
$ kubectl proxy<br />
Starting to serve on 127.0.0.1:8001<br />
<br />
After running the above command, we can access the dashboard at <code><nowiki>http://127.0.0.1:8001/ui</nowiki></code>.<br />
<br />
Once the kubectl proxy is configured, we can send requests to localhost on the proxy port:<br />
<br />
$ curl <nowiki>http://localhost:8001/</nowiki><br />
$ curl <nowiki>http://localhost:8001/version</nowiki><br />
<pre><br />
{<br />
  "major": "1",<br />
  "minor": "8",<br />
  "gitVersion": "v1.8.0",<br />
  "gitCommit": "0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4",<br />
  "gitTreeState": "clean",<br />
  "buildDate": "2017-11-29T22:43:34Z",<br />
  "goVersion": "go1.9.1",<br />
  "compiler": "gc",<br />
  "platform": "linux/amd64"<br />
}<br />
</pre><br />
<br />
Without kubectl proxy configured, we can get the Bearer Token using kubectl, and then send it with the API request. A Bearer Token is an access token which is generated by the authentication server (the API Server on the Master Node) and handed back to the client. Using that token, the client can connect back to the Kubernetes API Server without providing further authentication details, and then access resources.<br />
<br />
* Get the k8s token:<br />
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | awk '/^default/{print $1}') | awk '/^token/{print $2}')<br />
<br />
* Get the k8s API server endpoint:<br />
$ APISERVER=$(kubectl config view | awk '/https/{print $2}')<br />
<br />
* Access the API Server:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}<br />
<br />
===Using Minikube as a local Docker registry===<br />
<br />
Sometimes it is useful to have a local Docker registry for Kubernetes to pull images from. As the Minikube [https://github.com/kubernetes/minikube/blob/0c616a6b42b28a1aab8397f5a9061f8ebbd9f3d9/README.md#reusing-the-docker-daemon README] describes, you can reuse the Docker daemon running within Minikube with <code>eval $(minikube docker-env)</code>, so that images you build locally are immediately available to the cluster.<br />
<br />
To use an image without uploading it to some external registry (e.g., Docker Hub), you can follow these steps:<br />
* Set the environment variables with <code>eval $(minikube docker-env)</code><br />
* Build the image with the Docker daemon of Minikube (e.g., <code>docker build -t my-image .</code>)<br />
* Set the image in the pod spec like the build tag (e.g., <code>my-image</code>)<br />
* Set the <code>imagePullPolicy</code> to <code>Never</code>, otherwise Kubernetes will try to download the image.<br />
<br />
Important note: You have to run <code>eval $(minikube docker-env)</code> on each terminal you want to use since it only sets the environment variables for the current shell session.<br />
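<br />
Putting the above steps together (a sketch; <code>my-image</code> and <code>my-app</code> are placeholder names):<br />
$ eval $(minikube docker-env)   # point the docker CLI at Minikube's daemon<br />
$ docker build -t my-image .    # the image is now available inside Minikube<br />
<br />
Then reference the image in the Pod spec:<br />
<pre><br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: my-app<br />
spec:<br />
  containers:<br />
  - name: my-app<br />
    image: my-image<br />
    imagePullPolicy: Never  # do not attempt to pull from an external registry<br />
</pre><br />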
<br />
===Working with our Minikube-based Kubernetes cluster===<br />
<br />
;Kubernetes Object Model<br />
<br />
Kubernetes has a very rich object model, with which it represents different persistent entities in the Kubernetes cluster. Those entities describe:<br />
<br />
* What containerized applications we are running and on which node<br />
* Application resource consumption<br />
* Different policies attached to applications, like restart/upgrade policies, fault tolerance, etc.<br />
<br />
With each object, we declare our intent or desired state using the '''spec''' field. The Kubernetes system manages the '''status''' field for objects, in which it records the actual state of the object. At any given point in time, the Kubernetes Control Plane tries to match the object's actual state to the object's desired state.<br />
<br />
Examples of Kubernetes objects are Pods, Deployments, ReplicaSets, etc.<br />
<br />
To create an object, we need to provide the '''spec''' field to the Kubernetes API Server. The '''spec''' field describes the desired state, along with some basic information, like the name. The API request to create the object must have the '''spec''' field, as well as other details, in a JSON format. Most often, we provide an object's definition in a YAML file, which is converted by kubectl into a JSON payload and sent to the API Server.<br />
<br />
Below is an example of a ''Deployment'' object:<br />
<pre><br />
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment<br />
  labels:<br />
    app: nginx<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />
<br />
With the '''apiVersion''' field in the example above, we mention the API endpoint on the API Server which we want to connect to. Note that you can see what API version to use with the following call to the API server:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}/apis/apps<br />
Use the '''preferredVersion''' for most cases.<br />
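<br />
The response is an APIGroup object; the relevant portion looks roughly like the following (a sketch, not verbatim output):<br />
<pre><br />
{<br />
  "kind": "APIGroup",<br />
  "name": "apps",<br />
  "preferredVersion": {<br />
    "groupVersion": "apps/v1",<br />
    "version": "v1"<br />
  }<br />
}<br />
</pre><br />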
<br />
With the '''kind''' field, we mention the object type &mdash; in our case, we have '''Deployment'''. With the '''metadata''' field, we attach the basic information to objects, like the name. Notice that in the above we have two '''spec''' fields ('''spec''' and '''spec.template.spec'''). With '''spec''', we define the desired state of the deployment. In our example, we want to make sure that, at any point in time, at least 3 ''Pods'' are running, which are created using the Pod template defined in '''spec.template'''. In '''spec.template.spec''', we define the desired state of the Pod (here, our Pod would be created using nginx:1.7.9).<br />
<br />
Once the object is created, the Kubernetes system attaches the '''status''' field to the object.<br />
<br />
;Connecting users to Pods<br />
<br />
To access the application, a user/client needs to connect to the Pods. As Pods are ephemeral in nature, resources like the IP addresses allocated to them cannot be static. Pods could die abruptly or be rescheduled based on existing requirements.<br />
<br />
As an example, consider a scenario in which a user/client is connecting to a Pod using its IP address. Unexpectedly, the Pod to which the user/client is connected dies and a new Pod is created by the controller. The new Pod will have a new IP address, which will not be known automatically to the user/client of the earlier Pod. To overcome this situation, Kubernetes provides a higher-level abstraction called ''[https://kubernetes.io/docs/concepts/services-networking/service/ Service]'', which logically groups Pods and a policy to access them. This grouping is achieved via Labels and Selectors (see above).<br />
<br />
So, for our example, we would use Selectors (e.g., "<code>app==frontend</code>" and "<code>app==db</code>") to group our Pods into two logical groups. We can assign a name to the logical grouping, referred to as a "service name". In our example, we have created two Services, <code>frontend-svc</code> and <code>db-svc</code>, and they have the "<code>app==frontend</code>" and the "<code>app==db</code>" Selectors, respectively.<br />
<br />
The following is an example of a Service object:<br />
<pre><br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: frontend-svc<br />
spec:<br />
  selector:<br />
    app: frontend<br />
  ports:<br />
  - protocol: TCP<br />
    port: 80<br />
    targetPort: 5000<br />
</pre><br />
<br />
in which we are creating a <code>frontend-svc</code> Service by selecting all the Pods that have the Label "<code>app</code>" equal to "<code>frontend</code>". By default, each Service also gets an IP address, which is routable only inside the cluster. In our case, we have 172.17.0.4 and 172.17.0.5 IP addresses for our <code>frontend-svc</code> and <code>db-svc</code> Services, respectively. The IP address attached to each Service is also known as the ClusterIP for that Service.<br />
<br />
+------------------------------------+<br />
| select: app==frontend | container (app:frontend; 10.0.1.3)<br />
| service=frontend-svc (172.17.0.4) |------> container (app:frontend; 10.0.1.4)<br />
+------------------------------------+ container (app:frontend; 10.0.1.5)<br />
^<br />
/<br />
/<br />
user/client<br />
\<br />
\<br />
v<br />
+------------------------------------+<br />
| select: app==db |------> container (app:db; 10.0.1.10)<br />
| service=db-svc (172.17.0.5) |<br />
+------------------------------------+<br />
<br />
The user/client now connects to a Service via ''its'' IP address, which forwards the traffic to one of the Pods attached to it. A Service does the load balancing while selecting the Pods for forwarding the data/traffic.<br />
<br />
While forwarding the traffic from the Service, we can select the target port on the Pod. In our example, for <code>frontend-svc</code>, we will receive requests from the user/client on port 80. We will then forward these requests to one of the attached Pods on port 5000. If the target port is not defined explicitly, then traffic will be forwarded to Pods on the port on which the Service receives traffic.<br />
<br />
A tuple of a Pod's IP address and the <code>targetPort</code> is referred to as a ''Service Endpoint''. In our case, <code>frontend-svc</code> has 3 Endpoints: <code>10.0.1.3:5000</code>, <code>10.0.1.4:5000</code>, and <code>10.0.1.5:5000</code>.<br />
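<br />
You can inspect a Service's Endpoints directly. A sketch, with illustrative output, assuming the <code>frontend-svc</code> Service above:<br />
$ kubectl get endpoints frontend-svc<br />
NAME           ENDPOINTS                                    AGE<br />
frontend-svc   10.0.1.3:5000,10.0.1.4:5000,10.0.1.5:5000   2m<br />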
<br />
===kube-proxy===<br />
All of the Worker Nodes run a daemon called kube-proxy, which watches the API Server on the Master Node for the addition and removal of Services and endpoints. For each new Service, on each node, kube-proxy configures iptables rules to capture the traffic for its ClusterIP and forward it to one of the Service's endpoints. When the Service is removed, kube-proxy removes the corresponding iptables rules on all nodes as well.<br />
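<br />
You can inspect these rules on a node. A sketch (here using the Minikube VM; chain names and rule details will differ per cluster):<br />
$ minikube ssh<br />
minikube> sudo iptables -t nat -L KUBE-SERVICES -n | grep frontend-svc<br />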
<br />
===Service discovery===<br />
As Services are the primary mode of communication in Kubernetes, we need a way to discover them at runtime. Kubernetes supports two methods of discovering a Service:<br />
<br />
;Environment Variables : As soon as a Pod starts on any Worker Node, the kubelet daemon running on that node adds a set of environment variables to the Pod for all active Services. For example, if we have an active Service called <code>redis-master</code>, which exposes port 6379, and whose ClusterIP is 172.17.0.6, then, on a newly created Pod, we can see the following environment variables:<br />
<br />
REDIS_MASTER_SERVICE_HOST=172.17.0.6<br />
REDIS_MASTER_SERVICE_PORT=6379<br />
REDIS_MASTER_PORT=tcp://172.17.0.6:6379<br />
REDIS_MASTER_PORT_6379_TCP=tcp://172.17.0.6:6379<br />
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp<br />
REDIS_MASTER_PORT_6379_TCP_PORT=6379<br />
REDIS_MASTER_PORT_6379_TCP_ADDR=172.17.0.6<br />
<br />
With this solution, we need to be careful about the order in which we create our Services, as Pods will not have the environment variables set for any Service created after the Pods themselves.<br />
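<br />
You can check which Service environment variables a given Pod received. A sketch, where <code>mypod</code> is a hypothetical Pod created ''after'' the <code>redis-master</code> Service:<br />
$ kubectl exec mypod -- env | grep ^REDIS_MASTER<br />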
<br />
;DNS : Kubernetes has a DNS add-on, which creates a DNS record for each Service in the format <code>my-svc.my-namespace.svc.cluster.local</code>. Services within the same Namespace can reach each other with just their names. For example, if we add a Service <code>redis-master</code> in the <code>my-ns</code> Namespace, then all the Pods in the same Namespace can reach the Service just by using its name, <code>redis-master</code>. Pods from other Namespaces can reach the Service by adding the respective Namespace as a suffix, like <code>redis-master.my-ns</code>.<br />
: This is the most common and highly recommended solution. For example, in the diagram in the previous section, an internal DNS is configured which maps our Services <code>frontend-svc</code> and <code>db-svc</code> to 172.17.0.4 and 172.17.0.5, respectively.<br />
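<br />
A quick way to test DNS-based discovery is to resolve the Service name from a throwaway Pod. A sketch (the busybox image ships a basic <code>nslookup</code>):<br />
$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup redis-master.my-ns<br />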
<br />
===Service Type===<br />
While defining a Service, we can also choose its access scope. We can decide whether the Service:<br />
<br />
* is only accessible within the cluster;<br />
* is accessible from within the cluster and the external world; or<br />
* maps to an external entity which resides outside the cluster.<br />
<br />
Access scope is decided by the ''ServiceType'', which can be specified when creating the Service.<br />
<br />
;ClusterIP : (the default ''ServiceType''). A Service receives a virtual IP address, known as its ClusterIP. This IP address is used for communicating with the Service and is accessible only from within the cluster.<br />
<br />
;NodePort : With this ''ServiceType'', in addition to creating a ClusterIP, a port from the range '''30000-32767''' is mapped to the respective service from all the Worker Nodes. For example, if the mapped NodePort is 32233 for the service <code>frontend-svc</code>, then, if we connect to any Worker Node on port 32233, the node would redirect all the traffic to the assigned ClusterIP (172.17.0.4).<br />
: By default, while exposing a NodePort, a random port is automatically selected by the Kubernetes Master from the port range '''30000-32767'''. If we do not want a dynamically assigned NodePort, we can specify a port number from that range while creating the Service (see the sketch below).<br />
: The NodePort ServiceType is useful when we want to make our services accessible from the external world. The end-user connects to the Worker Nodes on the specified port, which forwards the traffic to the applications running inside the cluster. To access the application from the external world, administrators can configure a reverse proxy outside the Kubernetes cluster and map the specific endpoint to the respective port on the Worker Nodes.<br />
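<br />
The following is a minimal sketch of a Service spec that pins the NodePort to 32233 (any free port in the '''30000-32767''' range would do):<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: frontend-svc<br />
spec:<br />
  type: NodePort<br />
  selector:<br />
    app: frontend<br />
  ports:<br />
  - port: 80<br />
    targetPort: 5000<br />
    nodePort: 32233<br />
</pre><br />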
<br />
;LoadBalancer: With this ''ServiceType'', we have the following:<br />
:* NodePort and ClusterIP Services are automatically created, and the external load balancer will route to them;<br />
:* The Services are exposed at a static port on each Worker Node; and<br />
:* The Service is exposed externally using the underlying Cloud provider's load balancer feature.<br />
: The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of Load Balancers and has the respective support in Kubernetes, as is the case with the Google Cloud Platform and AWS.<br />
<br />
;ExternalIP : A Service can be mapped to an ExternalIP address if that address routes to one or more of the Worker Nodes. Traffic that ingresses into the cluster with the ExternalIP (as the destination IP) on the Service port gets routed to one of the Service endpoints. (Note that ExternalIPs are not managed by Kubernetes. The cluster administrator(s) must configure the routing that maps the ExternalIP address to one of the nodes.)<br />
<br />
;ExternalName : a special ''ServiceType'', which has no Selectors and does not define any endpoints. When accessed within the cluster, it returns a CNAME record of an externally configured service.<br />
: The primary use case of this ServiceType is to make externally configured services like <code>my-database.example.com</code> available inside the cluster, using just the name, like <code>my-database</code>, to other services inside the same Namespace.<br />
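<br />
A minimal sketch of an ExternalName Service for the database example above:<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: my-database<br />
spec:<br />
  type: ExternalName<br />
  externalName: my-database.example.com<br />
</pre><br />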
<br />
===Deploying an application===<br />
<br />
<pre><br />
$ kubectl create -f - <<EOF<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
name: webserver<br />
spec:<br />
replicas: 3<br />
template:<br />
metadata:<br />
labels:<br />
app: webserver<br />
spec:<br />
containers:<br />
- name: webserver<br />
image: nginx:alpine<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
<br />
<pre><br />
$ kubectl create -f - <<EOF<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
name: web-service<br />
labels:<br />
run: web-service<br />
spec:<br />
type: NodePort<br />
ports:<br />
- port: 80<br />
protocol: TCP<br />
selector:<br />
app: webserver<br />
EOF<br />
</pre><br />
<br />
$ kubectl get service<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h<br />
web-service NodePort 10.104.107.132 <none> 80:32610/TCP 7m<br />
<br />
Note that "<code>32610</code>" port.<br />
<br />
* Get the IP address of your Minikube k8s cluster<br />
$ minikube ip<br />
192.168.99.100<br />
#~OR~<br />
$ minikube service web-service --url<br />
<nowiki>http://192.168.99.100:32610</nowiki><br />
<br />
* Now, check that your web service is serving up a default Nginx website:<br />
$ curl -I <nowiki>http://192.168.99.100:32610</nowiki><br />
HTTP/1.1 200 OK<br />
Server: nginx/1.13.8<br />
Date: Thu, 11 Jan 2018 00:27:51 GMT<br />
Content-Type: text/html<br />
Content-Length: 612<br />
Last-Modified: Wed, 10 Jan 2018 04:10:03 GMT<br />
Connection: keep-alive<br />
ETag: "5a55921b-264"<br />
Accept-Ranges: bytes<br />
<br />
Looks good!<br />
<br />
Finally, destroy the webserver deployment:<br />
$ kubectl delete deployments webserver<br />
<br />
===Using Ingress with Minikube===<br />
<br />
* First check that the Ingress add-on is enabled:<br />
$ minikube addons list | grep ingress<br />
- ingress: disabled<br />
<br />
If it is not, enable it with:<br />
$ minikube addons enable ingress<br />
$ minikube addons list | grep ingress<br />
- ingress: enabled<br />
<br />
* Create an Echo Server Deployment:<br />
<pre><br />
$ cat << EOF >deploy-echoserver.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
labels:<br />
run: echoserver<br />
name: echoserver<br />
namespace: default<br />
spec:<br />
replicas: 1<br />
selector:<br />
matchLabels:<br />
run: echoserver<br />
template:<br />
metadata:<br />
labels:<br />
run: echoserver<br />
spec:<br />
containers:<br />
- image: gcr.io/google_containers/echoserver:1.4<br />
imagePullPolicy: IfNotPresent<br />
name: echoserver<br />
ports:<br />
- containerPort: 8080<br />
protocol: TCP<br />
dnsPolicy: ClusterFirst<br />
restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-echoserver.yml<br />
<br />
* Create the Cheddar cheese Deployment:<br />
<pre><br />
$ cat << EOF >deploy-cheddar-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
labels:<br />
run: cheddar-cheese<br />
name: cheddar-cheese<br />
namespace: default<br />
spec:<br />
replicas: 1<br />
selector:<br />
matchLabels:<br />
run: cheddar-cheese<br />
template:<br />
metadata:<br />
labels:<br />
run: cheddar-cheese<br />
spec:<br />
containers:<br />
- image: errm/cheese:cheddar<br />
imagePullPolicy: IfNotPresent<br />
name: cheddar-cheese<br />
ports:<br />
- containerPort: 80<br />
protocol: TCP<br />
dnsPolicy: ClusterFirst<br />
restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-cheddar-cheese.yml<br />
<br />
* Create the Stilton cheese Deployment:<br />
<pre><br />
$ cat << EOF >deploy-stilton-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
labels:<br />
run: stilton-cheese<br />
name: stilton-cheese<br />
namespace: default<br />
spec:<br />
replicas: 1<br />
selector:<br />
matchLabels:<br />
run: stilton-cheese<br />
template:<br />
metadata:<br />
labels:<br />
run: stilton-cheese<br />
spec:<br />
containers:<br />
- image: errm/cheese:stilton<br />
imagePullPolicy: IfNotPresent<br />
name: stilton-cheese<br />
ports:<br />
- containerPort: 80<br />
protocol: TCP<br />
dnsPolicy: ClusterFirst<br />
restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-stilton-cheese.yml<br />
<br />
* Create the Echo Server Service:<br />
<pre><br />
$ cat << EOF >svc-echoserver.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
labels:<br />
run: echoserver<br />
name: echoserver<br />
namespace: default<br />
spec:<br />
externalTrafficPolicy: Cluster<br />
ports:<br />
- nodePort: 31116<br />
port: 8080<br />
protocol: TCP<br />
targetPort: 8080<br />
selector:<br />
run: echoserver<br />
sessionAffinity: None<br />
type: NodePort<br />
status:<br />
loadBalancer: {}<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-echoserver.yml<br />
<br />
* Create the Cheddar cheese Service:<br />
<pre><br />
$ cat << EOF >svc-cheddar-cheese.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
labels:<br />
run: cheddar-cheese<br />
name: cheddar-cheese<br />
namespace: default<br />
spec:<br />
externalTrafficPolicy: Cluster<br />
ports:<br />
- nodePort: 32467<br />
port: 80<br />
protocol: TCP<br />
targetPort: 80<br />
selector:<br />
run: cheddar-cheese<br />
sessionAffinity: None<br />
type: NodePort<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-cheddar-cheese.yml<br />
<br />
* Create the Stilton cheese Service:<br />
<pre><br />
$ cat << EOF >svc-stilton-cheese.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
labels:<br />
run: stilton-cheese<br />
name: stilton-cheese<br />
namespace: default<br />
spec:<br />
externalTrafficPolicy: Cluster<br />
ports:<br />
- nodePort: 30197<br />
port: 80<br />
protocol: TCP<br />
targetPort: 80<br />
selector:<br />
run: stilton-cheese<br />
sessionAffinity: None<br />
type: NodePort<br />
status:<br />
loadBalancer: {}<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-stilton-cheese.yml<br />
<br />
* Create the Ingress for the above Services:<br />
<pre><br />
$ cat << EOF >ingress-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
name: ingress-cheese<br />
annotations:<br />
nginx.ingress.kubernetes.io/rewrite-target: /<br />
spec:<br />
backend:<br />
serviceName: default-http-backend<br />
servicePort: 80<br />
rules:<br />
- host: myminikube.info<br />
http:<br />
paths:<br />
- path: /<br />
backend:<br />
serviceName: echoserver<br />
servicePort: 8080<br />
- host: cheeses.all<br />
http:<br />
paths:<br />
- path: /stilton<br />
backend:<br />
serviceName: stilton-cheese<br />
servicePort: 80<br />
- path: /cheddar<br />
backend:<br />
serviceName: cheddar-cheese<br />
servicePort: 80<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f ingress-cheese.yml<br />
<br />
* Check that everything is up:<br />
<pre><br />
$ kubectl get all<br />
NAME READY STATUS RESTARTS AGE<br />
pod/cheddar-cheese-d6d6587c7-4bgcz 1/1 Running 0 12m<br />
pod/echoserver-55f97d5bff-pdv65 1/1 Running 0 12m<br />
pod/stilton-cheese-6d64cbc79-g7h4w 1/1 Running 0 12m<br />
<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
service/cheddar-cheese NodePort 10.109.238.92 <none> 80:32467/TCP 12m<br />
service/echoserver NodePort 10.98.60.194 <none> 8080:31116/TCP 12m<br />
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h<br />
service/stilton-cheese NodePort 10.108.175.207 <none> 80:30197/TCP 12m<br />
<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
deployment.apps/cheddar-cheese 1 1 1 1 12m<br />
deployment.apps/echoserver 1 1 1 1 12m<br />
deployment.apps/stilton-cheese 1 1 1 1 12m<br />
<br />
NAME DESIRED CURRENT READY AGE<br />
replicaset.apps/cheddar-cheese-d6d6587c7 1 1 1 12m<br />
replicaset.apps/echoserver-55f97d5bff 1 1 1 12m<br />
replicaset.apps/stilton-cheese-6d64cbc79 1 1 1 12m<br />
<br />
$ kubectl get ing<br />
NAME HOSTS ADDRESS PORTS AGE<br />
ingress-cheese myminikube.info,cheeses.all 10.0.2.15 80 12m<br />
</pre><br />
<br />
* Add your host aliases:<br />
$ echo "$(minikube ip) myminikube.info cheeses.all" | sudo tee -a /etc/hosts<br />
<br />
* Now, either using your browser or [[curl]], check that you can reach all of the endpoints defined in the Ingress:<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null cheeses.all/cheddar/ # Should return '200'<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null cheeses.all/stilton/ # Should return '200'<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null myminikube.info # Should return '200'<br />
<br />
* You can also see the Nginx logs for the above requests with:<br />
$ kubectl --namespace kube-system logs \<br />
--selector app.kubernetes.io/name=nginx-ingress-controller<br />
<br />
* You can also view the Nginx configuration file (and the settings created by the above Ingress) with:<br />
$ NGINX_POD=$(kubectl --namespace kube-system get pods \<br />
--selector app.kubernetes.io/name=nginx-ingress-controller \<br />
--output jsonpath='{.items[0].metadata.name}')<br />
$ kubectl --namespace kube-system exec -it ${NGINX_POD} -- cat /etc/nginx/nginx.conf<br />
<br />
* Get the version of the Nginx Ingress controller installed:<br />
<pre><br />
$ kubectl --namespace kube-system exec -it ${NGINX_POD} -- /nginx-ingress-controller --version<br />
-------------------------------------------------------------------------------<br />
NGINX Ingress controller<br />
Release: 0.19.0<br />
Build: git-05025d6<br />
Repository: https://github.com/kubernetes/ingress-nginx.git<br />
-------------------------------------------------------------------------------<br />
</pre><br />
<br />
==Kubectl==<br />
<br />
<code>kubectl</code> controls the Kubernetes cluster manager.<br />
<br />
* View your current configuration:<br />
$ kubectl config view<br />
<br />
* Switch between clusters:<br />
$ kubectl config use-context <context_name><br />
<br />
* Remove a cluster:<br />
$ kubectl config unset contexts.<context_name><br />
$ kubectl config unset users.<user_name><br />
$ kubectl config unset clusters.<cluster_name><br />
<br />
* Sort Pods by age:<br />
$ kubectl get po --sort-by='{.status.startTime}'<br />
$ kubectl get pods --all-namespaces --sort-by=.metadata.creationTimestamp<br />
<br />
* Backup all primitives deployed in a given k8s cluster:<br />
<pre><br />
$ kubectl api-resources --verbs=list --namespaced -o name \<br />
| xargs -n1 -I{} bash -c "kubectl get {} --all-namespaces -oyaml && echo ---" \<br />
> k8s_backup.yaml<br />
</pre><br />
<br />
===kubectl explain===<br />
<br />
;List the fields for supported resources.<br />
<br />
* Get the documentation of a resource (aka "kind") and its fields:<br />
<pre><br />
$ kubectl explain deployment<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
DESCRIPTION:<br />
Deployment enables declarative updates for Pods and ReplicaSets.<br />
<br />
FIELDS:<br />
apiVersion <string><br />
APIVersion defines the versioned schema of this representation of an<br />
object. Servers should convert recognized schemas to the latest internal<br />
value, and may reject unrecognized values. More info:<br />
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources<br />
<br />
kind <string><br />
Kind is a string value representing the REST resource this object<br />
represents. Servers may infer this from the endpoint the client submits<br />
requests to. Cannot be updated. In CamelCase. More info:<br />
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds<br />
<br />
metadata <Object><br />
Standard object metadata.<br />
<br />
spec <Object><br />
Specification of the desired behavior of the Deployment.<br />
<br />
status <Object><br />
Most recently observed status of the Deployment<br />
</pre><br />
<br />
* Get a list of all the resource types and their latest supported version:<br />
<pre><br />
$ for kind in $(kubectl api-resources | tail -n +2 | awk '{print $1}'); do<br />
kubectl explain ${kind};<br />
done | grep -E "^KIND:|^VERSION:"<br />
<br />
KIND: Binding<br />
VERSION: v1<br />
KIND: ComponentStatus<br />
VERSION: v1<br />
KIND: ConfigMap<br />
VERSION: v1<br />
...<br />
</pre><br />
<br />
* Get a list of ''all'' allowable fields for a given primitive:<br />
<pre><br />
$ kubectl explain deployment --recursive | head<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
DESCRIPTION:<br />
Deployment enables declarative updates for Pods and ReplicaSets.<br />
<br />
FIELDS:<br />
apiVersion <string><br />
kind <string><br />
metadata <Object><br />
</pre><br />
<br />
* Get documentation ("man page"-style) for a given field in a given primitive:<br />
<pre><br />
$ kubectl explain deployment.status.availableReplicas<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
FIELD: availableReplicas <integer><br />
<br />
DESCRIPTION:<br />
Total number of available pods (ready for at least minReadySeconds)<br />
targeted by this deployment.<br />
</pre><br />
<br />
===Merge kubeconfig files===<br />
<br />
* Reference which kubeconfig files you wish to merge:<br />
$ export KUBECONFIG=$HOME/.kube/dev.yaml:$HOME/.kube/prod.yaml<br />
<br />
* Flatten them:<br />
$ kubectl config view --flatten >> $HOME/.kube/config<br />
<br />
* Unset:<br />
$ unset KUBECONFIG<br />
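<br />
* To confirm the merge, list the contexts now present in the combined kubeconfig:<br />
$ kubectl config get-contexts<br />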
<br />
Merge complete.<br />
<br />
==Namespaces==<br />
<br />
See: [https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ Namespaces] in the official documentation.<br />
<br />
; Create a Namespace<br />
<br />
<pre><br />
apiVersion: v1<br />
kind: Namespace<br />
metadata:<br />
name: dev<br />
</pre><br />
<br />
==Pods==<br />
<br />
; Create a Pod that has an Init Container<br />
<br />
In this example, I will create a Pod that has one application Container and one Init Container. The init container runs to completion before the application container starts.<br />
<br />
<pre><br />
$ cat << EOF >init-demo.yml<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: init-demo<br />
labels:<br />
app: demo<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx<br />
ports:<br />
- containerPort: 80<br />
volumeMounts:<br />
- name: workdir<br />
mountPath: /usr/share/nginx/html<br />
# These containers are run during pod initialization<br />
initContainers:<br />
- name: install<br />
image: busybox<br />
command:<br />
- wget<br />
- "-O"<br />
- "/work-dir/index.html"<br />
- https://example.com<br />
volumeMounts:<br />
- name: workdir<br />
mountPath: "/work-dir"<br />
dnsPolicy: Default<br />
volumes:<br />
- name: workdir<br />
emptyDir: {}<br />
EOF<br />
</pre><br />
<br />
The above Pod YAML will first create the init container using the busybox image, which will download the HTML of the example.com website and save it to a file (<code>index.html</code>) on the Pod Volume called "workdir". After the init container completes, the Nginx container starts and serves that <code>index.html</code> on port 80 (via the volume mount, the file is located at <code>/usr/share/nginx/html/index.html</code> inside the Nginx container).<br />
<br />
* Now, create this Pod:<br />
$ kubectl create --validate -f init-demo.yml<br />
<br />
* Create a Service:<br />
<pre><br />
$ cat << EOF >example.yml<br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
name: example<br />
spec:<br />
ports:<br />
- port: 8000<br />
targetPort: 80<br />
protocol: TCP<br />
selector:<br />
app: demo<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f example.yml<br />
<br />
* Check that the <code>example</code> Service serves the page downloaded from <nowiki>https://example.com</nowiki>:<br />
$ curl -sI $(kubectl get svc/example -o jsonpath='{.spec.clusterIP}'):8000 | grep ^HTTP<br />
HTTP/1.1 200 OK<br />
<br />
==Deployments==<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Deployment]'' controller provides declarative updates for Pods and ReplicaSets.<br />
<br />
You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.<br />
<br />
; Creating a Deployment<br />
<br />
The following is an example of a Deployment. It creates a ReplicaSet to bring up three [https://hub.docker.com/_/nginx/ Nginx] Pods:<br />
<pre><br />
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
name: nginx-deployment<br />
labels:<br />
app: nginx<br />
spec:<br />
replicas: 3<br />
selector:<br />
matchLabels:<br />
app: nginx<br />
template:<br />
metadata:<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
</pre><br />
<br />
* Check the syntax of the Deployment (YAML):<br />
$ kubectl create -f nginx-deployment.yml --dry-run<br />
deployment.apps/nginx-deployment created (dry run)<br />
<br />
* Create the Deployment:<br />
$ kubectl create --record -f nginx-deployment.yml <br />
deployment "nginx-deployment" created<br />
Note: By appending <code>--record</code> to the above command, we are telling the API to record the current command in the annotations of the created or updated resource. This is useful for future review, such as investigating which commands were executed in each Deployment revision.<br />
<br />
* Get information about our Deployment:<br />
$ kubectl get deployments<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment 3 3 3 3 24s<br />
<br />
$ kubectl describe deployment/nginx-deployment<br />
<pre><br />
Name: nginx-deployment<br />
Namespace: default<br />
CreationTimestamp: Tue, 30 Jan 2018 23:28:43 +0000<br />
Labels: app=nginx<br />
Annotations: deployment.kubernetes.io/revision=1<br />
kubernetes.io/change-cause=kubectl create --record=true --filename=nginx-deployment.yml<br />
Selector: app=nginx<br />
Replicas: 3 desired | 3 updated | 3 total | 0 available | 3 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 25% max unavailable, 25% max surge<br />
Pod Template:<br />
Labels: app=nginx<br />
Containers:<br />
nginx:<br />
Image: nginx:1.7.9<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Conditions:<br />
Type Status Reason<br />
---- ------ ------<br />
Available False MinimumReplicasUnavailable<br />
Progressing True ReplicaSetUpdated<br />
OldReplicaSets: <none><br />
NewReplicaSet: nginx-deployment-6c54bd5869 (3/3 replicas created)<br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal ScalingReplicaSet 28s deployment-controller Scaled up replica set nginx-deployment-6c54bd5869 to 3<br />
</pre><br />
<br />
* Get information about the ReplicaSet created by the above Deployment:<br />
$ kubectl get rs<br />
NAME DESIRED CURRENT READY AGE<br />
nginx-deployment-6c54bd5869 3 3 3 3m<br />
<br />
$ kubectl describe rs/nginx-deployment-6c54bd5869<br />
<pre><br />
Name: nginx-deployment-6c54bd5869<br />
Namespace: default<br />
Selector: app=nginx,pod-template-hash=2710681425<br />
Labels: app=nginx<br />
pod-template-hash=2710681425<br />
Annotations: deployment.kubernetes.io/desired-replicas=3<br />
deployment.kubernetes.io/max-replicas=4<br />
deployment.kubernetes.io/revision=1<br />
kubernetes.io/change-cause=kubectl create --record=true --filename=nginx-deployment.yml<br />
Controlled By: Deployment/nginx-deployment<br />
Replicas: 3 current / 3 desired<br />
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed<br />
Pod Template:<br />
Labels: app=nginx<br />
pod-template-hash=2710681425<br />
Containers:<br />
nginx:<br />
Image: nginx:1.7.9<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-k9mh4<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-pphjt<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-n4fj5<br />
</pre><br />
<br />
* Get information about the Pods created by this Deployment:<br />
$ kubectl get pods --show-labels -l app=nginx -o wide<br />
NAME READY STATUS RESTARTS AGE IP NODE LABELS<br />
nginx-deployment-6c54bd5869-k9mh4 1/1 Running 0 5m 10.244.1.5 k8s.worker1.local app=nginx,pod-template-hash=2710681425<br />
nginx-deployment-6c54bd5869-n4fj5 1/1 Running 0 5m 10.244.1.6 k8s.worker2.local app=nginx,pod-template-hash=2710681425<br />
nginx-deployment-6c54bd5869-pphjt 1/1 Running 0 5m 10.244.1.7 k8s.worker3.local app=nginx,pod-template-hash=2710681425<br />
<br />
;Updating a Deployment<br />
<br />
Note: A Deployment's rollout is triggered if, and only if, the Deployment's pod template (that is, <code>.spec.template</code>) is changed (for example, if the labels or container images of the template are updated). Other updates, such as scaling the Deployment, do not trigger a rollout.<br />
<br />
Suppose that we want to update the Nginx Pods in the above Deployment to use the <code>nginx:1.9.1</code> image instead of the <code>nginx:1.7.9</code> image.<br />
<br />
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
deployment "nginx-deployment" image updated<br />
<br />
Alternatively, we can edit the Deployment and change <code>.spec.template.spec.containers[0].image</code> from <code>nginx:1.7.9</code> to <code>nginx:1.9.1</code>:<br />
<br />
$ kubectl edit deployment/nginx-deployment<br />
deployment "nginx-deployment" edited<br />
<br />
* Check on the rollout status:<br />
<pre><br />
$ kubectl rollout status deployment/nginx-deployment<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 old replicas are pending termination...<br />
Waiting for rollout to finish: 1 old replicas are pending termination...<br />
deployment "nginx-deployment" successfully rolled out<br />
</pre><br />
<br />
* Get information about the updated Deployment:<br />
$ kubectl get deploy<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment 3 3 3 3 18m<br />
<br />
$ kubectl get rs<br />
NAME DESIRED CURRENT READY AGE<br />
nginx-deployment-5964dfd755 3 3 3 1m # <- new ReplicaSet using nginx:1.9.1<br />
nginx-deployment-6c54bd5869 0 0 0 17m # <- old ReplicaSet using nginx:1.7.9<br />
<br />
$ kubectl rollout history deployment/nginx-deployment<br />
deployments "nginx-deployment"<br />
REVISION CHANGE-CAUSE<br />
1 kubectl create --record=true --filename=nginx-deployment.yml<br />
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
<br />
$ kubectl rollout history deployment/nginx-deployment --revision=2<br />
<br />
deployments "nginx-deployment" with revision #2<br />
Pod Template:<br />
Labels: app=nginx<br />
pod-template-hash=1520898311<br />
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
Containers:<br />
nginx:<br />
Image: nginx:1.9.1<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
<br />
; Rolling back to a previous revision<br />
<br />
Undo the current rollout and roll back to the previous revision:<br />
$ kubectl rollout undo deployment/nginx-deployment<br />
deployment "nginx-deployment" rolled back<br />
<br />
Alternatively, you can roll back to a specific revision by specifying it with <code>--to-revision</code>:<br />
$ kubectl rollout undo deployment/nginx-deployment --to-revision=1<br />
deployment "nginx-deployment" rolled back<br />
<br />
==Volume management==<br />
On-disk files in a container are ephemeral, which presents some problems for non-trivial applications when running in containers. First, when a container crashes, kubelet will restart it, but the files will be lost (i.e., the container starts with a clean state). Second, when running containers together in a Pod it is often necessary to share files between those containers. The Kubernetes ''[https://kubernetes.io/docs/concepts/storage/volumes/ Volumes]'' abstraction solves both of these problems. A Volume is essentially a directory backed by a storage medium. The storage medium and its content are determined by the Volume Type.<br />
<br />
In Kubernetes, a Volume is attached to a Pod and shared among the containers of that Pod. The Volume has the same life span as the Pod, and it outlives the containers of the Pod &mdash; this allows data to be preserved across container restarts.<br />
<br />
Kubernetes resolves the problem of persistent storage with the Persistent Volume subsystem, which provides APIs for users and administrators to manage and consume storage. To manage the Volume, it uses the PersistentVolume (PV) API resource type, and to consume it, it uses the PersistentVolumeClaim (PVC) API resource type.<br />
<br />
; [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes PersistentVolume] (PV) : a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.<br />
<br />
; [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims PersistentVolumeClaim] (PVC) : a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Persistent Volume Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).<br />
<br />
A Persistent Volume is network-attached storage in the cluster, which is provisioned by the administrator.<br />
<br />
Persistent Volumes can be provisioned statically by the administrator, or dynamically, based on the StorageClass resource. A StorageClass contains pre-defined provisioners and parameters to create a Persistent Volume.<br />
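<br />
A minimal sketch of a StorageClass, assuming the AWS EBS provisioner (the provisioner and its parameters depend entirely on your infrastructure):<br />
<pre><br />
apiVersion: storage.k8s.io/v1<br />
kind: StorageClass<br />
metadata:<br />
  name: standard<br />
provisioner: kubernetes.io/aws-ebs<br />
parameters:<br />
  type: gp2<br />
</pre><br />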
<br />
A PersistentVolumeClaim (PVC) is a request for storage by a user. Users request Persistent Volume resources based on size, access modes, etc. Once a suitable Persistent Volume is found, it is bound to a Persistent Volume Claim. After a successful bind, the Persistent Volume Claim resource can be used in a Pod. Once a user finishes its work, the attached Persistent Volumes can be released. The underlying Persistent Volumes can then be reclaimed and recycled for future usage. See [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims Persistent Volumes] for details.<br />
<br />
;Access Modes<br />
* Each of the following access modes ''must'' be supported by the storage resource provider (e.g., NFS, AWS EBS, etc.) in order to be used.<br />
* ReadWriteOnce (RWO) &mdash; volume can be mounted as read/write by one node only.<br />
* ReadOnlyMany (ROX) &mdash; volume can be mounted read-only by many nodes.<br />
* ReadWriteMany (RWX) &mdash; volume can be mounted read/write by many nodes.<br />
A volume can only be mounted using one access mode at a time, regardless of the modes that are supported.<br />
<br />
; Example #1 - Using Host Volumes<br />
As an example of how to use volumes, we can modify our previous "webserver" Deployment (see above) to look like the following:<br />
<br />
$ cat webserver.yml<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
name: webserver<br />
spec:<br />
replicas: 3<br />
template:<br />
metadata:<br />
labels:<br />
app: webserver<br />
spec:<br />
containers:<br />
- name: webserver<br />
image: nginx:alpine<br />
ports:<br />
- containerPort: 80<br />
volumeMounts:<br />
- name: hostvol<br />
mountPath: /usr/share/nginx/html<br />
volumes:<br />
- name: hostvol<br />
hostPath:<br />
path: /home/docker/vol<br />
</pre><br />
<br />
And use the same Service:<br />
$ cat webserver-svc.yml<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
name: web-service<br />
labels:<br />
run: web-service<br />
spec:<br />
type: NodePort<br />
ports:<br />
- port: 80<br />
protocol: TCP<br />
selector:<br />
app: webserver<br />
</pre><br />
<br />
Then create the deployment and service:<br />
$ kubectl create -f webserver.yml<br />
$ kubectl create -f webserver-svc.yml<br />
<br />
Then, SSH into the Minikube VM (not the webserver Pods) and create the content for the host volume:<br />
$ minikube ssh<br />
minikube> mkdir -p /home/docker/vol<br />
minikube> echo "Christoph testing" > /home/docker/vol/index.html<br />
minikube> exit<br />
<br />
Get the webserver IP and port:<br />
$ minikube ip<br />
192.168.99.100<br />
$ kubectl get svc/web-service -o json | jq '.spec.ports[].nodePort'<br />
32610<br />
# OR<br />
$ minikube service web-service --url<br />
<nowiki>http://192.168.99.100:32610</nowiki><br />
<br />
$ curl <nowiki>http://192.168.99.100:32610</nowiki><br />
Christoph testing<br />
<br />
; Example #2 - Using NFS<br />
<br />
* First, set up a host to act as your NFS server and install the NFS server software on it (e.g., <code>`sudo apt-get install -y nfs-kernel-server`</code>).<br />
* On your NFS server, do the following:<br />
$ mkdir -p /var/nfs/general<br />
$ cat << EOF >>/etc/exports<br />
/var/nfs/general 10.100.1.2(rw,sync,no_subtree_check) 10.100.1.3(rw,sync,no_subtree_check) 10.100.1.4(rw,sync,no_subtree_check)<br />
EOF<br />
where the <code>10.x</code> IPs are the private IPs of your k8s nodes (both Master and Worker nodes).<br />
* Make sure to install <code>nfs-common</code> on each of the k8s nodes that will be connecting to the NFS server.<br />
<br />
Now, on the k8s Master node, create a Persistent Volume (PV) and Persistent Volume Claim (PVC):<br />
<br />
* Create a Persistent Volume (PV):<br />
$ cat << EOF >pv.yml<br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
name: mypv<br />
spec:<br />
capacity:<br />
storage: 1Gi<br />
volumeMode: Filesystem<br />
accessModes:<br />
- ReadWriteMany<br />
persistentVolumeReclaimPolicy: Recycle<br />
nfs:<br />
path: /var/nfs/general<br />
server: 10.100.1.10 # NFS Server's private IP<br />
readOnly: false<br />
EOF<br />
$ kubectl create --validate -f pv.yml<br />
$ kubectl get pv<br />
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE<br />
mypv 1Gi RWX Recycle Available<br />
* Create a Persistent Volume Claim (PVC):<br />
$ cat << EOF >pvc.yml<br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
name: nfs-pvc<br />
spec:<br />
accessModes:<br />
- ReadWriteMany<br />
resources:<br />
requests:<br />
storage: 1Gi<br />
EOF<br />
$ kubectl create --validate -f pvc.yml<br />
$ kubectl get pvc<br />
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE<br />
nfs-pvc Bound mypv 1Gi RWX<br />
$ kubectl get pv<br />
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE<br />
mypv 1Gi RWX Recycle Bound default/nfs-pvc 11m<br />
<br />
* Create a Pod:<br />
$ cat << EOF >nfs-pod.yml <br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: nfs-pod<br />
labels:<br />
name: nfs-pod<br />
spec:<br />
containers:<br />
- name: nfs-ctn<br />
image: busybox<br />
command:<br />
- sleep<br />
- "3600"<br />
volumeMounts:<br />
- name: nfsvol<br />
mountPath: /tmp<br />
restartPolicy: Always<br />
securityContext:<br />
fsGroup: 65534<br />
runAsUser: 65534<br />
volumes:<br />
- name: nfsvol<br />
persistentVolumeClaim:<br />
claimName: nfs-pvc<br />
EOF<br />
$ kubectl create --validate -f nfs-pod.yml<br />
$ kubectl get pods -o wide<br />
NAME READY STATUS RESTARTS AGE IP NODE<br />
busybox 1/1 Running 9 2d 10.244.2.22 k8s.worker01.local<br />
<br />
* Get a shell from the <code>nfs-pod</code> Pod:<br />
$ kubectl exec -it nfs-pod -- sh<br />
/ $ df -h<br />
Filesystem Size Used Available Use% Mounted on<br />
172.31.119.58:/var/nfs/general<br />
19.3G 1.8G 17.5G 9% /tmp<br />
...<br />
/ $ touch /tmp/this-is-from-the-pod<br />
<br />
* On the NFS server:<br />
$ ls -l /var/nfs/general/<br />
total 0<br />
-rw-r--r-- 1 nobody nogroup 0 Jan 18 23:32 this-is-from-the-pod<br />
<br />
It works!<br />
<br />
==ConfigMaps and Secrets==<br />
While deploying an application, we may need to pass runtime parameters such as configuration details, passwords, etc. For example, let's assume we need to deploy ten different applications for our customers, and, for each customer, we just need to change the name of the company in the UI. Instead of creating ten different Docker images, one per customer, we can just use the template image and pass the customer's name as a runtime parameter. In such cases, we can use the ConfigMap API resource. Similarly, when we want to pass sensitive information, we can use the Secret API resource. Think ''Secrets'' (for confidential data) and ''ConfigMaps'' (for non-confidential data).<br />
<br />
[https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/ ConfigMaps] allow you to decouple configuration artifacts from image content to keep containerized applications portable. Using ConfigMaps, we can pass configuration details as key-value pairs, which can be later consumed by Pods or any other system components, such as controllers. We can create ConfigMaps in two ways:<br />
<br />
* From literal values; and<br />
* From files.<br />
<br />
<br />
;ConfigMaps<br />
<br />
* Create a ConfigMap:<br />
$ kubectl create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2<br />
configmap "my-config" created<br />
$ kubectl get configmaps my-config -o yaml<br />
<pre><br />
apiVersion: v1<br />
data:<br />
key1: value1<br />
key2: value2<br />
kind: ConfigMap<br />
metadata:<br />
creationTimestamp: 2018-01-11T23:57:44Z<br />
name: my-config<br />
namespace: default<br />
resourceVersion: "117110"<br />
selfLink: /api/v1/namespaces/default/configmaps/my-config<br />
uid: 37a43e39-f72b-11e7-8370-08002721601f<br />
</pre><br />
$ kubectl describe configmap/my-config<br />
<pre><br />
Name: my-config<br />
Namespace: default<br />
Labels: <none><br />
Annotations: <none><br />
<br />
Data<br />
====<br />
key2:<br />
----<br />
value2<br />
key1:<br />
----<br />
value1<br />
Events: <none><br />
</pre><br />
<br />
; Create a ConfigMap from a configuration file<br />
<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: ConfigMap<br />
metadata:<br />
name: customer1<br />
data:<br />
TEXT1: Customer1_Company<br />
TEXT2: Welcomes You<br />
COMPANY: Customer1 Company Technology, LLC.<br />
EOF<br />
</pre><br />
<br />
We can get the values of the given key as environment variables inside a Pod. In the following example, while creating the Deployment, we are assigning values for environment variables from the customer1 ConfigMap:<br />
<pre><br />
....<br />
containers:<br />
- name: my-app<br />
image: foobar<br />
env:<br />
- name: MONGODB_HOST<br />
value: mongodb<br />
- name: TEXT1<br />
valueFrom:<br />
configMapKeyRef:<br />
name: customer1<br />
key: TEXT1<br />
- name: TEXT2<br />
valueFrom:<br />
configMapKeyRef:<br />
name: customer1<br />
key: TEXT2<br />
- name: COMPANY<br />
valueFrom:<br />
configMapKeyRef:<br />
name: customer1<br />
key: COMPANY<br />
....<br />
</pre><br />
With the above, the <code>TEXT1</code> environment variable will be set to <code>Customer1_Company</code>, the <code>TEXT2</code> environment variable will be set to <code>Welcomes You</code>, and so on.<br />
<br />
We can also mount a ConfigMap as a Volume inside a Pod. For each key, a file is created in the mount path, and the content of that file becomes the respective key's value. For details, see [https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#adding-configmap-data-to-a-volume here].<br />
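<br />
A minimal sketch of mounting the <code>customer1</code> ConfigMap as a Volume (each key appears as a file under the mount path):<br />
<pre><br />
....<br />
  containers:<br />
  - name: my-app<br />
    image: foobar<br />
    volumeMounts:<br />
    - name: config-vol<br />
      mountPath: /etc/config<br />
  volumes:<br />
  - name: config-vol<br />
    configMap:<br />
      name: customer1<br />
....<br />
</pre><br />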
<br />
You can also use ConfigMaps to configure your cluster to use, as an example, 8.8.8.8 and 8.8.4.4 as its upstream DNS servers:<br />
<pre><br />
kind: ConfigMap<br />
apiVersion: v1<br />
metadata:<br />
name: kube-dns<br />
namespace: kube-system<br />
data:<br />
upstreamNameservers: |<br />
["8.8.8.8", "8.8.4.4"]<br />
</pre><br />
<br />
; Secrets<br />
<br />
Objects of type [https://kubernetes.io/docs/concepts/configuration/secret/ Secret] are intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a Secret is safer and more flexible than putting it verbatim in a pod definition or in a docker image.<br />
<br />
As an example, assume that we have a WordPress blog application, in which our <code>wordpress</code> frontend connects to the [[MySQL]] database backend using a password. While creating the Deployment for <code>wordpress</code>, we could put the MySQL password directly in the Deployment's YAML file, but there the password would not be protected: it would be available to anyone who has access to the configuration file.<br />
<br />
In situations such as the one we just mentioned, the Secret object can help. With Secrets, we can share sensitive information like passwords, tokens, or keys in the form of key-value pairs, similar to ConfigMaps; thus, we can control how the information in a Secret is used, reducing the risk of accidental exposure. In Deployments or other system components, the Secret object is ''referenced'', without exposing its content.<br />
<br />
It is important to keep in mind that the Secret data is stored as plain text inside etcd. Administrators must limit the access to the API Server and etcd.<br />
<br />
To create a Secret using the <code>`kubectl create secret`</code> command, we need to first create a file with a password, and then pass it as an argument.<br />
<br />
* Create a file with your MySQL password:<br />
$ echo mysqlpasswd | tr -d '\n' > password.txt<br />
<br />
* Create the ''Secret'':<br />
$ kubectl create secret generic mysql-passwd --from-file=password.txt<br />
$ kubectl describe secret/mysql-passwd<br />
<pre><br />
Name: mysql-passwd<br />
Namespace: default<br />
Labels: <none><br />
Annotations: <none><br />
<br />
Type: Opaque<br />
<br />
Data<br />
====<br />
password.txt: 11 bytes<br />
</pre><br />
<br />
We can also create a Secret manually, using the YAML configuration file. With Secrets, each object data must be encoded using base64. If we want to have a configuration file for our Secret, we must first get the base64 encoding for our password:<br />
<br />
$ cat password.txt | base64<br />
bXlzcWxwYXNzd2Q=<br />
<br />
and then use it in the configuration file:<br />
<pre><br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
name: mysql-passwd<br />
type: Opaque<br />
data:<br />
password: bXlzcWxwYXNzd2Q=<br />
</pre><br />
Note that base64 encoding does not do any encryption and anyone can easily decode it:<br />
<br />
$ echo "bXlzcWxwYXNzd2Q=" | base64 -d # => mysqlpasswd<br />
<br />
Therefore, make sure you do not commit a Secret's configuration file in the source code.<br />
<br />
We can get Secrets to be used by containers in a Pod by mounting them as data volumes, or by exposing them as environment variables.<br />
<br />
We can reference a Secret and assign the value of its key as an environment variable (<code>WORDPRESS_DB_PASSWORD</code>):<br />
<pre><br />
.....<br />
spec:<br />
containers:<br />
- image: wordpress:4.7.3-apache<br />
name: wordpress<br />
env:<br />
- name: WORDPRESS_DB_HOST<br />
value: wordpress-mysql<br />
- name: WORDPRESS_DB_PASSWORD<br />
valueFrom:<br />
secretKeyRef:<br />
name: mysql-passwd<br />
key: password.txt<br />
.....<br />
</pre><br />
<br />
Or, we can also mount a Secret as a Volume inside a Pod. A file would be created for each key mentioned in the Secret, whose content would be the respective value. See [https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod here] for details.<br />
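<br />
A minimal sketch of mounting the <code>mysql-passwd</code> Secret as a Volume (the password would appear as the file <code>/etc/secret-data/password.txt</code> inside the container):<br />
<pre><br />
.....<br />
spec:<br />
  containers:<br />
  - image: wordpress:4.7.3-apache<br />
    name: wordpress<br />
    volumeMounts:<br />
    - name: secret-vol<br />
      mountPath: /etc/secret-data<br />
      readOnly: true<br />
  volumes:<br />
  - name: secret-vol<br />
    secret:<br />
      secretName: mysql-passwd<br />
.....<br />
</pre><br />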
<br />
==Ingress==<br />
Among the ServiceTypes mentioned earlier, NodePort and LoadBalancer are the most often used. For the LoadBalancer ServiceType, we need to have the support from the underlying infrastructure. Even after having the support, we may not want to use it for every Service, as LoadBalancer resources are limited and they can increase costs significantly. Managing the NodePort ServiceType can also be tricky at times, as we need to keep updating our proxy settings and keep track of the assigned ports. In this section, we will explore the Ingress API object, which is another method we can use to access our applications from the external world.<br />
<br />
An ''[https://kubernetes.io/docs/concepts/services-networking/ingress/ Ingress]'' is a collection of rules that allow inbound connections to reach the cluster Services. With Services, routing rules are attached to a given Service. They exist for as long as the Service exists. If we can somehow decouple the routing rules from the application, we can then update our application without worrying about its external access. This can be done using the Ingress resource. Ingress can provide load balancing, SSL/TLS termination, and name-based virtual hosting and/or routing.<br />
<br />
To allow the inbound connection to reach the cluster Services, Ingress configures a Layer 7 HTTP load balancer for Services and provides the following:<br />
<br />
* TLS (Transport Layer Security)<br />
* Name-based virtual hosting <br />
* Path-based routing<br />
* Custom rules.<br />
<br />
With Ingress, users do not connect directly to a Service. Users reach the Ingress endpoint and, from there, the request is forwarded to the respective Service. You can see an example Ingress definition below:<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
name: web-ingress<br />
spec:<br />
rules:<br />
- host: blue.example.com<br />
http:<br />
paths:<br />
- backend: <br />
serviceName: blue-service<br />
servicePort: 80<br />
- host: green.example.com<br />
http:<br />
paths:<br />
- backend:<br />
serviceName: green-service<br />
servicePort: 80<br />
</pre><br />
<br />
According to the example just provided, user requests to both <code>blue.example.com</code> and <code>green.example.com</code> would go to the same Ingress endpoint and, from there, be forwarded to <code>blue-service</code> and <code>green-service</code>, respectively. Here, we have seen an example of a Name-Based Virtual Hosting Ingress rule.<br />
<br />
We can also have Fan Out Ingress rules, in which requests like <code>example.com/blue</code> and <code>example.com/green</code> are forwarded to <code>blue-service</code> and <code>green-service</code>, respectively, as sketched below.<br />
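<br />
A minimal sketch of such a Fan Out rule (same host, two paths), using the same API schema as the example above:<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
  name: fanout-ingress<br />
spec:<br />
  rules:<br />
  - host: example.com<br />
    http:<br />
      paths:<br />
      - path: /blue<br />
        backend:<br />
          serviceName: blue-service<br />
          servicePort: 80<br />
      - path: /green<br />
        backend:<br />
          serviceName: green-service<br />
          servicePort: 80<br />
</pre><br />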
<br />
To secure an Ingress, you must create a ''Secret''. The TLS secret must contain keys named <code>tls.crt</code> and <code>tls.key</code>, which contain the certificate and private key to use for TLS.<br />
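<br />
A sketch of creating such a Secret from an existing certificate/key pair and referencing it from the Ingress spec (the file and Secret names here are placeholders):<br />
$ kubectl create secret tls web-tls --cert=tls.crt --key=tls.key<br />
<pre><br />
spec:<br />
  tls:<br />
  - hosts:<br />
    - blue.example.com<br />
    secretName: web-tls<br />
  rules:<br />
  ....<br />
</pre><br />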
<br />
The Ingress resource does not do any request forwarding by itself. All of the magic is done using the ''Ingress Controller''.<br />
<br />
; Ingress Controller<br />
<br />
An Ingress Controller is an application which watches the Master Node's API Server for changes in the Ingress resources and updates the Layer 7 load balancer accordingly. Kubernetes has different Ingress Controllers, and, if needed, we can also build our own. GCE L7 Load Balancer and Nginx Ingress Controller are examples of Ingress Controllers.<br />
<br />
Minikube v0.14.0 and above ships with the Nginx Ingress Controller as an add-on. It can be easily enabled by running the following command:<br />
<br />
$ minikube addons enable ingress<br />
<br />
Once the Ingress Controller is deployed, we can create an Ingress resource using the <code>kubectl create</code> command. For example, if we create an <code>example-ingress.yml</code> file with the content above, then, we can use the following command to create an Ingress resource:<br />
<br />
$ kubectl create -f example-ingress.yml<br />
<br />
With the Ingress resource we just created, we should now be able to access the <code>blue-service</code> and <code>green-service</code> Services using the blue.example.com and green.example.com URLs. As our current setup is on Minikube, we will need to update the hosts file (<code>/etc/hosts</code>) on our workstation so that those URLs point to Minikube's IP:<br />
<br />
$ cat /etc/hosts<br />
127.0.0.1 localhost<br />
::1 localhost<br />
192.168.99.100 blue.example.com green.example.com <br />
<br />
Once this is done, we can now open blue.example.com and green.example.com in a browser and access the application.<br />
<br />
==Labels and Selectors==<br />
''[https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ Labels]'' are key-value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key-value labels defined. Each key must be unique for a given object.<br />
<pre><br />
"labels": {<br />
"key1" : "value1",<br />
"key2" : "value2"<br />
}<br />
</pre><br />
<br />
;Syntax and character set<br />
<br />
Labels are key-value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (<code>/</code>). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (<code>[a-z0-9A-Z]</code>) with dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (<code>.</code>), not longer than 253 characters in total, followed by a slash (<code>/</code>). If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (e.g. kube-scheduler, kube-controller-manager, kube-apiserver, kubectl, or other third-party automation) which add labels to end-user objects must specify a prefix. The <code>kubernetes.io/</code> prefix is reserved for Kubernetes core components.<br />
<br />
Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (<code>[a-z0-9A-Z]</code>) with dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between.<br />
<br />
;Label selectors<br />
<br />
Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).<br />
<br />
Via a label selector, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.<br />
<br />
The API currently supports two types of selectors: equality-based and set-based. A label selector can be made of multiple requirements which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical AND (<code>&&</code>) operator.<br />
<br />
An empty label selector (that is, one with zero requirements) selects every object in the collection.<br />
<br />
A null label selector (which is only possible for optional selector fields) selects no objects.<br />
<br />
Note: the label selectors of two controllers must not overlap within a Namespace; otherwise, they will fight with each other.<br />
Note that labels are not restricted to pods. You can apply them to all sorts of objects, such as nodes or services.<br />
<br />
;Examples<br />
<br />
* Label a given node:<br />
$ kubectl label node k8s.worker1.local network=gigabit<br />
<br />
* With ''Equality-based'', one may write:<br />
$ kubectl get pods -l environment=production,tier=frontend<br />
<br />
* Using ''set-based'' requirements:<br />
$ kubectl get pods -l 'environment in (production),tier in (frontend)'<br />
<br />
* Implement the OR operator on values:<br />
$ kubectl get pods -l 'environment in (production, qa)'<br />
<br />
* Restricting negative matching via exists operator:<br />
$ kubectl get pods -l 'environment,environment notin (frontend)'<br />
<br />
* Show the current labels on your pods:<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d <none><br />
nfs-pod 1/1 Running 16 6d name=nfs-pod<br />
<br />
* Add a label to an already running/existing pod:<br />
$ kubectl label pods busybox owner=christoph<br />
pod "busybox" labeled<br />
$ kubectl get pods --show-labels<br />
NAME      READY     STATUS    RESTARTS   AGE   LABELS<br />
busybox   1/1       Running   25         9d    owner=christoph<br />
nfs-pod   1/1       Running   16         6d    name=nfs-pod<br />
<br />
* Select a pod by its label:<br />
$ kubectl get pods --selector owner=christoph<br />
#~OR~<br />
$ kubectl get pods -l owner=christoph<br />
NAME      READY     STATUS    RESTARTS   AGE<br />
busybox   1/1       Running   25         9d<br />
<br />
* Delete/remove a given label from a given pod:<br />
$ kubectl label pod busybox owner-<br />
pod "busybox" labeled<br />
$ kubectl get pods --show-labels<br />
NAME      READY     STATUS    RESTARTS   AGE   LABELS<br />
busybox   1/1       Running   25         9d    &lt;none&gt;<br />
<br />
* Get all pods that belong to either the <code>production</code> ''or'' the <code>development</code> environment (the <code>in</code> operator matches any of the listed values):<br />
$ kubectl get pods -l 'env in (production, development)'<br />
<br />
; Using Labels to select a Node on which to schedule a Pod:<br />
<br />
* Label a Node that uses an SSD as its primary disk:<br />
$ kubectl label node k8s.worker1.local hdd=ssd<br />
<br />
<pre><br />
$ cat << EOF >busybox.yml<br />
kind: Pod<br />
apiVersion: v1<br />
metadata:<br />
  name: busybox<br />
  namespace: default<br />
spec:<br />
  containers:<br />
  - name: busybox<br />
    image: busybox<br />
    command:<br />
    - sleep<br />
    - "300"<br />
    imagePullPolicy: IfNotPresent<br />
  restartPolicy: Always<br />
  nodeSelector:<br />
    hdd: ssd<br />
EOF<br />
</pre><br />
<br />
==Annotations==<br />
With ''[https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ Annotations]'', we can attach arbitrary, non-identifying metadata to objects, in a key-value format:<br />
<br />
<pre><br />
"annotations": {<br />
"key1" : "value1",<br />
"key2" : "value2"<br />
}<br />
</pre><br />
The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels.<br />
<br />
In contrast to Labels, annotations are not used to identify and select objects. Annotations can be used to:<br />
<br />
* Build/release IDs, the git branch used, etc.<br />
* Phone numbers of the persons responsible, or directory entries specifying where such information can be found<br />
* Pointers to logging, monitoring, analytics, audit repositories, debugging tools, etc.<br />
<br />
For example, while creating a Deployment, we can add a description like the one below:<br />
<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: webserver<br />
  annotations:<br />
    description: Deployment-based PoC dated 12 January 2018<br />
....<br />
....<br />
</pre><br />
<br />
We can look at annotations while describing an object:<br />
<br />
<pre><br />
$ kubectl describe deployment webserver<br />
Name:               webserver<br />
Namespace:          default<br />
CreationTimestamp:  Fri, 12 Jan 2018 13:18:23 -0800<br />
Labels:             app=webserver<br />
Annotations:        deployment.kubernetes.io/revision=1<br />
                    description=Deployment-based PoC dated 12 January 2018<br />
...<br />
...<br />
</pre><br />
<br />
==Jobs and CronJobs==<br />
<br />
===Jobs===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#what-is-a-job Job]'' creates one or more pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the Job itself is complete. Deleting a Job will clean up the pods it created.<br />
<br />
A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).<br />
<br />
A Job can also be used to run multiple Pods in parallel.<br />
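<br />
For example, the following is a minimal sketch of a parallel Job (the name <code>parallel-demo</code> and the busybox command are placeholders); <code>completions</code> is the number of successful Pod terminations required, and <code>parallelism</code> caps how many Pods run at once:<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
  name: parallel-demo<br />
spec:<br />
  completions: 6   # the Job is complete after 6 successful Pod runs<br />
  parallelism: 2   # run at most 2 Pods at any one time<br />
  template:<br />
    spec:<br />
      containers:<br />
      - name: worker<br />
        image: busybox<br />
        command: ["sh", "-c", "echo processing one work item; sleep 5"]<br />
      restartPolicy: Never<br />
</pre><br />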
<br />
; Example<br />
<br />
* Below is an example ''Job'' config. It computes π to 2000 places and prints it out. It takes around 10 seconds to complete.<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
  name: pi<br />
spec:<br />
  template:<br />
    spec:<br />
      containers:<br />
      - name: pi<br />
        image: perl<br />
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]<br />
      restartPolicy: Never<br />
  backoffLimit: 4<br />
</pre><br />
$ kubectl create -f ./job-pi.yml<br />
job "pi" created<br />
$ kubectl describe jobs/pi<br />
<pre><br />
Name:           pi<br />
Namespace:      default<br />
Selector:       controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
Labels:         controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
                job-name=pi<br />
Annotations:    &lt;none&gt;<br />
Parallelism:    1<br />
Completions:    1<br />
Start Time:     Fri, 12 Jan 2018 13:25:23 -0800<br />
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed<br />
Pod Template:<br />
  Labels:  controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
           job-name=pi<br />
  Containers:<br />
   pi:<br />
    Image:  perl<br />
    Port:   &lt;none&gt;<br />
    Command:<br />
      perl<br />
      -Mbignum=bpi<br />
      -wle<br />
      print bpi(2000)<br />
    Environment:  &lt;none&gt;<br />
    Mounts:       &lt;none&gt;<br />
  Volumes:        &lt;none&gt;<br />
Events:<br />
  Type    Reason            Age   From            Message<br />
  ----    ------            ----  ----            -------<br />
  Normal  SuccessfulCreate  8s    job-controller  Created pod: pi-rfvvw<br />
</pre><br />
<br />
* Get the result of the Job run (i.e., the value of π):<br />
$ pods=$(kubectl get pods --show-all --selector=job-name=pi --output=jsonpath={.items..metadata.name})<br />
$ echo $pods<br />
pi-rfvvw<br />
$ kubectl logs ${pods}<br />
3.1415926535897932384626433832795028841971693...<br />
<br />
===CronJobs===<br />
<br />
Support for creating ''Jobs'' at specified times/dates (i.e. cron) is available in Kubernetes 1.4. See [https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/ here] for details.<br />
<br />
Below is an example ''CronJob''. Every minute, it runs a simple Job that prints the current time and echoes a "hello" string:<br />
$ cat << EOF >cronjob.yml<br />
apiVersion: batch/v1beta1<br />
kind: CronJob<br />
metadata:<br />
  name: hello<br />
spec:<br />
  schedule: "*/1 * * * *"<br />
  jobTemplate:<br />
    spec:<br />
      template:<br />
        spec:<br />
          containers:<br />
          - name: hello<br />
            image: busybox<br />
            args:<br />
            - /bin/sh<br />
            - -c<br />
            - date; echo Hello from the Kubernetes cluster<br />
          restartPolicy: OnFailure<br />
EOF<br />
<br />
$ kubectl create -f cronjob.yml<br />
cronjob "hello" created<br />
<br />
$ kubectl get cronjob hello<br />
NAME      SCHEDULE      SUSPEND   ACTIVE    LAST SCHEDULE   AGE<br />
hello     */1 * * * *   False     0         &lt;none&gt;          11s<br />
<br />
$ kubectl get jobs --watch<br />
NAME               DESIRED   SUCCESSFUL   AGE<br />
hello-1515793140   1         1            7s<br />
<br />
$ kubectl get cronjob hello<br />
NAME      SCHEDULE      SUSPEND   ACTIVE    LAST SCHEDULE   AGE<br />
hello     */1 * * * *   False     0         22s             48s<br />
<br />
$ pods=$(kubectl get pods -a --selector=job-name=hello-1515793140 --output=jsonpath={.items..metadata.name})<br />
$ echo $pods<br />
hello-1515793140-plp8g<br />
<br />
$ kubectl logs $pods<br />
Fri Jan 12 21:39:07 UTC 2018<br />
Hello from the Kubernetes cluster<br />
<br />
* Cleanup<br />
$ kubectl delete cronjob hello<br />
<br />
==Quota Management==<br />
When there are many users sharing a given Kubernetes cluster, there is always a concern for fair usage. To address this concern, administrators can use the ''[https://kubernetes.io/docs/concepts/policy/resource-quotas/ ResourceQuota]'' object, which provides constraints that limit aggregate resource consumption per Namespace.<br />
<br />
We can have the following types of quotas per Namespace:<br />
<br />
* Compute Resource Quota: We can limit the total sum of compute resources (CPU, memory, etc.) that can be requested in a given Namespace.<br />
* Storage Resource Quota: We can limit the total sum of storage resources (PersistentVolumeClaims, requests.storage, etc.) that can be requested.<br />
* Object Count Quota: We can restrict the number of objects of a given type (pods, ConfigMaps, PersistentVolumeClaims, ReplicationControllers, Services, Secrets, etc.).<br />
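<br />
For example, a minimal sketch of a compute ResourceQuota (the quota name and the <code>development</code> Namespace are placeholder assumptions):<br />
<pre><br />
apiVersion: v1<br />
kind: ResourceQuota<br />
metadata:<br />
  name: compute-quota<br />
  namespace: development<br />
spec:<br />
  hard:<br />
    pods: "10"             # at most 10 Pods in the Namespace<br />
    requests.cpu: "4"      # total CPU requested by all Pods<br />
    requests.memory: 8Gi   # total memory requested by all Pods<br />
    limits.cpu: "8"<br />
    limits.memory: 16Gi<br />
</pre><br />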
<br />
==Daemon Sets==<br />
In some cases, like collecting monitoring data from all nodes, or running a storage daemon on all nodes, etc., we need a specific type of Pod running on all nodes at all times. A ''[https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ DaemonSet]'' is the object that allows us to do just that. <br />
<br />
Whenever a node is added to the cluster, a Pod from a given DaemonSet is created on it. When the node dies, the respective Pods are garbage collected. If a DaemonSet is deleted, all Pods it created are deleted as well.<br />
<br />
Example DaemonSet:<br />
<pre><br />
kind: DaemonSet<br />
apiVersion: apps/v1<br />
metadata:<br />
  name: pause-ds<br />
spec:<br />
  selector:<br />
    matchLabels:<br />
      quiet: "pod"<br />
  template:<br />
    metadata:<br />
      labels:<br />
        quiet: pod<br />
    spec:<br />
      tolerations:<br />
      - key: node-role.kubernetes.io/master<br />
        effect: NoSchedule<br />
      containers:<br />
      - name: pause-container<br />
        image: k8s.gcr.io/pause:2.0<br />
</pre><br />
<br />
==Stateful Sets==<br />
The ''[https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/ StatefulSet]'' controller is used for applications which require a unique identity, such as a stable name, network identity, and strict ordering of deployment (e.g., a MySQL or etcd cluster).<br />
<br />
The StatefulSet controller provides identity and guaranteed ordering of deployment and scaling to Pods.<br />
<br />
Note: Before Kubernetes 1.5, the StatefulSet controller was referred to as ''PetSet''.<br />
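<br />
For example, a minimal StatefulSet sketch (the names and image are placeholders; note that <code>serviceName</code> must reference an existing headless Service). The Pods it creates get stable, ordered names (<code>web-0</code>, <code>web-1</code>):<br />
<pre><br />
apiVersion: apps/v1<br />
kind: StatefulSet<br />
metadata:<br />
  name: web<br />
spec:<br />
  serviceName: "web"   # must match a headless Service<br />
  replicas: 2<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />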
<br />
==Role Based Access Control (RBAC)==<br />
''[https://kubernetes.io/docs/admin/authorization/rbac/ Role-based access control]'' (RBAC) is an authorization mechanism for managing permissions around Kubernetes resources.<br />
<br />
Using the RBAC API, we define a role which contains a set of additive permissions. Within a Namespace, a role is defined using the Role object. For a cluster-wide role, we need to use the ClusterRole object.<br />
<br />
Once the roles are defined, we can bind them to a user or a set of users using ''RoleBinding'' and ''ClusterRoleBinding''.<br />
<br />
===Using RBAC with minikube===<br />
<br />
* Start up minikube with RBAC support:<br />
$ minikube start --kubernetes-version=v1.9.0 --extra-config=apiserver.Authorization.Mode=RBAC<br />
<br />
* Setup RBAC:<br />
<pre><br />
$ cat rbac-cluster-role-binding.yml<br />
# kubectl create clusterrolebinding add-on-cluster-admin \<br />
#   --clusterrole=cluster-admin --serviceaccount=kube-system:default<br />
#<br />
kind: ClusterRoleBinding<br />
apiVersion: rbac.authorization.k8s.io/v1alpha1<br />
metadata:<br />
  name: kube-system-sa<br />
subjects:<br />
- kind: Group<br />
  name: system:serviceaccounts:kube-system<br />
roleRef:<br />
  kind: ClusterRole<br />
  name: cluster-admin<br />
  apiGroup: rbac.authorization.k8s.io<br />
</pre><br />
<br />
<pre><br />
$ cat rbac-setup.yml<br />
apiVersion: v1<br />
kind: Namespace<br />
metadata:<br />
  name: rbac<br />
<br />
---<br />
apiVersion: v1<br />
kind: ServiceAccount<br />
metadata:<br />
  name: viewer<br />
  namespace: rbac<br />
<br />
---<br />
apiVersion: v1<br />
kind: ServiceAccount<br />
metadata:<br />
  name: admin<br />
  namespace: rbac<br />
</pre><br />
<br />
* Create a Role Binding:<br />
<pre><br />
# kubectl create rolebinding reader-binding \<br />
#   --clusterrole=reader \<br />
#   --user=serviceaccount:reader \<br />
#   --namespace=rbac<br />
#<br />
kind: RoleBinding<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  namespace: rbac<br />
  name: reader-binding<br />
roleRef:<br />
  apiGroup: rbac.authorization.k8s.io<br />
  kind: Role<br />
  name: reader<br />
subjects:<br />
- apiGroup: rbac.authorization.k8s.io<br />
  kind: ServiceAccount<br />
  name: reader<br />
</pre><br />
<br />
* Create a Role:<br />
<pre><br />
$ cat rbac-role.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  namespace: default<br />
  name: reader<br />
rules:<br />
- apiGroups: [""]<br />
  resources: ["*"]<br />
  verbs: ["get", "watch", "list"]<br />
</pre><br />
<br />
* Create an RBAC "core-reader" Role with specific resources and "verbs" (i.e., the "core-reader" role can "get", "list", etc. on specific resources, such as Pods, ConfigMaps, and Secrets, as well as Jobs and Deployments):<br />
<pre><br />
$ cat rbac-role-core-reader.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  name: core-reader<br />
rules:<br />
- apiGroups:<br />
  - ""<br />
  resources:<br />
  - pods<br />
  - configmaps<br />
  - secrets<br />
  verbs:<br />
  - get<br />
  - watch<br />
  - list<br />
- apiGroups:<br />
  - batch<br />
  - extensions<br />
  resources:<br />
  - jobs<br />
  - deployments<br />
  verbs:<br />
  - get<br />
  - watch<br />
  - list<br />
</pre><br />
<br />
* "Gotchas":<br />
<pre><br />
$ cat rbac-gotcha-1.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  name: gotcha-1<br />
rules:<br />
- nonResourceURLs:<br />
  - /healthz<br />
  verbs:<br />
  - get<br />
  - post<br />
- apiGroups:<br />
  - batch<br />
  - extensions<br />
  resources:<br />
  - deployments<br />
  verbs:<br />
  - "*"<br />
</pre><br />
<pre><br />
$ cat rbac-gotcha-2.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  name: gotcha-2<br />
rules:<br />
- apiGroups:<br />
  - ""<br />
  resources:<br />
  - secrets<br />
  verbs:<br />
  - "*"<br />
  resourceNames:<br />
  - "my_secret"<br />
- apiGroups:<br />
  - ""<br />
  resources:<br />
  - pods/log<br />
  verbs:<br />
  - "get"<br />
</pre><br />
<br />
; Privilege escalation<br />
* You cannot create a Role or ClusterRole that grants permissions you do not have.<br />
* You cannot create a RoleBinding or ClusterRoleBinding that binds to a Role with permissions you do not have (unless you have been explicitly given "bind" permission on the role).<br />
<br />
* Grant explicit bind access:<br />
<pre><br />
kind: ClusterRole<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  name: role-grantor<br />
rules:<br />
- apiGroups: ["rbac.authorization.k8s.io"]<br />
  resources: ["rolebindings"]<br />
  verbs: ["create"]<br />
- apiGroups: ["rbac.authorization.k8s.io"]<br />
  resources: ["clusterroles"]<br />
  verbs: ["bind"]<br />
  resourceNames: ["admin", "edit", "view"]<br />
</pre><br />
<br />
===Testing RBAC permissions===<br />
<br />
* Example of RBAC not allowing a verb-noun:<br />
<pre><br />
$ kubectl auth can-i create pods<br />
no - Required "container.pods.create" permission.<br />
</pre><br />
<br />
* Example of RBAC allowing a verb-noun:<br />
<pre><br />
$ kubectl auth can-i create pods<br />
yes<br />
</pre><br />
<br />
* A more complex example:<br />
<pre><br />
$ kubectl auth can-i update deployments.apps \<br />
--subresource="scale" --as-group="$group" --as="$user" -n $ns<br />
</pre><br />
<br />
==Federation==<br />
With the ''[https://kubernetes.io/docs/concepts/cluster-administration/federation/ Kubernetes Cluster Federation]'' we can manage multiple Kubernetes clusters from a single control plane. We can sync resources across the clusters and have cross-cluster discovery. This allows us to do Deployments across regions and access them using a global DNS record.<br />
<br />
Federation is very useful when we want to build a hybrid solution, in which we can have one cluster running inside our private datacenter and another one on the public cloud. We can also assign weights to each cluster in the Federation, to distribute the load as desired.<br />
<br />
==Helm==<br />
To deploy an application, we use different Kubernetes manifests, such as Deployments, Services, Volume Claims, Ingress, etc. Sometimes, it can be tiresome to deploy them one by one. We can bundle all those manifests, after templatizing them, into a well-defined format, along with other metadata. Such a bundle is referred to as a ''Chart''. These Charts can then be served via repositories, such as those that we have for rpm and deb packages.<br />
<br />
''[https://github.com/kubernetes/helm Helm]'' is a package manager (analogous to yum and apt) for Kubernetes, which can install/update/delete those Charts in the Kubernetes cluster.<br />
<br />
Helm has two components:<br />
<br />
* A client called <code>helm</code>, which runs on your workstation; and<br />
* A server called tiller, which runs inside your Kubernetes cluster.<br />
<br />
The client helm connects to the server tiller to manage Charts. Charts submitted for Kubernetes are available [https://github.com/kubernetes/charts here].<br />
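<br />
As a sketch of a typical workflow (assuming Helm 2.x, where tiller runs in the cluster; the <code>stable/mysql</code> Chart is just an example):<br />
<pre><br />
$ helm init                  # install tiller into the current cluster<br />
$ helm repo update           # refresh the list of available Charts<br />
$ helm search mysql          # find a Chart in the configured repositories<br />
$ helm install stable/mysql  # deploy the Chart into the cluster<br />
$ helm list                  # show the releases helm has installed<br />
</pre><br />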
<br />
==Monitoring and logging==<br />
In Kubernetes, we have to collect resource usage data from Pods, Services, nodes, etc., to understand the overall resource consumption and to make scaling decisions for a given application. Two popular Kubernetes monitoring solutions are Heapster and Prometheus.<br />
<br />
[https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/ Heapster] is a cluster-wide aggregator of monitoring and event data, which is natively supported on Kubernetes. <br />
<br />
[https://prometheus.io/ Prometheus], now part of [https://www.cncf.io/ CNCF] (Cloud Native Computing Foundation), can also be used to scrape the resource usage from different Kubernetes components and objects. Using its client libraries, we can also instrument the code of our application.<br />
<br />
Another important aspect for troubleshooting and debugging is logging, in which we collect the logs from different components of a given system. In Kubernetes, we can collect logs from different cluster components, objects, nodes, etc. The most common approach is to run [https://www.fluentd.org/ fluentd] with a custom configuration as an agent on the nodes and ship the collected logs to [https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/ Elasticsearch]. fluentd is an open source data collector, which is also part of CNCF.<br />
<br />
[https://github.com/google/cadvisor cAdvisor] is an open source container resource usage and performance analysis agent. It auto-discovers all containers on a node and collects CPU, memory, file system, and network usage statistics. It provides overall machine usage by analyzing the "root" container on the machine. It exposes a simple UI for local containers on port 4194.<br />
<br />
==Security==<br />
===Configure network policies===<br />
A ''[https://kubernetes.io/docs/concepts/services-networking/network-policies/ Network Policy]'' is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.<br />
<br />
''NetworkPolicy'' resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.<br />
<br />
* Specification of how groups of pods may communicate<br />
* Use labels to select pods and define rules<br />
* Implemented by the network plugin<br />
* Pods are non-isolated by default<br />
* Pods are isolated when a Network Policy selects them<br />
<br />
;Example NetworkPolicy<br />
Create a "default" isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods:<br />
<pre><br />
apiVersion: networking.k8s.io/v1<br />
kind: NetworkPolicy<br />
metadata:<br />
  name: default-deny<br />
spec:<br />
  podSelector: {}<br />
  policyTypes:<br />
  - Ingress<br />
</pre><br />
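<br />
As a sketch of a more selective policy (all names and labels here are placeholder assumptions), the following would allow ingress to Pods labeled <code>app=nginx</code> only from Pods labeled <code>access=granted</code>, and only on TCP port 80:<br />
<pre><br />
apiVersion: networking.k8s.io/v1<br />
kind: NetworkPolicy<br />
metadata:<br />
  name: allow-nginx-access<br />
spec:<br />
  podSelector:<br />
    matchLabels:<br />
      app: nginx<br />
  ingress:<br />
  - from:<br />
    - podSelector:<br />
        matchLabels:<br />
          access: granted<br />
    ports:<br />
    - protocol: TCP<br />
      port: 80<br />
</pre><br />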
<br />
===TLS certificates for cluster components===<br />
Get [https://github.com/OpenVPN/easy-rsa easy-rsa].<br />
<br />
$ ./easyrsa init-pki<br />
$ MASTER_IP=10.100.1.2<br />
$ ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass<br />
<br />
$ cat rsa-request.sh<br />
<pre><br />
#!/bin/bash<br />
# NOTE: no whitespace before the trailing backslashes on the SAN lines, so the<br />
# quoted strings concatenate into a single --subject-alt-name argument.<br />
./easyrsa --subject-alt-name="IP:${MASTER_IP},"\<br />
"DNS:kubernetes,"\<br />
"DNS:kubernetes.default,"\<br />
"DNS:kubernetes.default.svc,"\<br />
"DNS:kubernetes.default.svc.cluster,"\<br />
"DNS:kubernetes.default.svc.cluster.local" \<br />
--days=10000 \<br />
build-server-full server nopass<br />
</pre><br />
<br />
<pre><br />
pki/<br />
├── ca.crt<br />
├── certs_by_serial<br />
│ └── F3A6F7D34BC84330E7375FA20C8441DF.pem<br />
├── index.txt<br />
├── index.txt.attr<br />
├── index.txt.old<br />
├── issued<br />
│ └── server.crt<br />
├── private<br />
│ ├── ca.key<br />
│ └── server.key<br />
├── reqs<br />
│ └── server.req<br />
├── serial<br />
└── serial.old<br />
</pre><br />
<br />
* Figure out the paths of the existing TLS certs/keys with the following command:<br />
<pre><br />
$ ps aux | grep [a]piserver | sed -n -e 's/^.*\(kube-apiserver \)/\1/p' | tr ' ' '\n'<br />
kube-apiserver<br />
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota<br />
--requestheader-extra-headers-prefix=X-Remote-Extra-<br />
--advertise-address=172.31.118.138<br />
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt<br />
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt<br />
--requestheader-username-headers=X-Remote-User<br />
--service-cluster-ip-range=10.96.0.0/12<br />
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key<br />
--secure-port=6443<br />
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key<br />
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname<br />
--requestheader-group-headers=X-Remote-Group<br />
--requestheader-allowed-names=front-proxy-client<br />
--service-account-key-file=/etc/kubernetes/pki/sa.pub<br />
--insecure-port=0<br />
--enable-bootstrap-token-auth=true<br />
--allow-privileged=true<br />
--client-ca-file=/etc/kubernetes/pki/ca.crt<br />
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt<br />
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key<br />
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt<br />
--authorization-mode=Node,RBAC<br />
--etcd-servers=http://127.0.0.1:2379<br />
</pre><br />
<br />
===Security Contexts===<br />
A ''[https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Security Context]'' defines privilege and access control settings for a Pod or Container. Security context settings include:<br />
<br />
* Discretionary Access Control: Permission to access an object, like a file, is based on user ID (UID) and group ID (GID).<br />
* Security Enhanced Linux (SELinux): Objects are assigned security labels.<br />
* Running as privileged or unprivileged.<br />
* Linux Capabilities: Give a process some privileges, but not all the privileges of the root user.<br />
* AppArmor: Use program profiles to restrict the capabilities of individual programs.<br />
* Seccomp: Filter a process's system calls.<br />
* AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This boolean directly controls whether the <code>no_new_privs</code> flag gets set on the container process. <code>AllowPrivilegeEscalation</code> is always true when the container: 1) is run as Privileged; or 2) has <code>CAP_SYS_ADMIN</code>.<br />
<br />
; Example #1<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: security-context-demo<br />
spec:<br />
  securityContext:<br />
    runAsUser: 1000<br />
    fsGroup: 2000<br />
  volumes:<br />
  - name: sec-ctx-vol<br />
    emptyDir: {}<br />
  containers:<br />
  - name: sec-ctx-demo<br />
    image: gcr.io/google-samples/node-hello:1.0<br />
    volumeMounts:<br />
    - name: sec-ctx-vol<br />
      mountPath: /data/demo<br />
    securityContext:<br />
      allowPrivilegeEscalation: false<br />
</pre><br />
<br />
==Taints and tolerations==<br />
[https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature Node affinity] is a property of pods that ''attracts'' them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite – they allow a node to ''repel'' a set of pods.<br />
<br />
[https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ Taints and tolerations] work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks the node such that the node should not accept any pods that do not tolerate the taints. Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.<br />
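<br />
For example (the node name and the <code>dedicated=gpu</code> key/value are placeholder assumptions), taint a node, and then tolerate that taint in a Pod spec:<br />
 $ kubectl taint nodes k8s.worker1.local dedicated=gpu:NoSchedule<br />
<pre><br />
spec:<br />
  tolerations:<br />
  - key: "dedicated"<br />
    operator: "Equal"<br />
    value: "gpu"<br />
    effect: "NoSchedule"<br />
</pre><br />
As with labels, a taint can be removed by appending a dash to its key:<br />
 $ kubectl taint nodes k8s.worker1.local dedicated-<br />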
<br />
==Remove a node from a cluster==<br />
<br />
* On the k8s Master Node:<br />
k8s-master> $ kubectl drain k8s-worker-02 --ignore-daemonsets<br />
<br />
* On the k8s Worker Node (the one you wish to remove from the cluster):<br />
k8s-worker-02> $ kubeadm reset<br />
[preflight] Running pre-flight checks.<br />
[reset] Stopping the kubelet service.<br />
[reset] Unmounting mounted directories in "/var/lib/kubelet"<br />
[reset] Removing kubernetes-managed containers.<br />
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.<br />
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]<br />
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]<br />
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]<br />
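<br />
* Finally, back on the k8s Master Node, delete the node object from the API (assuming you want the node removed entirely rather than re-joined later):<br />
 k8s-master> $ kubectl delete node k8s-worker-02<br />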
<br />
==Networking==<br />
<br />
; Useful network ranges<br />
* Choose ranges for the Pods and Service CIDR blocks<br />
* Generally, any of the RFC-1918 ranges work well<br />
** 10.0.0.0/8<br />
** 172.16.0.0/12<br />
** 192.168.0.0/16<br />
<br />
Every Pod can communicate directly with every other Pod<br />
<br />
;K8s Node<br />
* A general-purpose compute instance with at least one network interface<br />
** The host OS will have a real-world IP for accessing the machine<br />
** K8s Pods are given ''virtual'' interfaces connected to an internal network<br />
** Each node has a running network stack<br />
* Kube-proxy runs in the OS to control IPtables for:<br />
** Services<br />
** NodePorts<br />
<br />
;Networking substrate<br />
* Most k8s network stacks allocate subnets for each node<br />
** The network stack is responsible for arbitration of subnets and IPs<br />
** The network stack is also responsible for moving packets around the network<br />
* Pods have a unique, routable IP on the Pod CIDR block<br />
** The CIDR block is ''not'' directly reachable from outside the k8s cluster<br />
** The magic of IPtables allows the Pods to make outgoing connections<br />
* Ensure that k8s has the correct Pods and Service CIDR blocks<br />
<br />
The Pod network is not seen on the physical network (i.e., it is encapsulated; you will not be able to use <code>tcpdump</code> on it from the physical network)<br />
<br />
;Making the setup easier &mdash; CNI<br />
* Use the Container Network Interface (CNI)<br />
* Relieves k8s of needing a specific network configuration<br />
* It is activated by supplying <code>--network-plugin=cni, --cni-conf-dir, --cni-bin-dir</code> to kubelet<br />
** Typical configuration directory: <code>/etc/cni/net.d</code><br />
** Typical bin directory: <code>/opt/cni/bin</code><br />
* Allows for multiple backends to be used: linux-bridge, macvlan, ipvlan, Open vSwitch, network stacks<br />
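<br />
For example, a flannel CNI configuration dropped into <code>/etc/cni/net.d/</code> might look something like the following sketch (the file name and exact contents vary by plugin and version):<br />
<pre><br />
$ cat /etc/cni/net.d/10-flannel.conf<br />
{<br />
  "name": "cbr0",<br />
  "type": "flannel",<br />
  "delegate": {<br />
    "isDefaultGateway": true<br />
  }<br />
}<br />
</pre><br />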
<br />
;Kubernetes services<br />
<br />
* Services are crucial for service discovery and distributing traffic to Pods<br />
* Services act as simple internal load balancers with VIPs<br />
** No access controls<br />
** No traffic controls<br />
* IPtables magically route to virtual IPs<br />
* Internally, Services are used as inter-Pod service discovery<br />
** Kube-DNS publishes DNS records (e.g., <code>nginx.default.svc.cluster.local</code>)<br />
* Services can be exposed in three different ways:<br />
*# ClusterIP<br />
*# LoadBalancer<br />
*# NodePort<br />
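<br />
For example, a minimal sketch of a NodePort Service (the names, labels, and port numbers are placeholder assumptions; <code>nodePort</code> must fall within the default 30000-32767 range):<br />
<pre><br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: nginx-nodeport<br />
spec:<br />
  type: NodePort<br />
  selector:<br />
    app: nginx<br />
  ports:<br />
  - port: 80<br />
    targetPort: 80<br />
    nodePort: 30080<br />
</pre><br />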
<br />
; kube-proxy<br />
* Each k8s node in the cluster runs a kube-proxy<br />
* Two modes: userspace and iptables<br />
** iptables is much more performant (userspace mode should no longer be used)<br />
* kube-proxy has the task of configuring iptables to expose each k8s service<br />
** iptables rules distribute traffic randomly across the endpoints<br />
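<br />
To see what kube-proxy has programmed (assuming it is running in iptables mode; <code>KUBE-SERVICES</code> is the top-level chain kube-proxy creates), inspect the NAT table on any node:<br />
 $ sudo iptables -t nat -L KUBE-SERVICES -n | head<br />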
<br />
===Network providers===<br />
<br />
In order for a CNI plugin to be considered a "[https://kubernetes.io/docs/concepts/cluster-administration/networking/ Network Provider]", it must provide (at the very least) the following:<br />
# All containers can communicate with all other containers without NAT<br />
# All nodes can communicate with all containers (and ''vice versa'') without NAT<br />
# The IP that a container sees itself as is the same IP that others see it as<br />
<br />
==Linux namespaces==<br />
<br />
* Control groups (cgroups)<br />
* Union File Systems<br />
<br />
==Kubernetes inbound node port requirements==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-align="center" bgcolor="#1188ee"<br />
!Protocol<br />
!Direction<br />
!Port range<br />
!Purpose<br />
!Used by<br />
!Notes<br />
|-<br />
|colspan="6" align="center" bgcolor="#eee" | '''Master node(s)'''<br />
|-<br />
| TCP || Inbound || 4149 || Default cAdvisor port used to query container metrics || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 6443<sup>*</sup> || Kubernetes API server || All<br />
|-<br />
| TCP || Inbound || 2379-2380 || etcd server client API || kube-apiserver, etcd<br />
|-<br />
| TCP || Inbound || 10250 || Kubelet API || Self, Control plane<br />
|-<br />
| TCP || Inbound || 10251 || kube-scheduler || Self<br />
|-<br />
| TCP || Inbound || 10252 || kube-controller-manager || Self<br />
|-<br />
| TCP || Inbound || 10255 || Read-only Kubelet API || ''(optional)'' || Security risk<br />
|-<br />
|colspan="6" align="center" bgcolor="#eee" | '''Worker node(s)'''<br />
|-<br />
| TCP || Inbound || 4149 || Default cAdvisor port used to query container metrics || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 10250 || Kubelet API || Self, Control plane<br />
|-<br />
| TCP || Inbound || 10255 || Read-only Kubelet API || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 30000-32767 || NodePort Services<sup>**</sup> || All<br />
|}<br />
</div><br />
<br clear="all"/><br />
<sup>**</sup> Default port range for NodePort Services.<br />
<br />
Any port numbers marked with <sup>*</sup> are overridable, so you will need to ensure any custom ports you provide are also open.<br />
<br />
Although the etcd ports are listed under master nodes, you can also host your own etcd cluster externally or on custom ports.<br />
<br />
The pod network plugin you use may also require certain ports to be open. Since this differs with each pod network plugin, see the documentation for your plugin about what port(s) it needs.<br />
<br />
==API versions==<br />
<br />
Below is a table showing which value to use for the <code>apiVersion</code> key for a given k8s primitive (note: all values are for k8s 1.8.0, unless otherwise specified):<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-align="center" bgcolor="#1188ee"<br />
!Primitive<br />
!apiVersion<br />
|-<br />
| Pod || v1<br />
|-<br />
| Deployment || apps/v1beta2<br />
|-<br />
| Service || v1<br />
|-<br />
| Job || batch/v1<br />
|-<br />
| Ingress || extensions/v1beta1<br />
|-<br />
| CronJob || batch/v1beta1<br />
|-<br />
| ConfigMap || v1<br />
|-<br />
| DaemonSet || apps/v1<br />
|-<br />
| ReplicaSet || apps/v1beta2<br />
|-<br />
| NetworkPolicy || networking.k8s.io/v1<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
You can get a list of all of the API versions supported by your k8s install with:<br />
$ kubectl api-versions<br />
<br />
==Troubleshooting==<br />
<br />
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns<br />
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}<br />
<br />
* If your container has previously crashed, you can access the previous container’s crash log with:<br />
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}<br />
<br />
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}<br />
<br />
==Miscellaneous commands==<br />
<br />
* Simple workflow (not a best practice; use manifest files {YAML} instead):<br />
$ kubectl run nginx --image=nginx:1.10.0<br />
$ kubectl expose deployment nginx --port 80 --type LoadBalancer<br />
$ kubectl get services # <- wait until public IP is assigned<br />
$ kubectl scale deployment nginx --replicas 3<br />
<br />
* Create an Nginx deployment with three replicas without using YAML:<br />
$ kubectl run nginx --image=nginx --replicas=3<br />
<br />
* Take a node out of service for maintenance:<br />
$ kubectl cordon k8s.worker1.local<br />
$ kubectl drain k8s.worker1.local --ignore-daemonsets<br />
<br />
* Return a given node to a service after cordoning and "draining" it (e.g., after a maintenance):<br />
$ kubectl uncordon k8s.worker1.local<br />
<br />
* Get a list of nodes in a format useful for scripting:<br />
$ kubectl get nodes -o jsonpath='{.items[*].metadata.name}'<br />
#~OR~<br />
$ kubectl get nodes -o go-template --template '<nowiki>{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get nodes -o json | jq -crM '.items[].metadata.name'<br />
#~OR~ (if using an older version of `jq`)<br />
$ kubectl get nodes -o json | jq '.items[].metadata.name' | tr -d '"'<br />
<br />
* Label a list of nodes:<br />
<pre><br />
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do<br />
  kubectl label nodes ${node} instancetype=ondemand;<br />
  kubectl label nodes ${node} "example.io/node-lifecycle"=od;<br />
done<br />
</pre><br />
<br />
* Delete a bunch of Pods in "Evicted" state:<br />
$ kubectl get pod -n develop | awk '/Evicted/{print $1}' | xargs kubectl delete pod -n develop<br />
#~OR~<br />
$ kubectl get po -a --all-namespaces -o json | \<br />
jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | <br />
"kubectl delete po \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c<br />
<br />
* Get a random node:<br />
$ NODES=($(kubectl get nodes -o json | jq -crM '.items[].metadata.name'))<br />
$ NUMNODES=${#NODES[@]}<br />
$ echo ${NODES[$[ $RANDOM % $NUMNODES ]]}<br />
<br />
* Get all recent events sorted by their timestamps:<br />
$ kubectl get events --sort-by='.metadata.creationTimestamp'<br />
<br />
* Get a list of all Pods in the default namespace sorted by Node:<br />
$ kubectl get po -o wide --sort-by=.spec.nodeName<br />
<br />
* Get the cluster IP for a service named "foo":<br />
$ kubectl get svc/foo -o jsonpath='{.spec.clusterIP}'<br />
<br />
* List all Services in a cluster and their node ports:<br />
$ kubectl get --all-namespaces svc -o json |\<br />
jq -r '.items[] | [.metadata.name,([.spec.ports[].nodePort | tostring ] | join("|"))] | @csv'<br />
<br />
* Print just the Pod names of those Pods with the label <code>app=nginx</code>:<br />
$ kubectl get --no-headers=true pods -l app=nginx -o custom-columns=:metadata.name<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o go-template --template '<nowiki>{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get --no-headers=true pods -l app=nginx -o name | awk -F "/" '{print $2}'<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o jsonpath='{.items[*].metadata.name}'<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o json | jq -crM '.items [] | .metadata.name'<br />
<br />
* Get a list of all container images used by the Pods in your default namespace:<br />
$ kubectl get pods -o go-template --template='<nowiki>{{range .items}}{{range .spec.containers}}{{.image}}{{"\n"}}{{end}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get pods -o go-template="<nowiki>{{range .items}}{{range .spec.containers}}{{.image}}|{{end}}{{end}}</nowiki>" | tr '|' '\n'<br />
<br />
* Get a list of Pods sorted by Node name:<br />
$ kubectl get po -o json | jq -r '.items | sort_by(.spec.nodeName)[] | [.spec.nodeName,.metadata.name] | @tsv'<br />
<br />
* Get status transitions of each Pod in the default namespace:<br />
$ export tpl='{range .items[*]}{"\n"}{@.metadata.name}{range @.status.conditions[*]}{"\t"}{@.type}={@.status}{end}{end}'<br />
$ kubectl get po -o jsonpath="${tpl}" && echo<br />
<br />
cheddar-cheese-d6d6587c7-4bgcz Initialized=True Ready=True PodScheduled=True<br />
echoserver-55f97d5bff-pdv65 Initialized=True Ready=True PodScheduled=True<br />
stilton-cheese-6d64cbc79-g7h4w Initialized=True Ready=True PodScheduled=True<br />
<br />
* Get a list of all Pods in status "Failed":<br />
$ kubectl get pods -o go-template='<nowiki>{{range .items}}{{if eq .status.phase "Failed"}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}</nowiki>'<br />
<br />
* Get all users in all namespaces:<br />
$ kubectl get rolebindings --all-namespaces -o go-template \<br />
--template='<nowiki>{{range .items}}{{println}}{{.metadata.namespace}}={{range .subjects}}{{if eq .kind "User"}}{{.name}} {{end}}{{end}}{{end}}</nowiki>'<br />
<br />
* Get the memory limit assigned to a container in a given Pod:<br />
<pre><br />
$ kubectl get pod example-pod-name -n default \<br />
-o jsonpath="{.spec.containers[*].resources.limits}" <br />
</pre><br />
<br />
* Get a Bash prompt of your current context and namespace:<br />
<pre><br />
NORMAL="\[\033[00m\]"<br />
BLUE="\[\033[01;34m\]"<br />
RED="\[\e[1;31m\]"<br />
YELLOW="\[\e[1;33m\]"<br />
GREEN="\[\e[1;32m\]"<br />
PS1_WORKDIR="\w"<br />
PS1_HOSTNAME="\h"<br />
PS1_USER="\u"<br />
<br />
__kube_ps1()<br />
{<br />
  CONTEXT=$(kubectl config current-context)<br />
  NAMESPACE=$(kubectl config view -o jsonpath="{.contexts[?(@.name==\"${CONTEXT}\")].context.namespace}")<br />
  if [ -z "$NAMESPACE" ]; then<br />
    NAMESPACE="default"<br />
  fi<br />
  if [ -n "$CONTEXT" ]; then<br />
    case "$CONTEXT" in<br />
      *prod*)<br />
        echo "${RED}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
        ;;<br />
      *test*)<br />
        echo "${YELLOW}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
        ;;<br />
      *)<br />
        echo "${GREEN}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
        ;;<br />
    esac<br />
  fi<br />
}<br />
<br />
export PROMPT_COMMAND='PS1="${GREEN}${PS1_USER}@${PS1_HOSTNAME}${NORMAL}:$(__kube_ps1)${BLUE}${PS1_WORKDIR}${NORMAL}\$ "'<br />
</pre><br />
<br />
===Client configuration===<br />
<br />
* Set up autocomplete in bash (the bash-completion package should be installed first):<br />
$ source <(kubectl completion bash)<br />
<br />
* View Kubernetes config:<br />
$ kubectl config view<br />
<br />
* View specific config items by JSON path:<br />
$ kubectl config view -o jsonpath='{.users[?(@.name == "k8s")].user.password}'<br />
<br />
* Set credentials for foo.kubernetes.com:<br />
$ kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword<br />
<br />
===Viewing / finding resources===<br />
<br />
* List all services in the namespace:<br />
$ kubectl get services<br />
<br />
* List all pods in all namespaces in wide format:<br />
$ kubectl get pods -o wide --all-namespaces<br />
<br />
* List all pods in JSON (or YAML) format:<br />
$ kubectl get pods -o json<br />
<br />
* Describe resource details (node, pod, svc):<br />
$ kubectl describe nodes my-node<br />
<br />
* List services sorted by name:<br />
$ kubectl get services --sort-by=.metadata.name<br />
<br />
* List pods sorted by restart count:<br />
$ kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'<br />
<br />
* Rolling update pods for frontend-v1:<br />
$ kubectl rolling-update frontend-v1 -f frontend-v2.json<br />
<br />
* Scale a ReplicaSet named "foo" to 3:<br />
$ kubectl scale --replicas=3 rs/foo<br />
<br />
* Scale a resource specified in "foo.yaml" to 3:<br />
$ kubectl scale --replicas=3 -f foo.yaml<br />
<br />
* Execute a command in every pod / replica:<br />
$ for i in 0 1; do kubectl exec foo-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done<br />
<br />
* Get a list of ''all'' container IDs running in ''all'' Pods in ''all'' namespaces for a given Kubernetes cluster:<br />
<pre><br />
$ kubectl get pods --all-namespaces \<br />
-o jsonpath='{range .items[*]}{"pod: "}{.metadata.name}{"\n"}{range .status.containerStatuses[*]}{"\tid: "}{.containerID}{"\n\timage: "}{.image}{"\n"}{end}'<br />
<br />
# Example output:<br />
pod: cert-manager-848f547974-8m2k6<br />
  id: containerd://358415173310a528a36ca2c19cdc3319f8fd96634c09957977767333b104d387<br />
  image: quay.io/jetstack/cert-manager-controller:v1.5.3<br />
</pre><br />
<br />
===Manage resources===<br />
<br />
* Get documentation for pod or service:<br />
$ kubectl explain pods,svc<br />
<br />
* Create resource(s) like pods, services or DaemonSets:<br />
$ kubectl create -f ./my-manifest.yaml<br />
<br />
* Apply a configuration to a resource:<br />
$ kubectl apply -f ./my-manifest.yaml<br />
<br />
* Start a single instance of Nginx:<br />
$ kubectl run nginx --image=nginx<br />
<br />
* Create a secret with several keys:<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
  name: mysecret<br />
type: Opaque<br />
data:<br />
  password: $(echo -n "s33msi4" | base64)<br />
  username: $(echo -n "jane" | base64)<br />
EOF<br />
</pre><br />
<br />
* Delete a resource:<br />
$ kubectl delete -f ./my-manifest.yaml<br />
<br />
===Monitoring and logging===<br />
<br />
* Deploy Heapster from Github repository:<br />
$ kubectl create -f deploy/kube-config/standalone/<br />
<br />
* Show metrics for nodes:<br />
$ kubectl top node<br />
<br />
* Show metrics for pods:<br />
$ kubectl top pod<br />
<br />
* Show metrics for a given pod and its containers:<br />
$ kubectl top pod pod_name --containers<br />
<br />
* Dump pod logs (STDOUT):<br />
$ kubectl logs pod_name<br />
<br />
* Stream pod container logs (STDOUT, multi-container case):<br />
$ kubectl logs -f pod_name -c my-container<br />
<br />
<!-- TODO: https://gist.github.com/so0k/42313dbb3b547a0f51a547bb968696ba --><br />
<br />
===Run tcpdump on containers running in Pods===<br />
<br />
* Find which node/host/IP the Pod in question is running on and also get the container ID:<br />
<pre><br />
$ kubectl describe pod busybox | grep -E "^Node:|Container ID: "<br />
Node: worker2/10.39.32.122<br />
Container ID: docker://a42cd31e62a905739b52d36b30eca5521fd250ac54280b43423027426b031a03<br />
<br />
#~OR~<br />
<br />
$ containerID=$(kubectl get po busybox -o jsonpath='{.status.containerStatuses[*].containerID}' | sed -e 's|docker://||g')<br />
$ hostIP=$(kubectl get po busybox -o jsonpath='{.status.hostIP}')<br />
</pre><br />
<br />
Log into the node/host running the Pod in question and then perform the following steps.<br />
<br />
* Get the virtual interface ID (note it will depend on which Container Network Interface you are using {e.g., veth, cali, etc.}):<br />
<pre><br />
$ docker exec a42cd31e62a905739b52d36b30eca5521fd250ac54280b43423027426b031a03 /bin/sh -c 'cat /sys/class/net/eth0/iflink'<br />
12<br />
<br />
# List all non-virtual interfaces:<br />
$ for iface in $(find /sys/class/net/ -type l ! -lname '*/devices/virtual/net/*' -printf '%f '); do echo "$iface is not virtual"; done<br />
ens192 is not virtual<br />
<br />
# Check if we are using veth or cali or something else:<br />
$ ls -1 /sys/class/net/ | awk '!/docker|lo|ens/{print substr($0,0,4);exit}'<br />
cali<br />
<br />
$ for i in /sys/class/net/veth*/ifindex; do grep -l 12 $i; done<br />
#~OR~<br />
$ for i in /sys/class/net/cali*/ifindex; do grep -l 12 $i; done<br />
/sys/class/net/cali12d4a061371/ifindex<br />
#~OR~<br />
$ echo $(find /sys/class/net/ -type l -lname '*/devices/virtual/net/*' -exec grep -l 12 {}/ifindex \;) | awk -F'/' '{print $5}'<br />
cali12d4a061371<br />
#~OR~<br />
$ ip link | grep ^12<br />
12: cali12d4a061371@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default<br />
#~OR~<br />
$ ip link | awk '/^12/{print $2}' | awk -F'@' '{print $1}'<br />
cali12d4a061371<br />
</pre><br />
<br />
* Now run [[tcpdump]] on this virtual interface (note: make sure you are running tcpdump on the ''same'' host as the Pod is running on):<br />
$ sudo tcpdump -i cali12d4a061371<br />
<br />
; Self-signed certificates<br />
<br />
If you are using the latest version of <code>kubectl</code> and are running it against a k8s cluster built with a self-signed cert, you can get around any "x509" errors with:<br />
$ export GODEBUG=x509ignoreCN=0<br />
<br />
===API resources===<br />
<br />
* Get a list of all the resource types and their latest supported version:<br />
<pre><br />
$ time for kind in $(kubectl api-resources | tail -n +2 | awk '{print $1}'); do<br />
    kubectl explain ${kind};<br />
done | grep -E "^KIND:|^VERSION:"<br />
<br />
KIND: Binding<br />
VERSION: v1<br />
KIND: ComponentStatus<br />
VERSION: v1<br />
KIND: ConfigMap<br />
VERSION: v1<br />
...<br />
<br />
real 1m20.014s<br />
user 0m52.732s<br />
sys 0m17.751s<br />
</pre><br />
<br />
* Note: if you just want a version for a single/given kind:<br />
<pre><br />
$ kubectl explain deploy | head -2<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
</pre><br />
<br />
===kubectl-neat===<br />
<br />
: See: https://github.com/itaysk/kubectl-neat<br />
: See: [[jq]]<br />
<br />
* To easily copy a certificate secret from one namespace to another namespace run:<br />
<pre><br />
$ SOURCE_NAMESPACE=<update-me><br />
$ DESTINATION_NAMESPACE=<update-me><br />
$ kubectl -n ${SOURCE_NAMESPACE} get secret kafka-client-credentials -o json |\<br />
kubectl neat |\<br />
jq 'del(.metadata["namespace"])' |\<br />
kubectl apply -n ${DESTINATION_NAMESPACE} -f -<br />
</pre><br />
<br />
===Get CPU/memory for each node===<br />
<br />
<pre><br />
for node in $(kubectl get nodes -o=jsonpath='{.items[*].metadata.name}'); do<br />
  echo "NODE: ${node}"; kubectl describe node ${node} | grep -E '^ cpu |^ memory ';<br />
done<br />
</pre><br />
<br />
===Get vCPU capacity===<br />
<br />
<pre><br />
$ kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"} \<br />
{.status.capacity.cpu}{\"\n\"}{end}"<br />
</pre><br />
<br />
==Miscellaneous examples==<br />
<br />
* Create a Namespace:<br />
<pre><br />
kind: Namespace<br />
apiVersion: v1<br />
metadata:<br />
  name: my-namespace<br />
</pre><br />
<br />
; Testing the load balancing capabilities of a Service<br />
<br />
* Create a Deployment with two replicas of Nginx (i.e., 2 x Pods with identical containers, configuration, etc.):<br />
<pre><br />
$ cat << EOF >nginx-deploy.yml<br />
kind: Deployment<br />
apiVersion: apps/v1<br />
metadata:<br />
  name: nginx-deploy<br />
spec:<br />
  replicas: 2<br />
  strategy:<br />
    rollingUpdate:<br />
      maxSurge: 1<br />
      maxUnavailable: 0<br />
    type: RollingUpdate<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f nginx-deploy.yml<br />
$ kubectl get deploy<br />
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE<br />
nginx-deploy   2         2         2            2           1h<br />
$ kubectl get po<br />
NAME                           READY     STATUS    RESTARTS   AGE<br />
nginx-deploy-8d68fb6cc-bspt8   1/1       Running   1          1h<br />
nginx-deploy-8d68fb6cc-qdvhg   1/1       Running   1          1h<br />
<br />
* Create a Service:<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
name: nginx-svc<br />
spec:<br />
ports:<br />
- port: 8080<br />
targetPort: 80<br />
protocol: TCP<br />
selector:<br />
app: nginx<br />
EOF<br />
<br />
$ kubectl get svc/nginx-svc<br />
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE<br />
nginx-svc   ClusterIP   10.101.133.100   &lt;none&gt;        8080/TCP   1h<br />
</pre><br />
<br />
* Overwrite the default index.html file (note: this is ''not'' persistent. The original default index.html file will be restored whenever the Deployment replaces a Pod, e.g., if a Pod fails or if you modify the Deployment to upgrade Nginx. This is just for demonstration purposes):<br />
$ kubectl exec -it nginx-deploy-8d68fb6cc-bspt8 -- sh -c 'echo "pod-01" > /usr/share/nginx/html/index.html'<br />
$ kubectl exec -it nginx-deploy-8d68fb6cc-qdvhg -- sh -c 'echo "pod-02" > /usr/share/nginx/html/index.html'<br />
<br />
* Get the HTTP status code and server value from the header of a request to the Service endpoint:<br />
$ curl -Is 10.101.133.100:8080 | grep -E '^HTTP|Server'<br />
HTTP/1.1 200 OK<br />
Server: nginx/1.7.9 # <- This is the version of Nginx we defined in the Deployment above<br />
<br />
* Perform a GET request on the Service endpoint (ClusterIP+Port):<br />
<pre><br />
$ for i in $(seq 1 10); do curl -s 10.101.133.100:8080; done<br />
pod-02<br />
pod-01<br />
pod-02<br />
pod-02<br />
pod-02<br />
pod-01<br />
pod-02<br />
pod-02<br />
pod-02<br />
pod-02<br />
</pre><br />
Sometimes <code>pod-01</code> responded; sometimes <code>pod-02</code> responded.<br />
<br />
* Perform a GET on the Service endpoint 10,000 times and sum up which Pod responded for each request:<br />
<pre><br />
$ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c<br />
5018 pod-01 # <- number of times pod-01 responded to the request<br />
4982 pod-02 # <- number of times pod-02 responded to the request<br />
<br />
real 1m0.639s<br />
user 0m29.808s<br />
sys 0m11.692s<br />
</pre><br />
<br />
$ awk 'BEGIN{print 5018/(5018+4982);}'<br />
0.5018<br />
$ awk 'BEGIN{print 4982/(5018+4982);}'<br />
0.4982<br />
<br />
So, our Service is "load balancing" our two Nginx Pods in a roughly 50/50 fashion.<br />
<br />
In order to double-check that the Service is randomly selecting a Pod to serve the GET request, let's scale our Deployment from 2 to 3 replicas:<br />
$ kubectl scale deploy/nginx-deploy --replicas=3<br />
<br />
<pre><br />
$ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c<br />
3392 pod-01<br />
3335 pod-02<br />
3273 pod-03<br />
<br />
real 0m59.537s<br />
user 0m25.932s<br />
sys 0m9.656s<br />
</pre><br />
$ awk 'BEGIN{print 3392/(3392+3335+3273);}'<br />
0.3392<br />
$ awk 'BEGIN{print 3335/(3392+3335+3273);}'<br />
0.3335<br />
$ awk 'BEGIN{print 3273/(3392+3335+3273);}'<br />
0.3273<br />
<br />
Sure enough. Each of the 3 Pods is serving the GET request roughly 33% of the time.<br />
<br />
; Query selections<br />
<br />
* Create a "query selection" file:<br />
<pre><br />
$ cat << EOF >cluster-nodes-health.txt<br />
Name Kernel InternalIP MemoryPressure DiskPressure PIDPressure Ready<br />
.metadata.name .status.nodeInfo.kernelVersion .status.addresses[0].address .status.conditions[0].status .status.conditions[1].status .status.conditions[2].status .status.conditions[3].status<br />
EOF<br />
</pre><br />
<br />
* Use the above "query selection" file:<br />
<pre><br />
$ kubectl get nodes -o custom-columns-file=cluster-nodes-health.txt<br />
Name           Kernel           InternalIP     MemoryPressure   DiskPressure   PIDPressure   Ready<br />
10.10.10.152   5.4.0-1084-aws   10.10.10.152   False            False          False         False<br />
10.10.11.12    5.4.0-1092-aws   10.10.11.12    False            False          False         False<br />
10.10.12.22    5.4.0-1039-aws   10.10.12.22    False            False          False         False<br />
</pre><br />
<br />
==Example YAML files==<br />
<br />
* Basic Pod using busybox:<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: busybox<br />
  namespace: default<br />
spec:<br />
  containers:<br />
  - name: busybox<br />
    image: busybox<br />
    command:<br />
    - sleep<br />
    - "3600"<br />
    imagePullPolicy: IfNotPresent<br />
  restartPolicy: Always<br />
</pre><br />
<br />
* Basic Pod using busybox, which also prints out environment variables (including the ones defined in the YAML):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: env-dump<br />
spec:<br />
  containers:<br />
  - name: busybox<br />
    image: busybox<br />
    command:<br />
    - env<br />
    env:<br />
    - name: USERNAME<br />
      value: "Christoph"<br />
    - name: PASSWORD<br />
      value: "mypassword"<br />
$ kubectl logs env-dump<br />
...<br />
PASSWORD=mypassword<br />
USERNAME=Christoph<br />
...<br />
<br />
* Basic Pod using alpine:<br />
<pre><br />
kind: Pod<br />
apiVersion: v1<br />
metadata:<br />
  name: alpine<br />
  namespace: default<br />
spec:<br />
  containers:<br />
  - name: alpine<br />
    image: alpine<br />
    command:<br />
    - /bin/sh<br />
    - "-c"<br />
    - "sleep 60m"<br />
    imagePullPolicy: IfNotPresent<br />
  restartPolicy: Always<br />
</pre><br />
<br />
* Basic Pod running Nginx:<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx-pod<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx<br />
  restartPolicy: Always<br />
</pre><br />
<br />
* Create a Job that calculates pi up to 2000 decimal places:<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
  name: pi<br />
spec:<br />
  template:<br />
    spec:<br />
      containers:<br />
      - name: pi<br />
        image: perl<br />
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]<br />
      restartPolicy: Never<br />
  backoffLimit: 4<br />
</pre><br />
<br />
* Create a Deployment with two replicas of Nginx running:<br />
<pre><br />
apiVersion: apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment<br />
spec:<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  replicas: 2<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.9.1<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />
<br />
* Create a basic Persistent Volume, which uses NFS:<br />
<pre><br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
  name: mypv<br />
spec:<br />
  capacity:<br />
    storage: 1Gi<br />
  volumeMode: Filesystem<br />
  accessModes:<br />
  - ReadWriteMany<br />
  persistentVolumeReclaimPolicy: Recycle<br />
  nfs:<br />
    path: /var/nfs/general<br />
    server: 172.31.119.58<br />
    readOnly: false<br />
</pre><br />
<br />
* Create a Persistent Volume Claim against the above PV:<br />
<pre><br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
  name: nfs-pvc<br />
spec:<br />
  accessModes:<br />
  - ReadWriteMany<br />
  resources:<br />
    requests:<br />
      storage: 1Gi<br />
</pre><br />
<br />
* Create a Pod using a custom scheduler (i.e., not the default one):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: my-custom-scheduler<br />
  annotations:<br />
    scheduledBy: custom-scheduler<br />
spec:<br />
  schedulerName: custom-scheduler<br />
  containers:<br />
  - name: pod-container<br />
    image: k8s.gcr.io/pause:2.0<br />
</pre><br />
<br />
==Install k8s cluster manually in the Cloud==<br />
<br />
''Note: For this example, I will be using AWS and I will assume you already have 3 x EC2 instances running CentOS 7 in your AWS account. I will install Kubernetes 1.10.x.''<br />
<br />
* Disable services not supported (yet) by Kubernetes:<br />
$ sudo setenforce 0 # NOTE: Not persistent!<br />
#~OR~ Make persistent:<br />
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config<br />
<br />
$ sudo systemctl stop firewalld<br />
$ sudo systemctl mask firewalld<br />
$ sudo yum install -y iptables-services<br />
<br />
* Disable swap:<br />
$ sudo swapoff -a # NOTE: Not persistent!<br />
#~OR~ Make persistent:<br />
$ sudo vi /etc/fstab # comment out swap line<br />
$ sudo mount -a<br />
<br />
* Make sure routed traffic does not bypass iptables:<br />
$ cat << EOF | sudo tee /etc/sysctl.d/k8s.conf<br />
net.bridge.bridge-nf-call-ip6tables = 1<br />
net.bridge.bridge-nf-call-iptables = 1<br />
EOF<br />
$ sudo sysctl --system<br />
<br />
* Install <code>kubelet</code>, <code>kubeadm</code>, and <code>kubectl</code> on '''''all''''' nodes in your cluster (both Master and Worker nodes):<br />
<pre><br />
$ cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo<br />
[kubernetes]<br />
name=Kubernetes<br />
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch<br />
enabled=1<br />
gpgcheck=1<br />
repo_gpgcheck=1<br />
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg<br />
EOF<br />
</pre><br />
<br />
$ sudo yum install -y kubelet kubeadm kubectl<br />
$ sudo systemctl enable kubelet && sudo systemctl start kubelet<br />
<br />
* Configure cgroup driver used by kubelet on '''''all''''' nodes (both Master and Worker nodes):<br />
<br />
The cgroup driver used by the kubelet must be the same as the one used by Docker. Verify that the two match:<br />
<br />
$ docker info | grep -i cgroup<br />
$ grep -i cgroup /etc/systemd/system/kubelet.service.d/10-kubeadm.conf<br />
<br />
If the Docker cgroup driver and the kubelet config do not match, change the kubelet config to match the Docker cgroup driver. The flag you need to change is <code>--cgroup-driver</code>. If it is already set, you can update like so:<br />
<br />
$ sudo sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf<br />
<br />
Otherwise, you will need to open the systemd file and add the flag to an existing environment line.<br />
<br />
Then restart kubelet:<br />
<br />
$ sudo systemctl daemon-reload<br />
$ sudo systemctl restart kubelet<br />
<br />
* Run <code>kubeadm</code> on Master node:<br />
<br />
K8s requires a pod network to function. We are going to use Flannel, so we need to pass the matching pod-network CIDR flag to <code>kubeadm init</code> so k8s knows how to configure itself:<br />
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16<br />
<br />
Note: This command might take a fair amount of time to complete.<br />
<br />
Once it has completed, make note of the "<code>join</code>" command output by <code>kubeadm init</code> that looks something like the following ('''DO NOT RUN THE FOLLOWING COMMAND YET!'''):<br />
# kubeadm join --token --discovery-token-ca-cert-hash sha256:<br />
<br />
You will run that command on the other non-master nodes (aka the "Worker Nodes") to allow them to join the cluster. However, '''do not''' run that command on the worker nodes until you have completed all of the following steps.<br />
<br />
* Create a directory to hold the kubectl configuration:<br />
$ mkdir -p $HOME/.kube<br />
<br />
* Copy the configuration files to a location usable by the local user:<br />
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config <br />
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config<br />
<br />
* In order for your pods to communicate with one another, you will need to install pod networking. We are going to use Flannel for our Container Network Interface (CNI) because it is easy to install and reliable. <br />
$ kubectl apply -f <nowiki>https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</nowiki><br />
$ kubectl apply -f <nowiki>https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml</nowiki><br />
<br />
* Make sure everything is coming up properly:<br />
$ kubectl get pods --all-namespaces --watch<br />
Once the <code>kube-dns-xxxx</code> containers are up (i.e., in Status "Running"), your cluster is ready to accept worker nodes.<br />
<br />
* On each of the Worker nodes, run the <code>sudo kubeadm join ...</code> command that <code>kubeadm init</code> created for you (see above).<br />
<br />
* On the Master Node, run the following command:<br />
$ kubectl get nodes --watch<br />
Once the Status of the Worker Nodes returns "Ready", your k8s cluster is ready to use.<br />
<br />
* Example output of successful Kubernetes cluster:<br />
<pre><br />
$ kubectl get nodes<br />
NAME      STATUS    ROLES     AGE       VERSION<br />
k8s-01    Ready     master    13m       v1.10.1<br />
k8s-02    Ready     <none>    12m       v1.10.1<br />
k8s-03    Ready     <none>    12m       v1.10.1<br />
</pre><br />
<br />
That's it! You are now ready to start deploying Pods, Deployments, Services, etc. in your Kubernetes cluster!<br />
<br />
==Bash completion==<br />
''Note: The following only works on newer versions of <code>kubectl</code>. I have tested that this works on version 1.9.1.''<br />
<br />
Add the following line to your <code>~/.bashrc</code> file:<br />
source <(kubectl completion bash)<br />
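<br />
If you also use a short alias for <code>kubectl</code>, completion can be wired up for the alias as well (the alias name <code>k</code> is just a common convention; the <code>complete</code> line must come after the completion script has been sourced):<br />
<pre><br />
alias k=kubectl<br />
complete -o default -F __start_kubectl k<br />
</pre><br />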
<br />
==Kubectl plugins==<br />
<br />
SEE: [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ Extend kubectl with plugins] for details.<br />
<br />
: FEATURE STATE: Kubernetes v1.11 (alpha)<br />
: FEATURE STATE: Kubernetes v1.15 (stable)<br />
<br />
This section shows you how to install and write extensions for <code>kubectl</code>. Usually called "plugins" or "binary extensions", this feature allows you to extend the default set of commands available in <code>kubectl</code> by adding new sub-commands to perform new tasks and extend the set of features available in the main distribution of <code>kubectl</code>.<br />
<br />
Get code [https://github.com/kubernetes/kubernetes/tree/master/pkg/kubectl/plugins/examples from here].<br />
<br />
<pre><br />
.kube/<br />
└── plugins<br />
└── aging<br />
├── aging.rb<br />
└── plugin.yaml<br />
</pre><br />
<br />
$ chmod 0700 .kube/plugins/aging/aging.rb<br />
<br />
* See options:<br />
<pre><br />
$ kubectl plugin aging --help<br />
Aging shows pods from the current namespace by age.<br />
<br />
Usage:<br />
kubectl plugin aging [flags] [options]<br />
</pre><br />
<br />
* Usage:<br />
<pre><br />
$ kubectl plugin aging<br />
The Magnificent Aging Plugin.<br />
<br />
nginx-deployment-67594d6bf6-5t8m9: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
<br />
nginx-deployment-67594d6bf6-6kw9j: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
<br />
nginx-deployment-67594d6bf6-d8dwt: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
</pre><br />
<br />
==Local Kubernetes==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="6" bgcolor="#EFEFEF" | '''Local Kubernetes Comparisons'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Feature<br />
!kind<br />
!k3d<br />
!minikube<br />
!Docker Desktop<br />
!Rancher Desktop<br />
|- <br />
| Free || yes || yes || yes || Personal/Small Business* || yes<br />
|--bgcolor="#eeeeee"<br />
| Install || easy || easy || easy || easy || medium (you may encounter odd scenarios)<br />
|-<br />
| Ease of Use || medium || medium || medium || easy || easy<br />
|--bgcolor="#eeeeee"<br />
| Stability || stable || stable || stable || stable || stable<br />
|-<br />
| Cross-platform || yes || yes || yes || yes || yes<br />
|--bgcolor="#eeeeee"<br />
| CI Usage || yes || yes || yes || no || no<br />
|-<br />
| Multiple clusters || yes || yes || yes || no || no<br />
|--bgcolor="#eeeeee"<br />
| Podman support || yes || yes || yes || no || no<br />
|-<br />
| Host volumes mount support || yes || yes || yes (with some performance limitations) || yes || yes (only pre-defined paths)<br />
|--bgcolor="#eeeeee"<br />
| Kubernetes service port-forwarding/mapping || yes || yes || yes || yes || yes<br />
|-<br />
| Pull-through Docker mirror/proxy || yes || yes || no || yes (can reference locally available images) || yes (can reference locally available images)<br />
|--bgcolor="#eeeeee"<br />
| Custom CNI || yes (ex: calico) || yes (ex: flannel) || yes (ex: calico) || no || no<br />
|-<br />
| Feature Gates || yes || yes || yes || yes (but not natively; requires hacky setup) || yes (but not natively; requires hacky setup)<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
[https://bmiguel-teixeira.medium.com/local-kubernetes-the-one-above-all-3aedbeb5f3f6 Source]<br />
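<br />
For a feel of the workflow differences, spinning up a throwaway cluster with the three CLI-based tools looks roughly like this (the cluster/profile name "dev" is arbitrary):<br />
<pre><br />
$ kind create cluster --name dev<br />
$ k3d cluster create dev<br />
$ minikube start -p dev<br />
</pre><br />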
<br />
==See also==<br />
* [[Kubernetes/the-hard-way|Kubernetes the Hard Way]]<br />
* [[Kubernetes/GKE|Google Kubernetes Engine]] (GKE)<br />
* [[Kubernetes/AWS|Kubernetes on AWS]] (EKS)<br />
* [[Kubeless]]<br />
* [[Helm]]<br />
<br />
==External links==<br />
* [http://kubernetes.io/ Official website]<br />
* [https://github.com/kubernetes/kubernetes Kubernetes code] &mdash; via GitHub<br />
===Playgrounds===<br />
* [https://www.katacoda.com/courses/kubernetes/playground Kubernetes Playground]<br />
* [https://labs.play-with-k8s.com Play with k8s]<br />
===Tools===<br />
* [https://github.com/kubernetes/minikube minikube] &mdash; Run Kubernetes locally<br />
* [https://kind.sigs.k8s.io/ kind] &mdash; '''K'''ubernetes '''IN''' '''D'''ocker (local clusters for testing Kubernetes)<br />
* [https://github.com/kubernetes/kops kops] &mdash; Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management<br />
* [https://kubernetes-incubator.github.io/kube-aws kube-aws] &mdash; a command-line tool to create/update/destroy Kubernetes clusters on AWS<br />
* [https://github.com/kubernetes-incubator/kubespray kubespray] &mdash; Deploy a production ready kubernetes cluster<br />
* [https://rook.io/ Rook.io] &mdash; File, Block, and Object Storage Services for your Cloud-Native Environments<br />
===Resources===<br />
* [https://kubernetes.io/docs/getting-started-guides/scratch/ Creating a Custom Cluster from Scratch]<br />
* [https://github.com/kelseyhightower/kubernetes-the-hard-way Kubernetes The Hard Way]<br />
* [http://k8sport.org/ K8sPort]<br />
* [https://k8s.af/ Kubernetes Failure Stories]<br />
<br />
===Training===<br />
* [https://kubernetes.io/training/ Official Kubernetes Training Website]<br />
** Kubernetes and Cloud Native Associate (KCNA)<br />
** Certified Kubernetes Application Developer (CKAD)<br />
** Certified Kubernetes Administrator (CKA)<br />
** Certified Kubernetes Security Specialist (CKS) [note: Candidates for CKS must hold a current Certified Kubernetes Administrator (CKA) certification to demonstrate they possess sufficient Kubernetes expertise before sitting for the CKS.]<br />
* [https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals Kubernetes Fundamentals] (LFS258)<br />
** ''[https://www.cncf.io/certification/expert/ Certified Kubernetes Administrator]'' (CKA) certification.<br />
* [https://killer.sh/ CKS / CKA / CKAD Simulator]<br />
* [https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hacked/ 11 Ways (Not) to Get Hacked]<br />
<br />
===Blog posts===<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727 Understanding kubernetes networking: pods] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-services-f0cb48e4cc82 Understanding kubernetes networking: services] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-ingress-1bc341c84078 Understanding kubernetes networking: ingress] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-68d061f7ab5b Kubernetes ConfigMaps and Secrets - Part 1] &mdash; by Sandeep Dinesh, 2017-07-13<br />
* [https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-part-2-3dc37111f0dc Kubernetes ConfigMaps and Secrets - Part 2] &mdash; by Sandeep Dinesh, 2017-08-08<br />
* [https://abhishek-tiwari.com/10-open-source-tools-for-highly-effective-kubernetes-sre-and-ops-teams/ 10 open-source Kubernetes tools for highly effective SRE and Ops Teams]<br />
* [https://www.ianlewis.org/en/tag/kubernetes Series of blog posts about k8s] &mdash; by Ian Lewis<br />
* [https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0 Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?] &mdash; by Sandeep Dinesh, 2018-03-11<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:DevOps]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Systemd&diff=8268Systemd2023-04-11T00:06:53Z<p>Christoph: /* Example usage */</p>
<hr />
<div>'''systemd''' is a suite of system management daemons, libraries, and utilities designed as a central management and configuration platform for the Linux operating system.<br />
<br />
==Example usage==<br />
<br />
* Check if you are running "init" or "systemd":<br />
$ cat /proc/1/comm<br />
systemd<br />
<br />
* Restart network:<br />
$ systemctl restart network.service # note: restarting "network.target" itself has no effect, as targets only group other units<br />
<br />
* Stop and mask the firewalld service (see: [[CentOS#Iptables_vs._firewalld|iptables vs. firewalld]]):<br />
<br />
$ systemctl stop firewalld<br />
$ systemctl mask firewalld<br />
<br />
* Get default target:<br />
$ systemctl get-default # note: usually will be "graphical.target" ("runlevel 5")<br />
* Set default target:<br />
$ systemctl set-default multi-user.target # ("runlevel 3")<br />
* Change to a new target:<br />
$ systemctl isolate multi-user.target<br />
* Switch to the rescue environment ("runlevel 1"):<br />
$ systemctl rescue<br />
* Switch to default target:<br />
$ systemctl default<br />
* Reboot server:<br />
$ systemctl isolate reboot.target<br />
* Power off server:<br />
$ systemctl poweroff<br />
#~or~<br />
$ systemctl isolate shutdown.target<br />
<br />
* Miscellaneous:<br />
$ systemctl list-units<br />
$ systemctl list-units -t service<br />
$ systemctl list-units | grep .service<br />
$ systemctl list-units -t target<br />
$ systemctl list-unit-files<br />
$ systemctl list-unit-files -t target<br />
$ systemctl list-dependencies multi-user.target<br />
$ systemctl [status|stop|enable|disable|restart] ssh.service<br />
$ systemctl is-enabled ssh.service<br />
$ systemctl edit --full docker.service<br />
$ systemctl cat ssh.service<br />
$ systemctl show --property=ExecStart docker.service<br />
$ systemctl is-active docker <br />
active<br />
$ systemctl [reboot|poweroff|suspend]<br />
<br />
* List all loaded units:<br />
$ systemctl list-units -all | grep loaded | awk '{print $1;}'<br />
* List all enabled units:<br />
$ systemctl list-unit-files | grep enabled | awk '{print $1;}'<br />
* List all loaded services:<br />
$ systemctl list-units -all | grep service | grep loaded | awk '{print $1;}'<br />
* List all enabled services:<br />
$ systemctl list-unit-files | grep service | grep enabled | awk '{print $1;}' > enabled.txt<br />
* Find a list of services that are loaded but not enabled:<br />
$ systemctl list-units -all | grep service | grep loaded | awk '{print $1;}' > loaded.txt<br />
$ systemctl list-unit-files | grep service | grep enabled | awk '{print $1;}' > enabled.txt<br />
$ diff -y loaded.txt enabled.txt<br />
# Check for missing ones:<br />
$ diff -y loaded.txt enabled.txt | grep '<'<br />
<br />
* List failed services:<br />
$ systemctl --failed<br />
  UNIT              LOAD   ACTIVE SUB    DESCRIPTION<br />
● pollinate.service loaded failed failed Seed the pseudo random number generator on first boot<br />
● vboxadd.service   loaded failed failed LSB: VirtualBox Linux Additions kernel modules<br />
LOAD = Reflects whether the unit definition was properly loaded.<br />
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.<br />
SUB = The low-level unit activation state, values depend on unit type.<br />
<br />
* cgroup tree<br />
<pre><br />
$ systemd-cgls<br />
├─1 /sbin/init<br />
├─system.slice<br />
│ ├─dbus.service<br />
│ │ └─776 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation<br />
│ ├─cron.service<br />
│ │ └─692 /usr/sbin/cron -f<br />
...<br />
</pre><br />
<br />
* ps with cgroups:<br />
$ alias psc='ps xawf -eo pid,user,cgroup,args'<br />
<br />
; Edit or create a service<br />
<br />
<pre><br />
$ systemctl edit --force --full my-new.service<br />
[Unit]<br />
Description=Description of my service<br />
After=network-online.target<br />
<br />
[Service]<br />
Type=idle<br />
User=bob<br />
WorkingDirectory=/path/to<br />
ExecStart=/path/to/version1<br />
# Alternatively (only one ExecStart= may be active for this service type):<br />
#ExecStart=/path/to/version2<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</pre><br />
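<br />
After saving the unit file, it would typically be reloaded, enabled, and started along these lines (<code>enable --now</code> requires a reasonably recent systemd):<br />
<pre><br />
$ sudo systemctl daemon-reload<br />
$ sudo systemctl enable --now my-new.service<br />
$ systemctl status my-new.service<br />
</pre><br />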
<br />
==Related commands==<br />
===systemd-analyze===<br />
<br />
* Analyze system boot-up performance<br />
$ systemd-analyze<br />
Startup finished in 5.223s (kernel) + 7.781s (userspace) = 13.004s<br />
<br />
# ~OR~<br />
<br />
$ systemd-analyze # AWS EC2 instance (t2-micro):<br />
Startup finished in 1.588s (kernel) + 3.100s (initrd) + 27.516s (userspace) = 32.206s<br />
multi-user.target reached after 11.735s in userspace<br />
<br />
# ~OR~<br />
<br />
$ systemd-analyze # System76 laptop running Ubuntu 18.04:<br />
Startup finished in 3.993s (firmware) + 29.599s (loader) + 5.337s (kernel) + 12.725s (userspace) = 51.655s<br />
graphical.target reached after 12.717s in userspace<br />
<br />
* Plot all dependencies of any unit whose name starts with "avahi-daemon":<br />
$ systemd-analyze dot 'avahi-daemon.*' | dot -Tsvg > avahi.svg<br />
$ eog avahi.svg<br />
<br />
* Plot the dependencies between all known target units:<br />
<br />
$ systemd-analyze dot --to-pattern='*.target' --from-pattern='*.target' | dot -Tsvg > targets.svg<br />
$ eog targets.svg<br />
<br />
===journalctl===<br />
''Note: combine with syslog-ng for backward compatibility.''<br />
<br />
$ journalctl<br />
$ journalctl | grep -Ei 'error|fail'<br />
$ journalctl -b # show only logs from this boot<br />
$ journalctl -b -1 # show only logs from previous boot<br />
$ journalctl -u ssh # show only logs for a given unit<br />
$ journalctl -f # follow (somewhat analogous to `tail -f /var/log/messages`)<br />
$ journalctl -f -u ssh.service # show only logs for ssh unit and follow<br />
<br />
* Show logs for a given date/time period:<br />
$ journalctl -u ssh --since="2014-12-06 23:35:00"<br />
$ journalctl --since "2014-12-06" --until "2014-12-07 03:00"<br />
$ journalctl --since yesterday<br />
$ journalctl --since 03:00 --until "1 hour ago"<br />
$ journalctl -u ssh.service --since="5 minutes ago"<br />
<br />
* Show BIOS boot up sequence:<br />
$ journalctl --no-hostname -o short-monotonic --boot -0<br />
<br />
; Cleanup journal logs (i.e., the self-maintenance method is to vacuum the logs by size or time):<br />
<br />
* Retain only the past two days:<br />
$ journalctl --vacuum-time=2d<br />
<br />
* Retain only the past 500 MB:<br />
$ journalctl --vacuum-size=500M<br />
<br />
For an even more robust cleanup:<br />
$ journalctl --flush --rotate<br />
$ journalctl --vacuum-time=1s<br />
<br />
You can also use the <code>--since</code> argument to filter entries:<br />
<pre><br />
--since "2017-10-14 17:00:00"<br />
--since today<br />
</pre><br />
<br />
Finally, you can set the following in <code>/etc/systemd/journald.conf</code>:<br />
SystemMaxUse=100M<br />
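<br />
* Check how much disk space the journal currently occupies:<br />
$ journalctl --disk-usage<br />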
<br />
* Misc:<br />
<pre><br />
$ journalctl --utc -ke<br />
</pre><br />
Where:<br />
:<code>--utc</code>: show time in Coordinated Universal Time (UTC).<br />
:<code>-ke</code>: show only kernel messages and jump to the end of the log.<br />
<br />
See: <code>man journalctl</code> for more information.<br />
<br />
===timedatectl===<br />
''Note: Most of these commands need to be run as root (or via sudo) and are only valid on systems using systemd.''<br />
<br />
* List all available timezones on your computer/server:<br />
$ timedatectl list-timezones<br />
<br />
* Set your computer's/server's timezone:<br />
$ timedatectl set-timezone region/timezone<br />
<br />
* For instance, to set your timezone to North American Pacific Time (PST; UTC-8):<br />
$ timedatectl set-timezone America/Vancouver<br />
<br />
Your system will be updated to use the selected timezone. You can verify with:<br />
$ timedatectl<br />
<pre><br />
Local time: Fri, 2012-11-02 09:26:46 CET<br />
Universal time: Fri, 2012-11-02 08:26:46 UTC<br />
RTC time: Fri, 2012-11-02 08:26:45<br />
Timezone: Europe/Warsaw<br />
UTC offset: +0100<br />
NTP enabled: no<br />
NTP synchronized: no<br />
RTC in local TZ: no<br />
DST active: no<br />
Last DST change: CEST → CET, DST became inactive<br />
Sun, 2012-10-28 02:59:59 CEST<br />
Sun, 2012-10-28 02:00:00 CET<br />
Next DST change: CET → CEST, DST will become active<br />
the clock will jump one hour forward<br />
Sun, 2013-03-31 01:59:59 CET<br />
Sun, 2013-03-31 03:00:00 CEST<br />
</pre><br />
<br />
* Enable an NTP daemon (chronyd):<br />
<br />
$ timedatectl set-ntp true<br />
==== AUTHENTICATING FOR org.freedesktop.timedate1.set-ntp ===<br />
Authentication is required to control whether network time synchronization shall be enabled.<br />
Authenticating as: user<br />
Password: ********<br />
==== AUTHENTICATION COMPLETE ===<br />
<br />
$ systemctl status chronyd.service<br />
chronyd.service - NTP client/server<br />
Loaded: loaded (/lib/systemd/system/chronyd.service; enabled)<br />
Active: active (running) since Fri, 2012-11-02 09:36:25 CET; 5s ago<br />
...<br />
<br />
===hostnamectl===<br />
<br />
<code>hostnamectl</code> allows you to control the system <code>hostname</code>.<br />
<br />
* Example response from a laptop running Ubuntu:<br />
$ hostnamectl<br />
<pre><br />
Static hostname: my_hostname<br />
Icon name: computer-laptop<br />
Chassis: laptop<br />
Boot ID: ffffffffffffffffffffffffffffffff<br />
Operating System: Ubuntu 14.04.2 LTS<br />
Kernel: Linux 3.13.0-52-generic<br />
Architecture: x86_64<br />
</pre><br />
<br />
* Example response from a [[vagrant]] box running Fedora:<br />
$ hostnamectl<br />
<pre><br />
Static hostname: localhost.localdomain<br />
Icon name: computer-vm<br />
Chassis: vm<br />
Machine ID: ffffffffffffffffffffffffffffffff<br />
Boot ID: ffffffffffffffffffffffffffffffff<br />
Virtualization: oracle<br />
Operating System: Fedora 22 (Twenty Two)<br />
CPE OS Name: cpe:/o:fedoraproject:fedora:22<br />
Kernel: Linux 4.0.4-303.fc22.x86_64<br />
Architecture: x86-64<br />
</pre><br />
<br />
* Example response from an AWS EC2 instance running CentOS 7:<br />
$ hostnamectl<br />
<pre><br />
Static hostname: ip-172-22-1-210.us-west-2.compute.internal<br />
Transient hostname: ip-172-22-1-210<br />
Icon name: computer-vm<br />
Chassis: vm<br />
Machine ID: f32e0af35337b5dfcbedcb0d1de8dca1<br />
Boot ID: ea5461881a264a88abe239b2337169bf<br />
Virtualization: xen<br />
Operating System: CentOS Linux 7 (Core)<br />
CPE OS Name: cpe:/o:centos:centos:7<br />
Kernel: Linux 3.10.0-327.10.1.el7.x86_64<br />
Architecture: x86-64<br />
</pre><br />
<br />
* Change the hostname:<br />
$ sudo hostnamectl set-hostname <new-hostname><br />
$ vi /etc/hosts<br />
127.0.0.1 localhost<br />
127.0.1.1 <new-hostname><br />
<br />
* Other ways to change the hostname:<br />
$ sudo hostnamectl --transient set-hostname $hostname<br />
$ sudo hostnamectl --static set-hostname $hostname<br />
$ sudo hostnamectl --pretty set-hostname $hostname<br />
<br />
===Systemd timers===<br />
<br />
* List all timers on current host:<br />
<pre><br />
$ systemctl status '*timer'<br />
● apt-daily.timer - Daily apt download activities<br />
Loaded: loaded (/lib/systemd/system/apt-daily.timer; enabled; vendor preset: enabled)<br />
Active: active (waiting) since Sun 2021-08-08 04:57:00 PDT; 1 weeks 2 days ago<br />
Trigger: Wed 2021-08-18 04:19:59 PDT; 14h left<br />
...<br />
</pre><br />
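<br />
Alternatively, <code>systemctl list-timers --all</code> prints a tabular overview that also includes inactive timers:<br />
$ systemctl list-timers --all<br />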
<br />
* Check status of a given timer:<br />
<pre><br />
$ systemctl status motd-news.timer<br />
● motd-news.timer - Message of the Day<br />
Loaded: loaded (/lib/systemd/system/motd-news.timer; enabled; vendor preset: enabled)<br />
Active: active (waiting) since Sun 2021-08-08 04:57:00 PDT; 1 weeks 2 days ago<br />
Trigger: Wed 2021-08-18 04:44:11 PDT; 14h left<br />
</pre><br />
<br />
* Check the journal for a given timer:<br />
<pre><br />
$ journalctl -S today -u apt-daily-upgrade.timer<br />
-- Logs begin at Fri 2018-07-13 11:54:18 PDT, end at Tue 2021-08-17 14:20:37 PDT. --<br />
-- No entries --<br />
</pre><br />
<br />
* Useful for setting timers:<br />
<pre><br />
$ systemd-analyze calendar 2030-08-17<br />
Original form: 2030-08-17<br />
Normalized form: 2030-08-17 00:00:00<br />
Next elapse: Sat 2030-08-17 00:00:00 PDT<br />
(in UTC): Sat 2030-08-17 07:00:00 UTC<br />
From now: 8 years 11 months left<br />
<br />
$ systemd-analyze calendar 2030-08-17 20:10:12<br />
Original form: 2030-08-17<br />
Normalized form: 2030-08-17 00:00:00<br />
Next elapse: Sat 2030-08-17 00:00:00 PDT<br />
(in UTC): Sat 2030-08-17 07:00:00 UTC<br />
From now: 8 years 11 months left<br />
<br />
Original form: 20:10:12<br />
Normalized form: *-*-* 20:10:12<br />
Next elapse: Tue 2021-08-17 20:10:12 PDT<br />
(in UTC): Wed 2021-08-18 03:10:12 UTC<br />
From now: 5h 46min left<br />
</pre><br />
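<br />
* A transient timer can also be created on the fly with <code>systemd-run</code> (newer systemd versions; the script path below is hypothetical):<br />
<pre><br />
$ sudo systemd-run --on-calendar='*-*-* 03:00:00' /usr/local/bin/backup.sh<br />
</pre><br />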
<br />
<!-- TODO: https://opensource.com/article/20/7/systemd-timers --><br />
<br />
===Other===<br />
''See [http://www.freedesktop.org/software/systemd/man/ the systemd manual pages] for a complete list.''<br />
<br />
* Control the system locale and keyboard layout settings:<br />
$ localectl <br />
System Locale: LANG=en_US.UTF-8<br />
VC Keymap: n/a<br />
X11 Layout: us<br />
X11 Model: pc105<br />
<br />
$ loginctl # Control the systemd login manager<br />
$ busctl # Introspect the bus<br />
$ machinectl # Control the systemd machine manager<br />
$ networkctl # Query the status of network links<br />
$ systemd-cgls # Recursively show control group contents<br />
$ systemd-cgtop # Show top control groups by their resource usage<br />
$ systemd-path # List and query system and user paths<br />
<br />
* List content of an initramfs image:<br />
$ lsinitramfs /boot/initrd.img-$(uname -r) | less<br />
<br />
==External links==<br />
* [http://freedesktop.org/wiki/Software/systemd/ Official website]<br />
* Full systemd documentation can be found by running <code>man 5 systemd.unit</code><br />
* [https://www.digitalocean.com/community/tutorials/how-to-use-journalctl-to-view-and-manipulate-systemd-logs How To Use Journalctl to View and Manipulate Systemd Logs]<br />
* [https://tlhp.cf/lennart-poettering-su/ Explanation of `machinectl`] &mdash; the `su` replacement on systemd<br />
<br />
[[Category:Linux Command Line Tools]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Category:World_Travels&diff=8266Category:World Travels2023-03-21T10:27:06Z<p>Christoph: /* Europe */</p>
<hr />
<div>[[File:World travels-2021-09-10.png|thumb|World Travels]]<br />
[[File:World travels-2020-06-22.png|thumb|World Travels]]<br />
[[Image:World Travels - Google Maps.png|thumb|[http://www.christophchamp.com/cv/travels/ Google Maps]]]<br />
I have travelled the world quite a lot in my life. I love to travel and usually go out of my way to visit a new country or place. I call this collective habit my "'''World Travels'''".<br />
<br />
== Definitions ==<br />
First, let me define how I categorise the countries I have been to. This is necessary because I have stayed for variable lengths of time in many countries. It is a somewhat arbitrary distinction; however, it helps me categorise my World Travels.<br />
<br />
Below is how I define my stay in countries:<br />
; [[:Category:Countries I have lived in|Countries I have lived in]] : any country I have stayed in for at least ''six months''<br />
; [[:Category:Countries I have stayed in|Countries I have stayed in]] : any country I have stayed in for at least ''one month'' but less than ''six months''<br />
; [[:Category:Countries I have visited|Countries I have visited]] : any country I have stayed in for at least ''one week'' but less than ''one month''<br />
; [[:Category:Countries I have travelled to or through|Countries I have travelled to or through]] : any country I have stayed in for less than ''one week''<br />
<br />
==Countries==<br />
Below is an alphabetical list of the {{countries}} '''countries''' I have lived in or have travelled to so far (a work in progress). Click on the individual countries (where link present) for more information on that particular country.<br />
<br />
===Asia===<br />
# '''Japan''': Tokyo and Chiba; September 1995—June 1996 <br />
# '''Taiwan''': Taipei, December 1995<br />
<br />
===Europe===<br />
# '''Albania''': 1987<br />
# '''Austria''': Vienna (''Wien''), Salzburg, Graz, Innsbruck, Villach, Linz, Feldkirch; '''Home country'''<br />
# '''Belarus''': Minsk, Brest; 1985, June 1994<br />
# '''Belgium''': 1988, 2018<br />
# '''Bosnia i Herzegovina''': Sarajevo, Mostar, Banja Luka; 1987, 1988, 2018<br />
# '''Bulgaria''': Sofia; 1988<br />
# '''Croatia''': Zagreb, Rijeka, Vukovar, Šibenik, Split, Dubrovnik; 1984, 1988, 1995, 1996, 2018<br />
# '''Czech Republic''': 1985, 1994<br />
# '''Denmark''': Copenhagen (''København''), Århus, Roskilde; August 1997, 2005-2006, 2019<br />
# '''Estonia''': Tallinn; December 2019, January 2020<br />
# '''Finland''': Helsinki; December 2019, January 2020 <br />
# '''France''': Paris, Strasbourg, Nice, Beausoleil, Antibes, Cannes, Toulon, Marseille, Nimes, Montpellier, Perpignan; 1988, 1993, 2006, 2016<br />
# '''Germany''': (All 16 Bundesländer / federated states); Hamburg, Berlin, München, Köln, Stuttgart, Kassel, Frankfurt; 1984, 1988, 1996, 1997, 2006, 2021<br />
#* Baden-Württemberg<br />
#* Bavaria (''Bayern'')<br />
#* Berlin<br />
#* Brandenburg<br />
#* Bremen<br />
#* Hamburg<br />
#* Hesse (''Hessen'')<br />
#* Lower Saxony (''Niedersachsen'')<br />
#* Mecklenburg-Western Pomerania (''Mecklenburg-Vorpommern'')<br />
#* North Rhine-Westphalia (''Nordrhein-Westfalen'')<br />
#* Rhineland-Palatinate (''Rheinland-Pfalz'')<br />
#* Saarland<br />
#* Saxony (''Sachsen'')<br />
#* Saxony-Anhalt (''Sachsen-Anhalt'')<br />
#* Schleswig-Holstein<br />
#* Thuringia (''Thüringen'')<br />
# '''Greece''': 1987<br />
# '''[[Hungary]]''': Budapest, Esztergom, Győr, Miskolc, Szeged, Debrecen, Sopron, Szombathely, Székesfehérvár, Balaton, Nagykanizsa, Zalaegerszeg, Pécs; '''Home country'''<br />
# '''Iceland''': Reykjavik, Ólafsvík, Selfoss; 2017, 2018<br />
# '''Ireland''': Dublin, Belfast, Cork, Galway, Sligo; December 1999—January 2000 <br />
# '''Italy''': Rome, Milano, Venice, Trieste, Padua, San Remo, Genova, Verona; 1988, 1997, 2005, 2006<br />
# '''Kosovo''': Priština; 1987, 1988<br />
# '''Latvia''': Riga; January 1994<br />
# '''Liechtenstein''': Vaduz; August 1996<br />
# '''Lithuania''': Vilnius, Kaunas; January 1994<br />
# '''Luxembourg''': 1988<br />
# '''Moldova''': Chișinău; November 1985<br />
# '''Monaco''': Monaco-Ville, La Condamine, Monte Carlo, Fontvieille, Moneghetti, Larvotto - Tenao, and Saint Roman; February 2006-June 2006.<br />
# '''Montenegro''' ('''Crna Gora'''): 1987<br />
# '''North Macedonia''': Skopje, Ohrid; 1987<br />
# '''Netherlands, The''': Amsterdam, Rotterdam; 1984, 1988, December 1999, April 2006, March 2015, November 2016, September 2022, October 2022, November 2022, December 2022, January 2023, February 2023, March 2023<br />
# '''Norway''': Oslo, Bergen, Trondheim; August 1997<br />
#* Oslo<br />
#* Viken<br />
#* Vestland<br />
#* Innlandet<br />
#* Trøndelag<br />
# '''Poland''': Warsaw, Katowice, Kraków, Łódź, Białystok; 1985, December 1993—June 1994, September 1994<br />
# '''Portugal''': Lisbon, Porto; November 2016<br />
# '''Romania''': November 1985<br />
# '''Russia''': Moscow; September 1985—November 1985, June 1994—September 1994<br />
# '''Serbia''' (including ''Vojvodina'', ''Kosovo'', and ''Metohija''): 1987<br />
# '''Slovakia''': Bratislava; 1984, 1993, 1994<br />
# '''Slovenia''': Ljubljana, Domžale, Postojna, Novo Mesto, Kranj, Bled, Jesenice, Kranjska Gora, Trenta, Soča, Celje, Dravograd, Maribor, Murska Sobota, Koper, Izola, Piran, Portorož, Nova Gorica; 1984, 1988, 1995, 1996-1997, April 2006<br />
# '''Spain''': Barcelona; April 2006<br />
# '''Sweden''': Stockholm, Malmö, Göteborg, Uppsala, Östersund, Umeå, Luleå, Kiruna, Abisko; August 1997, January 2020<br />
# '''Switzerland''': Zürich, Zug, Lugano, Lucerne, Winterthur, Interlaken, Bern, Basel; 1988, August 1996<br />
# '''Ukraine''': Kyiv, Odessa; November 1985, December 2021<br />
# '''United Kingdom''': England (London), Northern Ireland (Belfast); January 2000, June 2006<br />
# [Former] '''Czechoslovakia''': 1985<br />
# [Former] '''USSR''': 1985<br />
#* Byelorussian SSR<br />
#* Russian SFSR<br />
#* Ukrainian SSR<br />
# [Former] '''Yugoslavia''': 1984, 1985, 1987, 1988<br />
#* Socialist Republic of Slovenia<br />
#* Socialist Republic of Croatia<br />
#* Socialist Republic of Serbia (SAP Vojvodina and SAP Kosovo)<br />
#* Socialist Republic of Bosnia and Herzegovina<br />
#* Socialist Republic of Montenegro<br />
#* Socialist Republic of Macedonia<br />
<br />
===North America===<br />
# '''Canada''': 2001, 2018<br />
#* Quebec<br />
#* Ontario<br />
#* Manitoba<br />
#* Saskatchewan<br />
#* Alberta<br />
#* British Columbia<br />
# '''Mexico''': 1999<br />
# '''[[United States of America]]''' (all 50 US states)<br />
<br />
===Central America===<br />
# '''Costa Rica''': San Jose, Tamarindo; April 2014<br />
<br />
===South America===<br />
# '''Bolivia''': La Paz, Cochabamba; 1991<br />
# '''Chile''': Arica, Iquique; 1991<br />
# '''Colombia''': Ipiales; 1992<br />
# '''Ecuador''': Quito, Guayaquil, Salinas, Cuenca, Ambato, Riobamba, Ibarra, Machala; 1992<br />
# '''Peru''': Lima, Arequipa, Tumbes, Piura, Trujillo, Tacna, Nazca, Juliaca; 1989—1992<br />
# '''Venezuela''': Caracas; 1992<br />
<br />
==Traveler's Century Club==<br />
<br />
See [https://travelerscenturyclub.org/countries-and-territories here] for details.<br />
<br />
# Alaska<br />
# Albania<br />
# Austria<br />
# Belarus<br />
# Belgium<br />
# Bolivia<br />
# Bosnia & Herzegovina<br />
# Bulgaria<br />
# Canada<br />
# Chile<br />
# Colombia<br />
# Costa Rica<br />
# Croatia<br />
# Czech Republic<br />
# Denmark<br />
# Ecuador<br />
# England<br />
# Estonia<br />
# Finland<br />
# France<br />
# Germany<br />
# Greece<br />
# Hungary<br />
# Iceland<br />
# Ireland (Eire)<br />
# Ireland, Northern<br />
# Italy<br />
# Japan<br />
# Kaliningrad<br />
# Kosovo<br />
# Latvia<br />
# Liechtenstein<br />
# Lithuania<br />
# Luxembourg<br />
# Mexico<br />
# Moldova<br />
# Monaco<br />
# Montenegro<br />
# Netherlands<br />
# North Macedonia<br />
# Norway<br />
# Peru<br />
# Poland<br />
# Portugal<br />
# Romania<br />
# Russia<br />
# Serbia<br />
# Slovakia<br />
# Slovenia<br />
# Spain<br />
# Srpska<br />
# Sweden<br />
# Switzerland<br />
# Taiwan<br />
# Transnistria (Pridnestrovie)<br />
# Ukraine<br />
# United States (Contiguous)<br />
# Venezuela<br />
<br />
==Global cities==<br />
Below is a list of the [[wikipedia:Global city|Global cities]] I have been to (and I have lived in some of these):<br />
<br />
===Alpha world cities (full service world cities)===<br />
<br />
'''12 points:'''<br />
*London<br />
*New York City<br />
*Paris<br />
*Tokyo<br />
<br />
'''10 points:''' <br />
{|<br />
| valign="top" |<br />
*Chicago<br />
*Frankfurt<br />
| valign="top" |<br />
*Los Angeles<br />
*Milano<br />
|}<br />
<br />
===Beta world cities (major world cities)===<br />
'''9 points:''' <br />
<br />
*Toronto<br />
*Zürich<br />
<br />
'''8 points:'''<br />
<br />
*Brussels<br />
*Mexico City<br />
<br />
'''7 points:''' <br />
<br />
*Moscow<br />
<br />
===Gamma world cities (minor world cities)===<br />
'''6 points''' <br />
*Amsterdam<br />
*Boston<br />
*Caracas<br />
*Dallas<br />
*Düsseldorf<br />
*Houston<br />
*Prague<br />
*Taipei<br />
*Washington, D.C.<br />
<br />
'''5 points:'''<br />
<br />
*Montreal<br />
*Rome<br />
*Stockholm<br />
*Warsaw<br />
<br />
'''4 points:'''<br />
*Atlanta, Georgia<br />
*Barcelona<br />
*Berlin<br />
*Budapest<br />
*Copenhagen<br />
*Hamburg<br />
*Miami<br />
*Minneapolis<br />
*Munich<br />
<br />
===Evidence of world city formation===<br />
====Strong evidence====<br />
'''3 points'''<br />
*Dublin<br />
*Luxembourg<br />
*Philadelphia<br />
*Vienna<br />
<br />
====Some evidence====<br />
'''2 points:'''<br />
<br />
*Bratislava<br />
*Bucharest<br />
*Cleveland, Ohio<br />
*Cologne<br />
*Kiev<br />
*Lima<br />
*Oslo<br />
*Rotterdam<br />
*Seattle<br />
*Stuttgart<br />
*The Hague<br />
*Vancouver<br />
<br />
====Minimal evidence====<br />
'''1 point:'''<br />
*Antwerp<br />
*Aarhus<br />
*Baltimore, Maryland<br />
*Calgary<br />
*Columbus, Ohio<br />
*Dresden<br />
*Genoa<br />
*Gothenburg<br />
*Kansas City, Missouri<br />
*Marseille<br />
*Richmond, Virginia<br />
*Tijuana<br />
*Utrecht<br />
<br />
==External links==<br />
*[http://www.christophchamp.com/cv/travels/ A Google Map of my travels] (''incomplete'')<br />
*[https://www.amcharts.com/visited_countries/ Generate "Visited Countries" Map]<br />
*[http://flightdiary.net/christophchamp FlightDiary]<br />
*[http://www.trekearth.com/ TrekEarth]<br />
*[http://www.linux.com/article.pl?sid=06/08/24/146210 Geotagging files with libferris and Google Earth]<br />
*[http://triptracker.net/profile/Christoph/ TripTracker - Christoph]<br />
*[http://triptracker.net/trip/1165/ TripTracker] &mdash; Monaco - Slovenia - Rotterdam - Berlin - Copenhagen<br />
*[http://www.world66.com/myworld66/visitedCountries visited countries - map gen]<br />
*[http://english.freemap.jp/world_paint/world_paint.html Printable world map]<br />
*[http://www.farecast.com/ Farecast] &mdash; find cheap flights<br />
*[http://www.roughguides.com/ Rough Guides] &mdash; travel information<br />
*[https://travelerscenturyclub.org/countries-and-territories The Travelers' Century Club]<br />
__NOTOC__<br />
[[Category:Personal]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Category:Travel_Log&diff=8265Category:Travel Log2023-03-20T23:50:39Z<p>Christoph: /* Flights */</p>
<hr />
<div>This category will be my, as yet, unorganised '''Travel Log''' to many places around the world. (Note: The following is very much an ''incomplete'' travel log.)<br />
<br />
== Auto ==<br />
<br />
===Berlin trip (2006)===<br />
* Monaco &rarr; Milano &rarr; Ljubljana &rarr; Rotterdam &rarr; Berlin &rarr; Copenhagen &rarr; Monaco: April 2006<br />
: [http://triptracker.net/trip/1165/ TripTracker]<br />
: 1-Apr-2006 (14h20): Monaco &rarr; Milano<br />
: 2-Apr-2006 (23h30): Milano &rarr; Ljubljana<br />
: 3-Apr-2006 &ndash; 5-Apr-2006: Slovenia (Ljubljana, Novo Mesto, Kranj, Postojna, Jesenice, etc.)<br />
: 5-Apr-2006 (12h30): |&larr; Austria (Villach)<br />
: 5-Apr-2006 (15h15): |&larr; Germany<br />
: 5-Apr-2006 (19h15): Stuttgart<br />
: 5-Apr-2006 (20h20): Karlsruhe<br />
: 5-Apr-2006 (23h30): Köln<br />
: 5-Apr-2006 (00h10): |&larr; The Netherlands<br />
: 5-Apr-2006 (02h00): Rotterdam<br />
: 7-Apr-2006 (12h00): |&rarr; Rotterdam<br />
: 7-Apr-2006 (14h45): |&larr; Germany<br />
: 7-Apr-2006 (17h00): Hannover<br />
: 7-Apr-2006 (18h30): Magdeburg<br />
: 7-Apr-2006 (20h00): Berlin<br />
: 8-Apr-2006 (15h30): |&rarr; Berlin<br />
: 8-Apr-2006 (18h00): Rostock<br />
: 8-Apr-2006 (19h30): Ferry (|&rarr; Germany from Rostock Harb.)<br />
: 8-Apr-2006 (21h15): Ferry (|&larr; Denmark at Gedsen)<br />
: 8-Apr-2006 (23h20): København<br />
: 9-Apr-2006 (06h30): |&rarr; København<br />
: 9-Apr-2006 (09h00): Ferry (|&rarr; Denmark from Gedsen)<br />
: 9-Apr-2006 (11h00): Ferry (|&larr; Germany at Rostock Harb.)<br />
: 9-Apr-2006 (13h30): |&larr; Berlin<br />
: 9-Apr-2006 (14h00): |&rarr; Berlin<br />
: 9-Apr-2006 (15h50): Dresden<br />
:10-Apr-2006 (00h45): |&larr; Slovenia<br />
:10-Apr-2006 (01h40): Ljubljana<br />
:10-Apr-2006 (02h40): Postojna<br />
:10-Apr-2006 (13h15): |&larr; Italy<br />
:10-Apr-2006 (15h00): Padova<br />
:10-Apr-2006 (15h40): Verona<br />
:10-Apr-2006 (18h50): Genova<br />
:10-Apr-2006 (20h35): |&larr; France<br />
:10-Apr-2006 (20h45): |&larr; Monaco<br />
<br />
===Canada trip (2001)===<br />
''Note: The total trip covered 11,893 km (7,390 miles).''<br />
*Corvallis, OR &rarr; Boston, MA &rarr; Quebec &rarr; Ontario &rarr; Manitoba &rarr; Saskatchewan &rarr; Alberta &rarr; British Columbia &rarr; Corvallis, OR<br />
** 01-Sep-2001 (??h??): |&rarr; Corvallis, OR<br />
** 06-Sep-2001 (15h45): |&larr; Massachusetts<br />
** 13-Sep-2001 (13h15): |&rarr; Westborough, MA<br />
** 13-Sep-2001 (17h46): Augusta, ME<br />
** 13-Sep-2001 (18h15): |&larr; CANADA (into Quebec)<br />
** 14-Sep-2001 (02h06): Grande Allee Est., Quebec<br />
** 14-Sep-2001 (15h01): Cap-Madeleine, PQ<br />
** 15-Sep-2001 (17h44): Thunder Bay, ON<br />
** 14-Sep-2001 (17h45): |&larr; Ontario<br />
** 14-Sep-2001 (20h03): Cobden, ON<br />
** 15-Sep-2001 (12h02): Sudbury, ON<br />
** 15-Sep-2001 (10h25): Wawa, ON<br />
** 15-Sep-2001 (22h01): Kenora, ON<br />
** 15-Sep-2001 (10h37): |&larr; Manitoba<br />
** 16-Sep-2001 (10h53): Brandon, MB<br />
** 16-Sep-2001 (12h50): |&larr; Saskatchewan<br />
** 16-Sep-2001 (16h09): Herbert, SK<br />
** 16-Sep-2001 (18h06): |&larr; Alberta<br />
** 16-Sep-2001 (23h00): |&larr; British Columbia<br />
** 17-Sep-2001 (00h30): |&larr; USA (into Idaho)<br />
** 17-Sep-2001 (03h36): Coeur d'Alene, ID<br />
** 17-Sep-2001 (05h30): |&larr; Oregon<br />
<br />
===Ireland trip (1999-2000)===<br />
* 26-Dec-1999 (??h??): Dublin, Ireland<br />
* 26-Dec-1999 (16h13): Lord Edward St., Dublin<br />
* 27-Dec-1999 (??h??): Kinlay House, Christchurch, 2-12 Lord Edward St., Dublin, Ireland<br />
* 2?-Dec-1999 (??h??): Kilkenny<br />
* 28-Dec-1999 (12h27): Patrick St., Cork<br />
* 28-Dec-1999 (17h12): Mallow, Co. Cork<br />
* 29-Dec-1999 (??h??): Co. Kerry<br />
* ??-Dec-1999 (??h??): Saratoga House (Bed & Breakfast), Muckross Road, Killarney, Ireland<br />
* 29-Dec-1999 (15h09): Chapel St., Limerick<br />
* 29-Dec-1999 (15h18): Eimear<br />
* 30-Dec-1999 (??h??): Ballybofey<br />
* 30-Dec-1999 (15h51): Greysteel<br />
* 30-Dec-1999 (??h??): O'Connell St., Sligo<br />
* 30-Dec-1999 (??h??): Petra, Galway<br />
* 30-Dec-1999 (??h??): Sligo<br />
* 30-Dec-1999 (??h??): The Linen House Backpackers Hostel, 18-20 Kent Street, Belfast, Ireland<br />
* 01-Jan-2000 (14h46): Arthur Sq., Belfast<br />
* 02-Jan-2000 (06h34): Dublin Airport<br />
<br />
===Miscellaneous (Europe)===<br />
* Budapest, Hungary &rarr; Dubrovnik, Croatia: June/July 2018 (round-trip)<br />
* ''The Cliffs of Møn'', DK: Oct-2005<br />
* Budapest, Hungary &rarr; Vienna, Austria &rarr; Salzburg, Austria &rarr; Chiemsee, Germany: Oct-1996 (round-trip)<br />
* Zagreb, Croatia &rarr; Ljubljana, Slovenia &rarr; Graz, Austria &rarr; Budapest, Hungary: Sep-1996<br />
* Zagreb, Croatia &rarr; Ljubljana, Slovenia: Sep-1996 (round-trip)<br />
* Budapest, Hungary &rarr; Zagreb, Croatia: Sep-1996<br />
* Budapest, Hungary &rarr; Vienna, Austria &rarr; Salzburg, Austria &rarr; Berchtesgaden, Germany &rarr; Innsbruck, Austria &rarr; Liechtenstein &rarr; Switzerland: Aug-1996 (round-trip)<br />
* Warsaw, Poland &rarr; Budapest, Hungary: September 1994<br />
* Budapest, Hungary &rarr; Slovakia (11-Nov-1993) &rarr; Warsaw, Poland: November 1993<br />
* Vienna, Austria &rarr; Budapest, Hungary: 28-Sep-1993<br />
<br />
===Miscellaneous (South America)===<br />
* Cuenca, Ecuador &rarr; Riobamba, Ecuador &rarr; Ambato, Ecuador &rarr; Quito, Ecuador: 1993 (round-trip)<br />
* Quito, Ecuador &rarr; Ipiales, Colombia: 1993 (round-trip)<br />
* Guayaquil, Ecuador &rarr; Santo Domingo de Los Colorados, Ecuador &rarr; Quito, Ecuador: 1993<br />
* Guayaquil, Ecuador &rarr; Salinas, Ecuador: 1993 (round-trip)<br />
* Tumbes, Peru &rarr; Guayaquil, Ecuador: 21-Dec-1992<br />
<br />
===Miscellaneous (North America)===<br />
* Seattle, WA &#187; Winthrop, WA &#187; Leavenworth, WA &#187; Issaquah, WA &#187; Seattle, WA: June 2022<br />
* Seattle, WA &#187; Winthrop, WA &#187; Tiger, WA &#187; Spokane, WA &#187; Seattle, WA: May 2022 (1,200 km/744 mi)<br />
* Seattle, WA &#187; Portland, OR &#187; Grants Pass, OR &#187; Crescent City, CA &#187; Redwood National Park &#187; Newport, OR &#187; Astoria, OR &#187; Elma, WA &#187; Seattle, WA: November 2021 (1,881 km/1,169 mi)<br />
* Seattle, WA &#187; Mt Saint Helens &#187; Mt Adams &#187; Stonehenge Memorial &#187; Multnomah Falls &#187; Seattle, WA: September 2021 (914 km/568 mi)<br />
* Seattle, WA &#187; Walla Walla, WA &#187; Joseph, OR &#187; Lewiston, ID &#187; Grand Coulee, WA &#187; Seattle, WA: June 2021 (1,421 km/883 mi)<br />
* Seattle, WA &#187; Pendleton, OR &#187; Craters of the Moon National Monument & Preserve &#187; Idaho Falls, ID &#187; Jackson, WY &#187; Grand Teton National Park &#187; Yellowstone National Park &#187; Missoula, MT &#187; Coeur d'Alene, ID &#187; Seattle, WA: September 2020 (2,746 km/1,706 mi)<br />
* Seattle, WA &#187; Coeur d'Alene, ID &#187; Missoula, MT &#187; Glacier National Park, MT &#187; Seattle, WA: July 2019 (1,984 km/1,233 mi)<br />
* Seattle, WA &#187; Corvallis, OR: November 2018 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2017 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2016 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2015 (round-trip)<br />
* Texas &#187; Oklahoma &#187; Kansas &#187; Nebraska &#187; South Dakota &#187; Wyoming &#187; Montana &#187; Idaho &#187; Seattle, WA: September 2015 (4,000 km/2,486 mi)<br />
* Seattle, WA &#187; Oregon &#187; Idaho &#187; Utah &#187; Wyoming &#187; Colorado &#187; Kansas &#187; Oklahoma &#187; Texas: 11-16 May 2013<br />
* Seattle, WA &#187; Port Angeles, WA &#187; Hurricane Ridge, WA: 28-Dec-2012 (round-trip)<br />
* Seattle, WA &#187; Portland, OR: 4-Dec-2012 (round-trip)<br />
* Chicago, IL &#187; Milwaukee, WI &#187; Minneapolis, MN &#187; Fargo, ND &#187; Billings, MT &#187; Coeur d'Alene, ID &#187; Seattle, WA: 25-26 June 2012 (3,357 km/2,086 mi)<br />
* St. Louis, MO &#187; Chicago, IL: 31-Dec-2011<br />
* Chicago, IL &#187; St. Louis, MO: 5-Jul-2011<br />
* Milwaukee, WI &#187; Chicago, IL: 30-Jun-2011<br />
* Pittsburgh, PA &#187; New York City, NY: April 2005 (round-trip)<br />
* Pittsburgh, PA &#187; Bethlehem, PA &#187; Westborough, MA &#187; New York City, NY: December 2004 (round-trip)<br />
* Pittsburgh, PA &#187; Boston, MA: November 2004 (round-trip)<br />
* Corvallis, OR &#187; Salt Lake City, UT &#187; Houston, TX &#187; Atlanta, GA &#187; Pittsburgh, PA: September 2004<br />
* Corvallis, OR &#187; Boston, MA: 2001, 2002 (round-trip)<br />
* Corvallis, OR &#187; Vancouver, BC, Canada (round-trip)<br />
* Corvallis, OR &#187; Tijuana, Mexico: 7-Sep-1999 (round-trip)<br />
* Los Angeles, CA &#187; Corvallis, OR: January 1998<br />
* Houston, TX &#187; Milwaukee, WI &#187; Menominee, MI: May 1995 (round-trip)<br />
<br />
== Bus / Train / Ferry ==<br />
===Spain trip (2006)===<br />
* Monaco &#187; Cannes &#187; Marseille &#187; Montpellier St-Roch &#187; Barcelona: April 2006 (round-trip)<br />
** 24-Apr-06 18h35: |&rarr; Nice, France [SNCF train]<br />
** 24-Apr-06 19h00: Antibes, FR<br />
** 24-Apr-06 19h07: Cannes, FR<br />
** 24-Apr-06 19h30: B. sur-Mer, FR<br />
** 24-Apr-06 19h39: Saint-Raphaël-Valescure, FR<br />
** 24-Apr-06 20h14: Les Arcs-Draguignan, FR<br />
** 24-Apr-06 20h56: Toulon, FR<br />
** 24-Apr-06 21h35: Marseille, FR<br />
** 25-Apr-06 15h05: |&rarr; Marseille, FR<br />
** 25-Apr-06 16h16: Nîmes, FR<br />
** 25-Apr-06 17h21: Montpellier St-Roch, FR<br />
** 25-Apr-06 18h42: Béziers, FR<br />
** 25-Apr-06 19h35: Perpignan, FR<br />
** 25-Apr-06 20h15: Portbou, Spain (ES) [''border'']<br />
** 25-Apr-06 22h30: Barcelona, ES<br />
** 27-Apr-06 19h24: |&rarr; Barcelona, ES [Renfe train]<br />
** 27-Apr-06 22h05: Cerbère, FR [''border'']<br />
** 28-Apr-06 08h37: Nice, FR<br />
** 28-Apr-06 10h00: Monaco<br />
<br />
===Miscellaneous (Europe)===<br />
* Tallinn, Estonia &rarr; Helsinki, Finland: January 2020 (round-trip)<br />
* Lisbon, Portugal &rarr; Porto, Portugal: Nov-2016 (round-trip)<br />
* København, DK &#187; Berlin, D: 09-Apr-2006 [+Ferry]<br />
* Berlin, D &#187; København, DK: 08-Apr-2006 (15h15) [+Ferry]<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: (28-Nov-1997/30-Nov-1997) (round-trip)<br />
* Salzburg, Austria &#187; Ljubljana, Slovenia: 25-Aug-1997 (&#214;sterreichische Bundesbahnen train (&#214;BB))<br />
* Salzburg HBF &#187; Villach HBF (&uuml;ber Schwarzach-St. Veit, Bad Gastein): 25-Aug-1997 (&#214;BB train)<br />
* Germany: 24-Aug-1997 (DB train)<br />
* Næstved, DK &#187; Rødby Færge, DK: 24-Aug-1997<br />
* Haslev, DK &#187; Næstved, DK: 24-Aug-1997 (DSB train)<br />
* Abisko Turiststation - STF: 21-Aug-1997<br />
* Abisko Turiststation - STF: 20-Aug-1997<br />
* Ljubljana, Slovenia &#187; Villach HBF, Austria: 18-Aug-1997<br />
* Oslo S &#187; Trondheim: 18-Aug-1997<br />
* Oslo S &#187; Bergen: 16-Aug-1997<br />
* Grensen (Scandinavia): 16-Aug-1997<br />
* Stockholm C &#187; Oslo S: 15-Aug-1997 (SJ train)<br />
* Stockholm S:T Eriksgatan: 15-Aug-1997<br />
* København &#187; Stockholm C: 14-Aug-1997 (DSB train)<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: Jun-1997 (round-trip)<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: Mar-1997 (round-trip)<br />
* Budapest, Hungary &rarr; Ljubljana, Slovenia: 8-Nov-1996<br />
* Budapest, Hungary &rarr; Slovakia: 18-Aug-1995 (round-trip)<br />
* Budapest, Hungary &rarr; Vienna, Austria: 9-Feb-1995 (round-trip)<br />
* Moscow, Russia &rarr; Warsaw, Poland: Sep-1994<br />
* Moscow, Russia &rarr; Brest, Belarus: Aug-1994 (round-trip)<br />
* Moscow, Russia &rarr; Minsk, Belarus: Jul-1994 (round-trip)<br />
* Warsaw, Poland &#187; Moscow, Russia: Jun-1994<br />
* Warsaw, Poland &rarr; Vilnius, Lithuania &rarr; Riga, Latvia: (12-Jan-1994/??-Jan-1994) (round-trip)<br />
<br />
===Miscellaneous (South America)===<br />
* Arequipa, Peru &rarr; Lima, Peru: 1992<br />
* Arequipa, Peru &rarr; Iquique, Chile: (17-Jul-1992/20-Jul-1992) (round-trip)<br />
* Lima, Peru &rarr; Arequipa, Peru: 1992<br />
* Lima, Peru &rarr; La Paz, Bolivia: (19-May-1991/6-Jun-1991) (round-trip)<br />
* Lima, Peru &rarr; Quito, Ecuador: (29-Nov-1990/11-Dec-1990) (round-trip)<br />
* Lima, Peru &rarr; Quito, Ecuador: (6-Jul-1990/20-Jul-1990) (round-trip)<br />
<br />
==Flights==<br />
* Seattle, WA (SEA) ✈ Phoenix, AZ (PHX): March 2023 [RT]<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): February 2023 [RT]<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2022 [RT]<br />
* Seattle, WA (SEA) ✈ Milwaukee, WI (MKE): August 2022 [RT]<br />
* Kyiv, Ukraine (KBP) ✈ Frankfurt, Germany (FRA) ✈ Seattle, WA (SEA): December 2021<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD) ✈ Frankfurt, Germany (FRA) ✈ Kyiv, Ukraine (KBP): December 2021<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2021 [RT]<br />
* Memphis, TN (MEM) ✈ Atlanta, GA (ATL) ✈ Seattle, WA (SEA): June 2021<br />
* Seattle, WA (SEA) ✈ Salt Lake City, UT (SLC) ✈ Memphis, TN (MEM): June 2021<br />
* Seattle, WA (SEA) ✈ Milwaukee, WI (MKE): May 2021 [RT]<br />
* Tallinn, Estonia (TLL) ✈ Stockholm, Sweden (ARN) ✈ Newark, New Jersey (EWR) ✈ Seattle, WA (SEA): January 2020<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD) ✈ København, DK (CPH) ✈ Helsinki, Finland (HEL) ✈ Tallinn, Estonia (TLL): December 2019<br />
* Seattle, WA (SEA) ✈ San Francisco, CA (SFO): October 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Miami, FL (MIA): September 2019 [RT]<br />
* Seattle, WA (SEA) ✈ San Francisco, CA (SFO): September 2019 [RT]<br />
* Seattle, WA (SEA) ✈ San Jose, CA (SJC): August 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Denver, CO (DEN): May 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Charlotte, NC (CLT): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Santa Ana, CA (SNA): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ San Jose, CA (SJC): September 2018 [RT]<br />
* Budapest, Hungary (BUD) ✈ Brussels, Belgium (BRU) ✈ Newark, New Jersey (EWR) ✈ Seattle, WA (SEA): July 2018<br />
* Seattle, WA (SEA) ✈ Toronto, Canada (YYZ) ✈ Budapest, Hungary (BUD): June 2018<br />
* Seattle, WA (SEA) ✈ Reno, NV (RNO): May 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Reykjavík, Iceland (KEF): December 2017 [RT]<br />
* Seattle, WA (SEA) ✈ Kona, Hawaii (KOA): September 2017 [RT]<br />
* Seattle, WA (SEA) ✈ Salt Lake City, UT (SLC): August 2017 [RT]<br />
* Amsterdam, NL (AMS) ✈ Seattle, WA (SEA): November 2016<br />
* Lisbon, Portugal ✈ Amsterdam, NL (AMS): November 2016<br />
* Paris, FR (CDG) ✈ Lisbon, Portugal: November 2016<br />
* Seattle, WA (SEA) ✈ Paris, FR (CDG): November 2016<br />
* Seattle, WA (SEA) ✈ Portland, OR (PDX): November 2016 [RT]<br />
* Seattle, WA (SEA) ✈ Las Vegas, NV (LAS): June 2016 [RT]<br />
* Houston, TX (IAH) ✈ Seattle, WA (SEA): September 2015 [RT]<br />
* Houston, TX (IAH) ✈ San Francisco, CA (SFO): August 2015 [RT]<br />
* Houston, TX (IAH) ✈ Madison, WI (MSN): March 2015 [RT]<br />
* Houston, TX (IAH) ✈ Amsterdam, NL (AMS): March 2015 [RT]<br />
* Seattle, WA (SEA) ✈ Milwaukee (MKE): June 2011<br />
* Seattle, WA (SEA) ✈ Phoenix, AZ (PHX) ✈ Chicago, IL (ORD): October 2010 [RT]<br />
* Seattle, WA (SEA) ✈ Los Angeles, CA (LAX): December 2007 [RT]<br />
* København, DK (CPH) ✈ Seattle, WA (SEA): June 2006<br />
* London Heathrow, UK (LHR) ✈ København, DK (CPH): June 2006<br />
* Nice, FR (NCE) ✈ London Heathrow, UK (LHR): June 2006<br />
* København, DK (CPH) ✈ Nice, FR (NCE): February 2006<br />
* Washington Dulles (IAD) ✈ København, DK (CPH): August 2005<br />
* Pittsburgh, PA (PIT) ✈ Washington Dulles (IAD): August 2005<br />
* Portland, OR (PDX) ✈ Pittsburgh, PA (PIT): Summer 2004 [RT]<br />
* Portland, OR (PDX) ✈ Boston, MA: December 2002 [RT]<br />
* Eugene, OR ✈ Houston, TX (IAH): February 2002 [RT]<br />
* Seattle, WA (SEA) ✈ Portland, OR (PDX): January 2000<br />
* Amsterdam, NL (AMS) ✈ Seattle, WA (SEA): January 2000<br />
* Dublin, Ireland ✈ Amsterdam, NL (AMS): January 2000<br />
* Amsterdam (AMS) ✈ Dublin, Ireland: December 1999<br />
* Seattle, WA (SEA) ✈ Amsterdam, NL (AMS): December 1999<br />
* Portland, OR (PDX) ✈ Seattle, WA (SEA): December 1999<br />
* Chicago (ORD) ✈ Los Angeles (LAX): December 1997<br />
* Green Bay, WI (GRB) ✈ Chicago (ORD): December 1997<br />
* Chicago (ORD) ✈ Green Bay, WI (GRB): December 1997<br />
* Rome, Italy (FCO) ✈ Chicago, IL (ORD): December 1997<br />
* Trieste, Italy (TRS) ✈ Rome, Italy (FCO): December 1997<br />
* Houston, TX (IAH) ✈ Budapest, Hungary (BUD): July 1996<br />
* Narita, Japan ✈ Los Angeles (LAX) ✈ Houston, TX: June 1996<br />
* Narita, Japan ✈ Los Angeles (LAX) ✈ Houston, TX: March 1996 [RT]<br />
* Narita, Japan ✈ Taipei, Taiwan: December 1995 [RT]<br />
* Los Angeles, CA (LAX) ✈ Narita, Japan: October 1995<br />
* Houston, TX (IAH) ✈ Los Angeles (LAX): October 1995<br />
* Budapest, Hungary (BUD) ✈ Houston, TX (IAH): September 1995<br />
* Budapest, Hungary (BUD) ✈ Houston, TX (IAH): May 1995 [RT]<br />
* Paris, FR (CDG) ✈ Vienna, Austria: September 1993<br />
* Quito, Ecuador ✈ Caracas, Venezuela (CCS) ✈ Paris, France: 1993<br />
* Lima, Peru ✈ Tumbes, Peru: December 1992<br />
* Boston, MA ✈ Miami, FL ✈ Lima, Peru: <br />
* Amsterdam, NL (AMS) ✈ Chicago, IL (ORD): <br />
* Boston, MA ✈ Amsterdam, NL (AMS):<br />
<br />
== Individual Places ==<br />
=== Ireland ===<br />
* Dublin<br />
** '''Dublin''' (Baile &Aacute;tha Cliath)<br />
* Kildare<br />
** Naas<br />
* Laois<br />
* Carlow<br />
** Carlow (Ceatharlach)<br />
** Royal Oak<br />
* Kilkenny<br />
** '''Kilkenny''' (Cill Chainnigh)<br />
** Callan<br />
* Tipperary<br />
** Glenbower<br />
** Clonmel (Cluain Meala)<br />
** Cahir<br />
** Burncourt<br />
* Cork<br />
** Fermoy<br />
** '''Cork''' (Corcaigh)<br />
** Fota<br />
** Cobh (An C&oacute;bh)<br />
** '''Blarney'''<br />
** Macroom<br />
** Ballyvourney<br />
* Kerry<br />
** ''Derrynasaggart Mts''<br />
** Poulgorm Br<br />
** '''Killarney''' (Cill Airne)<br />
** Farranfore<br />
* Limerick<br />
** Abbeyfeale<br />
** ''Mullaghareirk Mts''<br />
** Newcastle West<br />
** Croagh<br />
** '''Limerick''' (Luimneach)<br />
* Clare<br />
** Bunratty<br />
** Ennis (Inis)<br />
** Ennistymon<br />
** Liscannor<br />
** ''Cliffs of Moher''<br />
** Doolin<br />
** Lisdoonvarna<br />
** Ballyvaughan<br />
** Bealaclugga<br />
** Burren<br />
* Galway<br />
** Kinvarra<br />
** Ballinderreen<br />
** Oranmore<br />
** '''Galway''' (Gaillimh)<br />
** Claregalway<br />
** Tuam<br />
* Mayo<br />
** Claremorris<br />
** Cloonfallagh<br />
** Charlestown<br />
* Sligo<br />
** Curry<br />
** Tubbercurry<br />
** Collooney<br />
** '''Sligo''' (Sligeach)<br />
** ''Dartry Mts''<br />
* Leitrim<br />
* Donegal<br />
** Bundoran<br />
** Ballyshannon<br />
** Donegal (D&uacute;n na nGall)<br />
** Ballybofey<br />
** Clady<br />
* Tyrone<br />
** '''Strabane''' (Northern Ireland)<br />
* Londonderry<br />
** Derry (Londonderry)<br />
** Eglinton<br />
** Ballykelly<br />
** Limavady<br />
** Coleraine<br />
* Antrim<br />
** Derrykelghan<br />
** Moss-side<br />
** Ballycastle<br />
** ''Antrim Hills''<br />
** Ballintoy<br />
** ''Carrick-a-Rede Rope Bridge''<br />
** ''Giant's Causeway''<br />
** Craignamaddy<br />
** Ballymoney<br />
** Ballymena<br />
** Antrim<br />
** ''Lough Neagh'' (lake)<br />
** Dunadry<br />
** Newtownabbey<br />
** '''Belfast'''<br />
* Down<br />
** Lisburn<br />
** Banbridge<br />
* Armagh<br />
** Newry<br />
* Louth<br />
** Dundalk (D&uacute;n Dealgan)<br />
** Dunleer<br />
** Drogheda (Droichead &Aacute;tha)<br />
* Meath<br />
** Julianstown<br />
* Dublin<br />
** Balbriggan<br />
** Swords<br />
<br />
[[Category:World Travels]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Docker&diff=8264Docker2023-03-03T21:40:17Z<p>Christoph: /* HEALTHCHECK */</p>
<hr />
<div>'''Docker''' is an open-source project that automates the deployment of applications inside software containers. A quote describing its features, from the Docker web page:<br />
:Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.<ref>https://www.docker.com/what-docker</ref><br />
<br />
==Introduction==<br />
<br />
''Note: The following is based on content found on the official [https://www.docker.com/what-container Docker website], [[:wikipedia:Docker (software)|Wikipedia]], and various other locations.''<br />
<br />
A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings, for example differences between development and staging environments and help reduce conflicts between teams running different software on the same infrastructure.<br />
<br />
; Lightweight : Docker containers running on a single machine share that machine's operating system kernel; they start instantly and use less compute and RAM. Images are constructed from filesystem layers and share common files. This minimizes disk usage and image downloads are much faster.<br />
; Standard : Docker containers are based on open standards and run on all major Linux distributions, Microsoft Windows, and on any infrastructure including VMs, bare-metal and in the cloud.<br />
; Secure : Docker containers isolate applications from one another and from the underlying infrastructure. Docker provides the strongest default isolation to limit app issues to a single container instead of the entire machine.<br />
<br />
As actions are done to a Docker base image, union file-system layers are created and documented, such that each layer fully describes how to recreate an action. This strategy enables Docker's lightweight images, as only layer updates need to be propagated (compared to full VMs, for example).<br />
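<br />
As a quick illustration (a sketch, reusing the <code>centos:latest</code> image used throughout this article), each instruction in a <code>Dockerfile</code> produces one recorded layer, which <code>docker history</code> makes visible:<br />
<pre><br />
$ cat << EOF > Dockerfile<br />
FROM centos:latest<br />
RUN echo "layer one" > /tmp/one.txt<br />
RUN echo "layer two" > /tmp/two.txt<br />
EOF<br />
$ docker build -t layer-demo .<br />
$ docker history layer-demo   # one row per instruction; unchanged layers are reused from cache on rebuilds<br />
</pre><br />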
<br />
Building on top of facilities provided by the Linux kernel (primarily cgroups and namespaces), a Docker container, unlike a virtual machine, does not require or include a separate operating system. Instead, it relies on the kernel's functionality and uses resource isolation for CPU and memory, and separate namespaces to isolate the application's view of the operating system. Docker accesses the Linux kernel's virtualization features directly using the <code>libcontainer</code> library (written in the Go programming language).<br />
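<br />
A quick way to see this isolation in action (a sketch; any small image works): inside its own PID namespace, the container's first process sees itself as PID 1, even though the host kernel tracks it as an ordinary numbered process:<br />
<pre><br />
$ docker run --rm centos:latest /bin/sh -c 'echo $$'<br />
1<br />
</pre><br />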
<br />
===Comparing Containers and Virtual Machines===<br />
<br />
Containers and virtual machines have similar resource isolation and allocation benefits, but function differently because containers virtualize the operating system instead of hardware. Containers are more portable and efficient.<br />
<br />
; Virtual Machines : Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, one or more apps, necessary binaries and libraries - taking up tens of GBs. VMs can also be slow to boot.<br />
; Containers : Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space. Containers take up less space than VMs (container images are typically tens of MBs in size), and start almost instantly.<br />
<br />
===Components===<br />
<br />
The Docker software as a service offering consists of three components:<br />
<br />
; Software : The Docker daemon, called "<code>dockerd</code>", is a persistent process that manages Docker containers and handles container objects. The daemon listens for API requests sent by the Docker Engine API. The Docker client, which identifies itself as "<code>docker</code>", allows users to interact with Docker through the CLI. It uses the Docker REST API to communicate with one or more Docker daemons.<br />
; Objects : Docker objects refer to different entities used to assemble an application in Docker. The main Docker objects are images, containers, and services.<br />
:* A Docker container is a standardized, encapsulated environment that runs applications. A container is managed using the Docker API or CLI.<br />
:* A Docker image is a read-only template used to build containers. Images are used to store and ship applications.<br />
:* A Docker service allows containers to be scaled across multiple Docker daemons. The result is known as a "swarm", cooperating daemons that communicate through the Docker API.<br />
; Registries : A Docker registry is a repository for Docker images. Docker clients connect to registries to download ("pull") images for use or upload ("push") images that they have built. Registries can be public or private. Two main public registries are Docker Hub and Docker Cloud. Docker Hub is the default registry where Docker looks for images.<br />
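<br />
For example, a typical registry round-trip looks like the following sketch (<code>docker.example.com</code> is a placeholder for a private registry, which would also require a <code>docker login</code> against that host):<br />
<pre><br />
$ docker pull centos:latest                            # pull from Docker Hub, the default registry<br />
$ docker tag centos:latest docker.example.com/centos:latest<br />
$ docker push docker.example.com/centos:latest         # push to the private registry<br />
</pre><br />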
<br />
==Docker commands==<br />
<br />
I will provide detailed examples on all of the following commands throughout this article.<br />
<br />
; Basics<br />
<br />
The following are the most common Docker commands (i.e., the ones you will most likely use the most day-to-day):<br />
<br />
* Show all running containers:<br />
$ docker ps<br />
* Show all containers (including stopped and failed ones):<br />
$ docker ps -a<br />
* Show all images in your local repository:<br />
$ docker images<br />
* Create an image based on the instructions in a <code>Dockerfile</code>:<br />
$ docker build<br />
* Start a container from an image (either from your local repository or from a remote repository {e.g., Docker Hub}):<br />
$ docker run<br />
* Remove/delete all ''stopped''/''failed'' containers (leaves running containers alone):<br />
$ docker rm $(docker ps -a -q)<br />
<br />
===Container commands===<br />
<br />
; Container lifecycle<br />
<br />
* Create a container but do not start it:<br />
$ docker create<br />
* Rename a container:<br />
$ docker rename<br />
* Create ''and'' start a container in one operation:<br />
$ docker run<br />
* Delete a container:<br />
$ docker rm<br />
* Update a container's resource limits:<br />
$ docker update<br />
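<br />
For example, a minimal sketch of capping a running container (named, say, <code>webserver</code>) to one CPU and 512 MB of RAM (if <code>--memory-swap</code> was previously set, it must be raised alongside <code>--memory</code>):<br />
 $ docker update --cpus 1 --memory 512m --memory-swap 1g webserver<br />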
<br />
; Starting and stopping containers<br />
<br />
* Start a container:<br />
$ docker start<br />
* Stop a running container:<br />
$ docker stop<br />
* Stop and then start a container:<br />
$ docker restart<br />
* Pause a running container ("freeze" it in place):<br />
$ docker pause<br />
* Un-pause a paused container:<br />
$ docker unpause<br />
* Attach/connect to a running container:<br />
$ docker attach<br />
* Block until running container stops (and print exit code):<br />
$ docker wait<br />
* Send <code>SIGKILL</code> to a running container:<br />
$ docker kill<br />
<br />
; Information<br />
<br />
* Show all ''running'' containers:<br />
$ docker ps<br />
* Get the logs for a given container:<br />
$ docker logs<br />
* Get all of the metadata about a container (e.g., IP address, etc.):<br />
$ docker inspect<br />
* Get real-time events from Docker Engine (e.g., start/stop containers, attach, create, etc.):<br />
$ docker events<br />
* Get the public-facing ports of a given container:<br />
$ docker port<br />
* Show running processes in a given container:<br />
$ docker top<br />
* Show a given container's resource usage statistics:<br />
$ docker stats<br />
* Show changed files in the container's filesystem (i.e., those changed from the original base image):<br />
$ docker diff<br />
<br />
; Miscellaneous<br />
<br />
* Get the environment variables for a given container:<br />
$ docker run ubuntu env<br />
* IP address of host machine:<br />
$ ip -4 -o addr show eth0<br />
2: eth0 inet 10.0.0.166/23<br />
* IP address of a container:<br />
$ docker run ubuntu ip -4 -o addr show eth0<br />
2: eth0 inet 172.17.0.2/16<br />
<br />
===Image commands===<br />
<br />
; Lifecycle<br />
* Show all images in your local repository:<br />
$ docker images<br />
* Create an image from a tarball:<br />
$ docker import<br />
* Create an image from a <code>Dockerfile</code><br />
$ docker build<br />
* Create an image from a container (note: it will pause the container, if it is running, during the commit process):<br />
$ docker commit<br />
* Remove/delete an image:<br />
$ docker rmi<br />
* Load an image from a tarball as STDIN (including images and tags):<br />
$ docker load<br />
* Save an image to a tarball (streamed to STDOUT with all parent layers, tags, and versions):<br />
$ docker save<br />
<br />
; Info<br />
<br />
* Show the history of an image:<br />
$ docker history<br />
* Tag an image:<br />
$ docker tag<br />
<br />
==Dockerfile directives==<br />
<br />
=== USER ===<br />
<pre><br />
$ cat << EOF > Dockerfile<br />
# Non-privileged user entry<br />
FROM centos:latest<br />
MAINTAINER xtof@example.com<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
USER xtof<br />
EOF<br />
</pre><br />
''Note: The use of <code>MAINTAINER</code> has been deprecated in newer versions of Docker. You should use <code>LABEL</code> instead, as it is much more flexible and its key/values show up in <code>docker inspect</code>. From here forward, I will only use <code>LABEL</code>.''<br />
<br />
$ docker build -t centos7/nonroot:v1 .<br />
$ docker exec -it <container_name> /bin/bash<br />
<br />
We are user "xtof" and are unable to become root. The workaround (i.e., how to become root) is like so:<br />
<br />
$ docker exec -u 0 -it <container_name> /bin/bash<br />
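<br />
A quick sanity check of which user a container runs as (using the image built above):<br />
 $ docker run --rm centos7/nonroot:v1 whoami<br />
 xtof<br />
 $ docker run --rm -u 0 centos7/nonroot:v1 whoami<br />
 root<br />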
<br />
''NOTE: For the remainder of this section, I will omit the <code>$ cat << EOF > Dockerfile</code> part in the examples for brevity.''<br />
<br />
=== RUN ===<br />
<br />
Some notes on the order of execution:<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
USER xtof<br />
<br />
RUN echo "export PATH=/path/to/my/app:$PATH" >> /etc/bashrc<br />
</pre><br />
<br />
$ docker build -t centos7/config:v1 .<br />
...<br />
/bin/sh: /etc/bashrc: Permission denied<br />
<br />
The order of execution matters! Prior to the directive <code>USER xtof</code>, the user was root. After that directive, the user is now xtof, who does not have super-user privileges. Move the <code>RUN echo ...</code> directive to before the <code>USER xtof</code> directive for a successful build.<br />
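<br />
For reference, the corrected order looks like this:<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
RUN echo "export PATH=/path/to/my/app:$PATH" >> /etc/bashrc<br />
<br />
USER xtof<br />
</pre><br />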
<br />
=== ENV ===<br />
''Note: The following is a '''terrible''' way of building a container. I am purposely doing it this way so I can show you a much better way later (see below).''<br />
<br />
* Build a CentOS 7 Docker image with Java 8 installed:<br />
<pre><br />
# SEE: https://gist.github.com/P7h/9741922 for various Java versions<br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN yum update -y<br />
RUN yum install -y net-tools wget<br />
<br />
RUN echo "SETTING UP JAVA"<br />
# The tarball method:<br />
#RUN cd ~ && wget --no-cookies --no-check-certificate \<br />
# --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \<br />
# "http://download.oracle.com/otn-pub/java/jdk/8u91-b14/jdk-8u91-linux-x64.tar.gz"<br />
#RUN tar xzvf jdk-8u91-linux-x64.tar.gz<br />
#RUN mv jdk1.8.0_91 /opt<br />
#ENV JAVA_HOME /opt/jdk1.8.0_91/<br />
<br />
# The rpm method:<br />
RUN cd ~ && wget --no-cookies --no-check-certificate \<br />
--header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \<br />
"http://download.oracle.com/otn-pub/java/jdk/8u161-b12/2f38c3b165be4555a1fa6e98c45e0808/jdk-8u161-linux-x64.rpm"<br />
RUN yum localinstall -y /root/jdk-8u161-linux-x64.rpm<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
USER xtof<br />
<br />
# User specific environment variable<br />
RUN cd ~ && echo "export JAVA_HOME=/usr/java/jdk1.8.0_161/jre" >> ~/.bashrc<br />
# Global (system-wide) environment variable<br />
ENV JAVA_BIN /usr/java/jdk1.8.0_161/jre/bin<br />
</pre><br />
<br />
$ docker build -t centos7/java8:v1 .<br />
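<br />
A quick check that the <code>ENV</code> variable is baked into the image (unlike the <code>~/.bashrc</code> export, which only applies to interactive bash shells for xtof):<br />
 $ docker run --rm centos7/java8:v1 /bin/bash -c 'echo $JAVA_BIN'<br />
 /usr/java/jdk1.8.0_161/jre/bin<br />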
<br />
=== CMD vs. RUN ===<br />
<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
CMD ["echo", "Hello from within my container"]<br />
</pre><br />
<br />
The <code>CMD</code> directive ''only'' executes when the container is started, whereas the <code>RUN</code> directive is executed during the build of the image.<br />
<br />
$ docker build -t centos7/echo:v1 .<br />
$ docker run centos7/echo:v1<br />
Hello from within my container<br />
<br />
The container starts, echoes out that message, then exits.<br />
<br />
=== ENTRYPOINT ===<br />
<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
ENTRYPOINT "This command will display this message on EVERY container that is run from it"<br />
</pre><br />
<br />
$ docker build -t centos7/entry:v1 .<br />
$ docker run centos7/entry:v1<br />
This command will display this message on EVERY container that is run from it<br />
$ docker run centos7/entry:v1 /bin/echo "Can you see me?"<br />
This command will display this message on EVERY container that is run from it<br />
$ docker run centos7/echo:v1 /bin/echo "Can you see me?"<br />
Can you see me?<br />
<br />
Note the difference.<br />
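<br />
The two directives also combine: with the exec form, <code>CMD</code> supplies ''default'' arguments to <code>ENTRYPOINT</code>, and any arguments passed to <code>`docker run`</code> replace those defaults. A minimal sketch:<br />
<pre><br />
FROM centos:latest<br />
ENTRYPOINT ["/bin/echo"]<br />
CMD ["Hello (default argument)"]<br />
</pre><br />
<br />
 $ docker build -t centos7/combo:v1 .<br />
 $ docker run centos7/combo:v1<br />
 Hello (default argument)<br />
 $ docker run centos7/combo:v1 "Can you see me?"<br />
 Can you see me?<br />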
<br />
=== EXPOSE ===<br />
<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN yum update -y<br />
RUN yum install -y httpd net-tools<br />
<br />
RUN echo "This is a custom index file built during the image creation" > /var/www/html/index.html<br />
<br />
ENTRYPOINT apachectl -DFOREGROUND # BAD WAY TO DO THIS!<br />
</pre><br />
<br />
$ docker build -t centos7/apache:v1 .<br />
$ docker run -d --name webserver centos7/apache:v1<br />
$ docker exec webserver /bin/cat /var/www/html/index.html<br />
This is a custom index file built during the image creation<br />
$ docker inspect webserver -f '<nowiki>{{.NetworkSettings.IPAddress}}</nowiki>' # => 172.17.0.6<br />
#~OR~<br />
$ docker inspect webserver | jq -crM '.[] | .NetworkSettings.IPAddress' # => 172.17.0.6<br />
$ curl 172.17.0.6<br />
This is a custom index file built during the image creation<br />
$ curl -sI 172.17.0.6 | awk '/^HTTP|^Server/{print}'<br />
HTTP/1.1 200 OK<br />
Server: Apache/2.4.6 (CentOS)<br />
$ time docker stop webserver<br />
real 0m10.275s # <- notice how long it took to stop the container<br />
user 0m0.008s<br />
sys 0m0.000s<br />
$ docker rm webserver<br />
<br />
It took ~10 seconds to stop the above container. This is because of the way we are (incorrectly) using <code>ENTRYPOINT</code>: the shell form wraps the command in <code>/bin/sh -c</code>, so the <code>SIGTERM</code> sent by <code>`docker stop webserver`</code> never reaches the Apache process, and Docker only kills the container after its 10-second grace period times out. A much better method is shown below, which ''will'' exit gracefully and in less than 300 ms.<br />
<br />
* Expose ports from the CLI<br />
$ docker run -d --name webserver -p 8080:80 centos7/apache:v1<br />
$ curl localhost:8080<br />
This is a custom index file built during the image creation<br />
$ docker stop webserver && docker rm webserver<br />
<br />
* Explicitly expose a port in the Docker image:<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN yum update -y && \<br />
yum install -y httpd net-tools && \<br />
yum autoremove -y && \<br />
echo "This is a custom index file built during the image creation" > /var/www/html/index.html<br />
<br />
EXPOSE 80<br />
<br />
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]<br />
</pre><br />
<br />
$ docker build -t centos7/apache:v1 .<br />
$ docker run -d --rm --name webserver -P centos7/apache:v1<br />
$ docker container ls --format '<nowiki>{{.Names}} {{.Ports}}</nowiki>'<br />
webserver 0.0.0.0:32769->80/tcp<br />
#~OR~<br />
$ docker port webserver | cut -d: -f2<br />
32769<br />
#~OR~<br />
$ docker inspect webserver | jq -crM '[.[] | .NetworkSettings.Ports."80/tcp"[] | .HostPort] | .[]'<br />
32769<br />
$ curl localhost:32769<br />
This is a custom index file built during the image creation<br />
$ time docker stop webserver<br />
real 0m0.283s<br />
user 0m0.004s<br />
sys 0m0.008s<br />
<br />
Note that I passed <code>--rm</code> to the <code>`docker run`</code> command so that the container will be removed when I stop the container. Also, note how much faster the container stopped (~300ms vs. 10 seconds above).<br />
<br />
=== HEALTHCHECK ===<br />
<br />
The <code>HEALTHCHECK</code> instruction tells Docker how to test a container to ensure it is still working. This can detect cases such as a web server that is stuck in an infinite loop and unable to handle new connections, even though the server process is still running.<br />
<br />
For example, we have a <code>Dockerfile</code> to define a simple webapp:<br />
<pre><br />
$ cat << EOF > Dockerfile<br />
FROM nginx:1.13.1<br />
<br />
RUN apt-get update \<br />
&& apt-get install -y curl \<br />
&& rm -rf /var/lib/apt/lists/*<br />
<br />
HEALTHCHECK --interval=15s --timeout=3s \<br />
CMD curl -fs http://localhost:80/ || exit 1<br />
EOF<br />
</pre><br />
This will check every 15 seconds that the web server is able to serve the site's main page within three seconds. The command's exit status indicates the health status of the container.<br />
<br />
The possible values are:<br />
<br />
* <code>0: success</code> - the container is healthy and ready for use<br />
* <code>1: unhealthy</code> - the container is not working correctly<br />
* <code>2: reserved</code> - do not use this exit code<br />
<br />
Then use Docker to build an image:<br />
<pre><br />
$ docker build -t healthcheck:v1 .<br />
</pre><br />
<br />
And run a container using this image:<br />
<pre><br />
$ docker run -d --name healthcheck-demo -p 80:80 healthcheck:v1<br />
</pre><br />
<br />
Then, check the status of the container:<br />
<pre><br />
$ docker ps<br />
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES<br />
9d8cf6d698c7 healthcheck:v1 "nginx -g 'daemon ..." 3 seconds ago Up 2 seconds (health: starting) 0.0.0.0:80->80/tcp healthcheck-demo<br />
</pre><br />
<br />
At the beginning, the status of container is <code>(health: starting)</code>; after a while, it changes to be healthy:<br />
<pre><br />
$ docker ps <br />
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES<br />
9d8cf6d698c7 healthcheck:v1 "nginx -g 'daemon ..." 2 minutes ago Up 2 minutes (healthy) 0.0.0.0:80->80/tcp healthcheck-demo<br />
</pre><br />
<br />
It takes a number of ''consecutive'' failures of the health check (set with the <code>--retries</code> option; the default is 3) for the container to be considered unhealthy.<br />
<br />
You can use your own script to replace the command <code>curl -fs <nowiki>http://localhost:80/</nowiki> || exit 1</code>.<br />
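<br />
A minimal sketch of such a script (the path and filename are hypothetical), copied into the image and referenced from <code>HEALTHCHECK</code>:<br />
<pre><br />
$ cat << EOF > healthcheck.sh<br />
#!/bin/sh<br />
# Exit 0 (healthy) if the main page is served, 1 (unhealthy) otherwise<br />
curl -fs http://localhost:80/ || exit 1<br />
EOF<br />
</pre><br />
<pre><br />
COPY healthcheck.sh /usr/local/bin/healthcheck.sh<br />
RUN chmod +x /usr/local/bin/healthcheck.sh<br />
HEALTHCHECK --interval=15s --timeout=3s CMD /usr/local/bin/healthcheck.sh<br />
</pre><br />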
<br />
What is more, the STDOUT and STDERR of your script can be fetched with the <code>docker inspect</code> command:<br />
<pre><br />
$ docker inspect --format '{{json .State.Health}}' healthcheck-demo | python -m json.tool<br />
{<br />
"FailingStreak": 0,<br />
"Log": [<br />
{<br />
"End": "2023-03-02T19:39:58.379906565+08:00",<br />
"ExitCode": 0,<br />
"Output": " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 612 100 612 0 0 97297 0 --:--:-- --:--:-- --:--:-- 99k\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n",<br />
"Start": "2023-03-02T19:39:58.229550952+08:00"<br />
}<br />
],<br />
"Status": "healthy"<br />
}<br />
</pre><br />
<br />
Note that you can filter containers by health status with:<br />
<pre><br />
$ docker ps -a --filter=health=unhealthy<br />
</pre><br />
<br />
Or, even:<br />
<pre><br />
$ docker ps -a -f status=dead<br />
</pre><br />
<br />
==Container volume management==<br />
<br />
$ docker run -it --name voltest -v /mydata centos:latest /bin/bash<br />
[root@bffdcb88c485 /]# df -h<br />
Filesystem Size Used Avail Use% Mounted on<br />
none 213G 173G 30G 86% /<br />
tmpfs 7.8G 0 7.8G 0% /dev<br />
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup<br />
/dev/mapper/ubuntu--vg-root 213G 173G 30G 86% /mydata<br />
shm 64M 0 64M 0% /dev/shm<br />
tmpfs 7.8G 0 7.8G 0% /sys/firmware<br />
[root@bffdcb88c485 /]# echo "testing" >/mydata/mytext.txt<br />
$ docker inspect voltest | jq -crM '.[] | .Mounts[].Source'<br />
/var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data<br />
$ sudo cat /var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data/mytext.txt<br />
testing<br />
$ sudo /bin/bash -c \<br />
"echo 'this is from the host OS' >/var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data/host.txt"<br />
[root@bffdcb88c485 /]# cat /mydata/host.txt <br />
this is from the host OS<br />
<br />
* Cleanup<br />
$ docker rm voltest<br />
$ docker volume rm 2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b<br />
<br />
* Mount host's current working directory inside container:<br />
$ echo "my config" >my.conf<br />
$ echo "my message" >message.txt<br />
$ echo "aerwr3adf" >app.bin<br />
$ chmod +x app.bin<br />
$ docker run -it --name voltest -v ${PWD}:/mydata centos:latest /bin/bash<br />
[root@f5f34ccb54fb /]# ls -l /mydata/<br />
total 24<br />
-rwxrwxr-x 1 1000 1000 10 Mar 8 19:29 app.bin<br />
-rw-rw-r-- 1 1000 1000 11 Mar 8 19:29 message.txt<br />
-rw-rw-r-- 1 1000 1000 10 Mar 8 19:29 my.conf<br />
[root@f5f34ccb54fb /]# touch /mydata/foobar<br />
$ ls -l ${PWD}<br />
total 24<br />
-rwxrwxr-x 1 xtof xtof 10 Mar 8 11:29 app.bin<br />
-rw-r--r-- 1 root root 0 Mar 8 11:36 foobar<br />
-rw-rw-r-- 1 xtof xtof 11 Mar 8 11:29 message.txt<br />
-rw-rw-r-- 1 xtof xtof 10 Mar 8 11:29 my.conf<br />
$ docker rm voltest<br />
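<br />
* A third option is a ''named'' volume, which avoids the unwieldy hash-style volume names shown above (a short sketch):<br />
 $ docker volume create mydata<br />
 $ docker run --rm -v mydata:/mydata centos:latest /bin/bash -c 'echo "kept between runs" > /mydata/note.txt'<br />
 $ docker run --rm -v mydata:/mydata centos:latest cat /mydata/note.txt<br />
 kept between runs<br />
 $ docker volume rm mydata<br />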
<br />
==Images==<br />
<br />
===Saving and loading images===<br />
<br />
$ docker pull centos:latest<br />
$ docker run -it centos:latest /bin/bash<br />
[root@29fad368048c /]# yum update -y<br />
[root@29fad368048c /]# echo xtof >/root/built_by.txt<br />
$ docker commit reverent_elion centos:xtof<br />
$ docker rm reverent_elion<br />
$ docker images<br />
REPOSITORY TAG IMAGE ID CREATED SIZE<br />
centos xtof e0c8bd35ba50 3 seconds ago 463MB<br />
centos latest 980e0e4c79ec 1 minute ago 197MB<br />
$ docker history centos:xtof<br />
IMAGE CREATED CREATED BY SIZE<br />
e0c8bd35ba50 27 seconds ago /bin/bash 266MB <br />
980e0e4c79ec 18 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B <br />
<missing> 18 months ago /bin/sh -c #(nop) LABEL name=CentOS Base ... 0B <br />
<missing> 18 months ago /bin/sh -c #(nop) ADD file:e336b45186086f7... 197MB <br />
<missing> 18 months ago /bin/sh -c #(nop) MAINTAINER <nowiki>https://gith...</nowiki> 0B<br />
<br />
* Save the original <code>centos:latest</code> image we pulled from Docker Hub:<br />
$ docker save --output centos-latest.tar centos:latest<br />
<br />
Note that the above command essentially tars up the contents of the image found in <code>/var/lib/docker/image</code> directory.<br />
<br />
$ tar tvf centos-latest.tar <br />
-rw-r--r-- 0/0 2309 2016-09-06 14:10 980e0e4c79ec933406e467a296ce3b86685e6b42eed2f873745e6a91d718e37a.json<br />
drwxr-xr-x 0/0 0 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/<br />
-rw-r--r-- 0/0 3 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/VERSION<br />
-rw-r--r-- 0/0 1391 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/json<br />
-rw-r--r-- 0/0 204305920 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/layer.tar<br />
-rw-r--r-- 0/0 202 1969-12-31 16:00 manifest.json<br />
-rw-r--r-- 0/0 89 1969-12-31 16:00 repositories<br />
<br />
* Save space by compressing the tar file:<br />
$ gzip centos-latest.tar # .tar -> 195M; .tar.gz -> 68M<br />
<br />
* Delete the original <code>centos:latest</code> image:<br />
$ docker rmi centos:latest<br />
<br />
* Restore (or load) the image back to our local repository:<br />
$ docker load --input centos-latest.tar.gz<br />
<br />
===Tagging images===<br />
<br />
* List our current images:<br />
$ docker images<br />
REPOSITORY TAG IMAGE ID CREATED SIZE<br />
centos xtof e0c8bd35ba50 About an hour ago 463MB<br />
<br />
* Tag the above image:<br />
$ docker tag e0c8bd35ba50 xtof/centos:v1<br />
$ docker images<br />
REPOSITORY TAG IMAGE ID CREATED SIZE<br />
centos xtof e0c8bd35ba50 About an hour ago 463MB<br />
xtof/centos v1 e0c8bd35ba50 About an hour ago 463MB<br />
<br />
Note that we did not create a new image, we just created a new tag of the same/original <code>centos:xtof</code> image.<br />
<br />
Note: The maximum number of characters in a tag is 128.<br />
<br />
==Docker networking==<br />
<br />
===Default networks===<br />
$ ip addr show docker0<br />
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default <br />
link/ether 02:42:c0:75:70:13 brd ff:ff:ff:ff:ff:ff<br />
inet 172.17.0.1/16 scope global docker0<br />
valid_lft forever preferred_lft forever<br />
inet6 fe80::42:c0ff:fe75:7013/64 scope link <br />
valid_lft forever preferred_lft forever<br />
#~OR~<br />
$ ifconfig docker0<br />
docker0 Link encap:Ethernet HWaddr 02:42:c0:75:70:13 <br />
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0<br />
inet6 addr: fe80::42:c0ff:fe75:7013/64 Scope:Link<br />
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1<br />
RX packets:420654 errors:0 dropped:0 overruns:0 frame:0<br />
TX packets:1162975 errors:0 dropped:0 overruns:0 carrier:0<br />
collisions:0 txqueuelen:0 <br />
RX bytes:85851647 (85.8 MB) TX bytes:1196235716 (1.1 GB)<br />
<br />
$ docker network inspect bridge | jq '.[] | .IPAM.Config[].Subnet'<br />
"172.17.0.0/16"<br />
So, the usable range of IP addresses in our 172.17.0.0/16 subnet is: 172.17.0.1 - 172.17.255.254<br />
<br />
$ docker network ls<br />
NETWORK ID NAME DRIVER SCOPE<br />
bf831059febc bridge bridge local<br />
266f6df5c44e host host local<br />
ce79e4043a20 none null local<br />
$ docker ps -q | wc -l<br />
#~OR~<br />
$ docker container ls --format '<nowiki>{{.Names}}</nowiki>' | wc -l<br />
4 # => 4 running containers<br />
$ docker network inspect bridge | jq '.[] | .Containers[].IPv4Address'<br />
"172.17.0.2/16"<br />
"172.17.0.5/16"<br />
"172.17.0.4/16"<br />
"172.17.0.3/16"<br />
The output from the last command lists the IP addresses of the 4 containers currently running on my host.<br />
<br />
===Custom networks===<br />
* Create a Docker network<br />
$ man docker-network-create # for details<br />
$ docker network create --subnet 10.1.0.0/16 --gateway 10.1.0.1 --ip-range=10.1.4.0/24 \<br />
--driver=bridge --label=host4network br04<br />
<br />
* Use the above network with a given container:<br />
$ docker run -it --name net-test --net br04 centos:latest /bin/bash<br />
<br />
* Assign a static IP to a given container in the above (user created) network:<br />
$ docker run -it --name net-test --net br04 --ip 10.1.4.100 centos:latest /bin/bash<br />
<br />
Note: You can ''only'' assign static IPs to user created networks (i.e., you ''cannot'' assign them to the default "bridge" network).<br />
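<br />
One practical benefit of user-defined networks over the default bridge is the built-in DNS, which resolves container names automatically. A quick sketch (reusing the <code>centos7/apache:v1</code> image built earlier):<br />
 $ docker run -d --name web01 --net br04 centos7/apache:v1<br />
 $ docker run --rm --net br04 centos:latest getent hosts web01   # prints web01's address on br04<br />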
<br />
==Monitoring==<br />
<br />
$ docker top <container_name><br />
$ docker stats <container_name><br />
<br />
===Logs===<br />
<br />
* Fetch logs of a given container:<br />
$ docker logs <container_name><br />
<br />
* Fetch logs of a given container prefixed with timestamps (UTC format by default):<br />
$ docker logs --timestamps <container_name><br />
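<br />
* Follow a container's log output (similar to <code>`tail -f`</code>), starting from the last 10 lines:<br />
 $ docker logs --follow --tail 10 <container_name><br />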
<br />
===Events===<br />
$ docker events<br />
$ docker events --since '1h'<br />
$ docker events --since '2018-03-08T16:00'<br />
$ docker events --filter event=attach<br />
$ docker events --filter event=destroy<br />
$ docker events --filter event=attach --filter event=die --filter event=stop<br />
<br />
==Cleanup==<br />
<br />
* Check local system disk usage:<br />
<pre><br />
$ docker system df<br />
TYPE TOTAL ACTIVE SIZE RECLAIMABLE<br />
Images 53 3 16.52GB 15.9GB (96%)<br />
Containers 3 1 438.9MB 0B (0%)<br />
Local Volumes 16 2 2.757GB 2.628GB (95%)<br />
Build Cache 0 0 0B 0B<br />
</pre><br />
<br />
Note: Use <code>docker system df --verbose</code> to get even more details.<br />
<br />
* Delete all stopped containers at once and reclaim the disk space they are using:<br />
$ docker container prune<br />
<br />
* Remove all containers (both the running ones and the stopped ones):<br />
<pre><br />
# Old method:<br />
$ docker rm -f $(docker ps -aq)<br />
# New method:<br />
$ docker container rm -f $(docker container ls -aq)<br />
</pre><br />
Note: It is often useful to use the <code>--rm</code> flag when running a container so that it is automatically removed when its PID 1 process is stopped, thus releasing unused disk space immediately.<br />
<br />
* Cleanup everything all at once ('''CAREFUL!'''):<br />
<pre><br />
$ docker system prune<br />
WARNING! This will remove:<br />
- all stopped containers<br />
- all networks not used by at least one container<br />
- all dangling images<br />
- all dangling build cache<br />
Are you sure you want to continue? [y/N]<br />
</pre><br />
<br />
==Examples==<br />
<br />
===Simple Nginx server===<br />
<br />
* Create an index.html file:<br />
<pre><br />
$ mkdir html<br />
$ cat << EOF >html/index.html<br />
Hello from Docker<br />
EOF<br />
</pre><br />
<br />
* Create a Dockerfile:<br />
<pre><br />
FROM nginx<br />
COPY html /usr/share/nginx/html<br />
</pre><br />
<br />
* Build the image:<br />
$ docker build -t test-nginx .<br />
<br />
* Start up container, using image built above:<br />
$ docker run --name check-nginx -d -p 8080:80 test-nginx<br />
<br />
* Check that it works:<br />
$ curl <nowiki>http://localhost:8080</nowiki><br />
Hello from Docker<br />
<br />
===Connecting two containers===<br />
<br />
In this example, we will start up a Postgres container and then start up another container and make a connection to the original Postgres container:<br />
<br />
$ docker pull postgres<br />
$ docker run --name test-postgres -e POSTGRES_PASSWORD=mypassword -d postgres<br />
$ docker run -it --rm --link test-postgres:postgres postgres psql -h postgres -U postgres<br />
<pre><br />
Password for user postgres:<br />
psql (11.0 (Debian 11.0-1.pgdg90+2))<br />
Type "help" for help.<br />
<br />
postgres=# SELECT 1;<br />
?column?<br />
----------<br />
1<br />
(1 row)<br />
<br />
postgres=# \q<br />
</pre><br />
<br />
Connection was successful!<br />
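<br />
Note that <code>--link</code> is a legacy feature; the same connection works on a user-defined network, where container names resolve via the built-in DNS (a sketch, with <code>pgnet</code> as an arbitrary network name):<br />
<pre><br />
$ docker network create pgnet<br />
$ docker run -d --name test-postgres --network pgnet -e POSTGRES_PASSWORD=mypassword postgres<br />
$ docker run -it --rm --network pgnet postgres psql -h test-postgres -U postgres<br />
</pre><br />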
<br />
===Support for various hardware platforms===<br />
<br />
NOTE: If your image is being created on an M1 chip (ARM64) but you want to execute the container on an AMD64 chip, then use <code>FROM --platform=linux/amd64</code> in your Docker image so it can be shipped anywhere. For example:<br />
<pre><br />
FROM node:current-alpine3.15<br />
#FROM --platform=linux/amd64 node:current-alpine3.15<br />
WORKDIR /app<br />
ADD . /app<br />
RUN npm install<br />
#RUN npm install express<br />
EXPOSE 3000<br />
CMD ["npm", "start"]<br />
</pre><br />
<br />
==Docker compose==<br />
<br />
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the [https://docs.docker.com/compose/overview/#features list of features].<br />
<br />
Using Compose is basically a three-step process:<br />
# Define your app's environment with a <code>Dockerfile</code> so it can be reproduced anywhere.<br />
# Define the services that make up your app in <code>docker-compose.yml</code> so they can be run together in an isolated environment.<br />
# Run <code>docker-compose up</code> and Compose starts and runs your entire app.<br />
<br />
===Basic example===<br />
<br />
''Note: This is based off of [https://docs.docker.com/compose/gettingstarted/ this article].''<br />
<br />
In this basic example, we will build a simple Python web application running on Docker Compose. The application uses the Flask framework and maintains a hit counter in Redis.<br />
<br />
''Note: This section assumes you already have Docker Engine and [https://docs.docker.com/compose/install/#install-compose Docker Compose] installed.''<br />
<br />
* Create a directory for the project:<br />
$ mkdir compose-test && cd $_<br />
<br />
* Create a file called <code>app.py</code> in your project directory and paste this in:<br />
<pre><br />
import time<br />
import redis<br />
from flask import Flask<br />
<br />
<br />
app = Flask(__name__)<br />
cache = redis.Redis(host='redis', port=6379)<br />
<br />
<br />
def get_hit_count():<br />
retries = 5<br />
while True:<br />
try:<br />
return cache.incr('hits')<br />
except redis.exceptions.ConnectionError as exc:<br />
if retries == 0:<br />
raise exc<br />
retries -= 1<br />
time.sleep(0.5)<br />
<br />
<br />
@app.route('/')<br />
def hello():<br />
count = get_hit_count()<br />
return 'Hello World! I have been seen {} times.\n'.format(count)<br />
<br />
if __name__ == "__main__":<br />
app.run(host="0.0.0.0", debug=True)<br />
</pre><br />
<br />
In this example, <code>redis</code> is the hostname of the redis container on the application's network. We use the default port for Redis: <code>6379</code>.<br />
<br />
* Create another file called <code>requirements.txt</code> in your project directory and paste this in:<br />
flask<br />
redis<br />
<br />
* Create a Dockerfile<br />
*: This Dockerfile will be used to build an image that contains all the dependencies the Python application requires, including Python itself.<br />
<pre><br />
FROM python:3.4-alpine<br />
ADD . /code<br />
WORKDIR /code<br />
RUN pip install -r requirements.txt<br />
CMD ["python", "app.py"]<br />
</pre><br />
<br />
* Create a file called <code>docker-compose.yml</code> in your project directory and paste the following:<br />
<pre><br />
version: '3'<br />
services:<br />
web:<br />
build: .<br />
ports:<br />
- "5000:5000"<br />
redis:<br />
image: "redis:alpine"<br />
</pre><br />
<br />
* Build and run this app with Docker Compose:<br />
$ docker-compose up<br />
<br />
Compose pulls a Redis image, builds an image for your code, and starts the services you defined. In this case, the code is statically copied into the image at build time.<br />
<br />
* Test the application:<br />
$ curl localhost:5000<br />
Hello World! I have been seen 1 times.<br />
<br />
$ for i in $(seq 1 10); do curl -s localhost:5000; done<br />
Hello World! I have been seen 2 times.<br />
Hello World! I have been seen 3 times.<br />
Hello World! I have been seen 4 times.<br />
Hello World! I have been seen 5 times.<br />
Hello World! I have been seen 6 times.<br />
Hello World! I have been seen 7 times.<br />
Hello World! I have been seen 8 times.<br />
Hello World! I have been seen 9 times.<br />
Hello World! I have been seen 10 times.<br />
Hello World! I have been seen 11 times.<br />
<br />
* List containers:<br />
<pre><br />
$ docker-compose ps<br />
Name Command State Ports <br />
-------------------------------------------------------------------------------------<br />
compose-test_redis_1 docker-entrypoint.sh redis ... Up 6379/tcp <br />
compose-test_web_1 python app.py Up 0.0.0.0:5000->5000/tcp<br />
</pre><br />
<br />
* Display the running processes:<br />
<pre><br />
$ docker-compose top<br />
compose-test_redis_1<br />
UID PID PPID C STIME TTY TIME CMD <br />
--------------------------------------------------------------------<br />
systemd+ 29401 29367 0 15:28 ? 00:00:00 redis-server <br />
<br />
compose-test_web_1<br />
UID PID PPID C STIME TTY TIME CMD <br />
--------------------------------------------------------------------------------<br />
root 29407 29373 0 15:28 ? 00:00:00 python app.py <br />
root 29545 29407 0 15:28 ? 00:00:00 /usr/local/bin/python app.py<br />
</pre><br />
<br />
* Shut down the app:<br />
$ Ctrl+C<br />
#~OR~<br />
$ docker-compose down<br />
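<br />
* Run the stack in the background instead, and rebuild images when the code or <code>Dockerfile</code> changes:<br />
 $ docker-compose up -d<br />
 $ docker-compose up -d --build<br />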
<br />
==Install docker==<br />
<br />
===Debian-based distros===<br />
<br />
; Ubuntu 16.04 (Xenial Xerus)<br />
''Note: For this install, I will be using Ubuntu 16.04 LTS (Xenial Xerus). Docker requires a 64-bit version of Ubuntu as well as a kernel version equal to or greater than 3.10. My system satisfies both requirements.''<br />
<br />
* Setup the docker repo to install from:<br />
$ sudo apt-get update -y<br />
$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D<br />
$ echo "deb <nowiki>https://apt.dockerproject.org/repo ubuntu-xenial main</nowiki>" | sudo tee /etc/apt/sources.list.d/docker.list<br />
$ sudo apt-get update -y<br />
<br />
Make sure you are about to install from the Docker repo instead of the default Ubuntu 16.04 repo:<br />
<br />
$ apt-cache policy docker-engine<br />
<br />
The output of the above command should look something like the following:<br />
<pre><br />
docker-engine:<br />
Installed: (none)<br />
Candidate: 17.05.0~ce-0~ubuntu-xenial<br />
Version table:<br />
17.05.0~ce-0~ubuntu-xenial 500<br />
500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages<br />
17.04.0~ce-0~ubuntu-xenial 500<br />
500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages<br />
...<br />
</pre><br />
<br />
* Install docker:<br />
$ sudo apt-get install -y docker-engine<br />
<br />
; Ubuntu 18.04 (Bionic Beaver)<br />
<br />
$ sudo apt update<br />
$ sudo apt install -y apt-transport-https ca-certificates curl software-properties-common<br />
$ curl -fsSL <nowiki>https://download.docker.com/linux/ubuntu/gpg</nowiki> | sudo apt-key add -<br />
$ sudo add-apt-repository "deb [arch=amd64] <nowiki>https://download.docker.com/linux/ubuntu</nowiki> $(lsb_release -cs) stable"<br />
$ sudo apt update<br />
$ apt-cache policy docker-ce<br />
<pre><br />
docker-ce:<br />
Installed: (none)<br />
Candidate: 5:18.09.0~3-0~ubuntu-bionic<br />
Version table:<br />
5:18.09.0~3-0~ubuntu-bionic 500<br />
500 <nowiki>https://download.docker.com/linux/ubuntu</nowiki> bionic/stable amd64 Packages<br />
</pre><br />
<br />
$ sudo apt install docker-ce -y<br />
$ sudo systemctl status docker<br />
<pre><br />
● docker.service - Docker Application Container Engine<br />
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)<br />
Active: active (running) since Tue 2018-12-04 13:40:36 PST; 4s ago<br />
Docs: https://docs.docker.com<br />
Main PID: 6134 (dockerd)<br />
Tasks: 16<br />
CGroup: /system.slice/docker.service<br />
└─6134 /usr/bin/dockerd -H unix://<br />
</pre><br />
<br />
===Red Hat-based distros===<br />
''Note: For this install, I will be using CentOS 7 (release 7.2.1511). Docker requires a 64-bit version of CentOS as well as a kernel version equal to or greater than 3.10. My system satisfies both requirements.''<br />
<br />
* Install Docker (the fast way):<br />
$ sudo yum update -y<br />
$ curl -fsSL <nowiki>https://get.docker.com/</nowiki> | sh<br />
<br />
* Install Docker (via a yum repo):<br />
$ sudo yum update -y<br />
$ sudo pip install docker-py<br />
$ cat << EOF > /etc/yum.repos.d/docker.repo<br />
[dockerrepo]<br />
name=Docker Repository<br />
baseurl=<nowiki>https://yum.dockerproject.org/repo/main/centos/7/</nowiki><br />
enabled=1<br />
gpgcheck=1<br />
gpgkey=<nowiki>https://yum.dockerproject.org/gpg</nowiki><br />
EOF<br />
$ sudo rpm -vv --import <nowiki>https://yum.dockerproject.org/gpg</nowiki><br />
$ sudo yum update -y<br />
$ sudo yum install docker-engine -y<br />
<br />
===Post-installation steps===<br />
* Check on the status of docker:<br />
$ sudo systemctl status docker<br />
<pre><br />
● docker.service - Docker Application Container Engine<br />
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)<br />
Active: active (running) since Tue 2016-07-12 12:31:08 PDT; 6s ago<br />
Docs: https://docs.docker.com<br />
Main PID: 3392 (docker)<br />
CGroup: /system.slice/docker.service<br />
├─3392 /usr/bin/docker daemon -H fd://<br />
└─3411 docker-containerd -l /var/run/docker/libcontainerd/docker-containerd.sock --runtime docker-runc --start-timeout 2m<br />
</pre><br />
<br />
* Make sure the docker service automatically starts after a machine reboot:<br />
$ sudo systemctl enable docker<br />
<br />
* Execute docker without <code>`sudo`</code>:<br />
$ sudo usermod -aG docker $(whoami)<br />
#~OR~<br />
$ sudo usermod -aG docker $USER<br />
Log out and log back in to use docker without <code>`sudo`</code>.<br />
<br />
* Check version of Docker installed:<br />
<pre><br />
$ docker version<br />
Client:<br />
Version: 17.05.0-ce<br />
API version: 1.29<br />
Go version: go1.7.5<br />
Git commit: 89658be<br />
Built: Thu May 4 22:10:54 2017<br />
OS/Arch: linux/amd64<br />
<br />
Server:<br />
Version: 17.05.0-ce<br />
API version: 1.29 (minimum version 1.12)<br />
Go version: go1.7.5<br />
Git commit: 89658be<br />
Built: Thu May 4 22:10:54 2017<br />
OS/Arch: linux/amd64<br />
Experimental: false<br />
</pre><br />
<br />
* Check that docker has been successfully installed and configured:<br />
$ docker run hello-world<br />
<pre><br />
...<br />
This message shows that your installation appears to be working correctly.<br />
...<br />
</pre><br />
<br />
As the above message shows, you now have a successful install of Docker on your machine and are ready to start building images and creating containers.<br />
<br />
==Miscellaneous==<br />
<br />
* Get the hostname of the host the Docker Engine is running on:<br />
$ docker info -f '<nowiki>{{ .Name }}</nowiki>'<br />
<br />
* Get the number of stopped containers:<br />
$ docker info --format '<nowiki>{{json .}}</nowiki>' | jq '.ContainersStopped'<br />
3<br />
<br />
* Get the number of images in the local registry:<br />
$ docker info --format '<nowiki>{{json .}}</nowiki>' | jq '.Images'<br />
92<br />
<br />
* Verify the Docker service is running:<br />
<pre><br />
$ curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock http://localhost/_ping<br />
OK<br />
</pre><br />
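<br />
Beyond the ping endpoint, the same Unix socket exposes the rest of the Docker Engine API. For example (a sketch; output depends on what is running on your host):<br />
$ curl -s --unix-socket /var/run/docker.sock <nowiki>http://localhost/version</nowiki> | jq '.Version'<br />
$ curl -s --unix-socket /var/run/docker.sock <nowiki>http://localhost/containers/json</nowiki> | jq '.[].Names'<br />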
<br />
* Show docker disk usage:<br />
<pre><br />
$ docker system df<br />
TYPE TOTAL ACTIVE SIZE RECLAIMABLE<br />
Images 84 11 25.01GB 20.44GB (81%)<br />
Containers 20 0 768.1MB 768.1MB (100%)<br />
Local Volumes 16 2 2.693GB 2.628GB (97%)<br />
Build Cache 0 0 0B 0B<br />
</pre><br />
<br />
* Get ''just'' the version of Docker installed:<br />
<pre><br />
$ docker version --format '{{.Server.Version}}'<br />
20.10.7<br />
$ docker version --format '{{.Server.Version}}' 2>/dev/null || docker -v | awk '{gsub(/,/, "", $3); print $3}'<br />
20.10.7<br />
</pre><br />
<br />
==Install your own Docker private registry==<br />
''Note: I will use CentOS 7 for this install and assume you already have docker and docker-compose installed (see above).''<br />
<br />
For this install, I will assume you have a domain name registered somewhere. I will use <code>docker.example.com</code> as my example domain. Replace anywhere you see that below with your actual domain name.<br />
<br />
* Install dependencies:<br />
$ yum install -y nginx # used for the registry endpoint<br />
$ yum install -y httpd-tools # for the htpasswd utility<br />
<br />
* Set up the docker registry directory structure:<br />
$ mkdir -p /opt/docker-registry/{data,nginx{/conf.d,/certs},log}<br />
$ cd /opt/docker-registry<br />
<br />
* Create a docker-compose file:<br />
$ vim docker-compose.yml # and add the following:<br />
<br />
<pre><br />
nginx:<br />
image: "nginx:1.9"<br />
ports:<br />
- 5043:443<br />
links:<br />
- registry:registry<br />
volumes:<br />
- ./log/nginx/:/var/log/nginx:rw<br />
- ./nginx/conf.d:/etc/nginx/conf.d:ro<br />
- ./nginx/certs:/etc/nginx/certs:ro<br />
registry:<br />
image: registry:2<br />
ports:<br />
- 127.0.0.1:5000:5000<br />
environment:<br />
REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data<br />
volumes:<br />
- ./data:/data<br />
</pre><br />
<br />
* Create an Nginx configuration file:<br />
$ vim /opt/docker-registry/nginx/conf.d/registry.conf # and add the following:<br />
<br />
<pre><br />
upstream docker-registry {<br />
server registry:5000;<br />
}<br />
<br />
server {<br />
listen 443;<br />
server_name docker.example.com;<br />
<br />
# SSL<br />
ssl on;<br />
ssl_certificate /etc/nginx/certs/docker.example.com.crt;<br />
ssl_certificate_key /etc/nginx/certs/docker.example.com.key;<br />
<br />
# disable any limits to avoid HTTP 413 for large image uploads<br />
client_max_body_size 0;<br />
<br />
# required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)<br />
chunked_transfer_encoding on;<br />
<br />
location /v2/ {<br />
# Do not allow connections from docker 1.5 and earlier<br />
# docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents<br />
if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {<br />
return 404;<br />
}<br />
<br />
proxy_pass http://docker-registry;<br />
proxy_set_header Host $http_host; # required for docker client's sake<br />
proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP<br />
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;<br />
proxy_set_header X-Forwarded-Proto $scheme;<br />
proxy_read_timeout 900;<br />
<br />
add_header 'Docker-Distribution-Api-Version:' 'registry/2.0' always;<br />
<br />
# To add basic authentication to v2 use auth_basic setting plus add_header<br />
auth_basic "Restricted access to Docker Registry";<br />
auth_basic_user_file /etc/nginx/conf.d/registry.htpasswd;<br />
}<br />
}<br />
</pre><br />
<br />
$ cd /opt/docker-registry/nginx/conf.d<br />
$ htpasswd -c registry.htpasswd <username> # replace <username> with your actual username<br />
$ htpasswd registry.htpasswd <username2> # [optional] add a 2nd user<br />
<br />
* Set up your own certificate signing authority (for use with SSL):<br />
<br />
$ cd /opt/docker-registry/nginx/certs<br />
<br />
* Generate a new root key:<br />
<br />
$ openssl genrsa -out docker-registry-CA.key 2048<br />
<br />
* Generate a root certificate (enter anything you like at the prompts):<br />
<br />
$ openssl req -x509 -new -nodes -key docker-registry-CA.key -days 3650 -out docker-registry-CA.crt<br />
<br />
Then generate a key for your server (this is the file referenced by <code>ssl_certificate_key</code> in the Nginx configuration above):<br />
<br />
$ openssl genrsa -out docker.example.com.key 2048<br />
<br />
Now we have to make a certificate signing request (CSR). After you type the following command, OpenSSL will prompt you to answer a few questions. Enter anything you like for the first few; however, when OpenSSL prompts you to enter the "Common Name", make sure to enter the domain or IP of your server.<br />
<br />
$ openssl req -new -key docker.example.com.key -out docker.example.com.csr<br />
<br />
* Sign the certificate request:<br />
<br />
$ openssl x509 -req -in docker.example.com.csr -CA docker-registry-CA.crt -CAkey docker-registry-CA.key -CAcreateserial -out docker.example.com.crt -days 3650<br />
<br />
* Configure any clients that will use the certificate authority we created above to trust it as a "legitimate" certificate. Run the following commands on the Docker registry server and on any hosts that will be communicating with it:<br />
<br />
$ sudo cp /opt/docker-registry/nginx/certs/docker-registry-CA.crt /etc/pki/ca-trust/source/anchors/ # CentOS 7 trust-anchor location<br />
$ sudo update-ca-trust extract<br />
<br />
* Restart the Docker daemon in order for it to pick up the changes to the certificate store:<br />
<br />
$ sudo systemctl restart docker.service<br />
<br />
* Bring up the associated Docker containers:<br />
$ docker-compose up -d<br />
<br />
* Your Docker registry directory structure should look like the following:<br />
<pre><br />
$ cd /opt/docker-registry && tree .<br />
.<br />
├── data<br />
├── docker-compose.yml<br />
├── log<br />
│ └── nginx<br />
│ ├── access.log<br />
│ └── error.log<br />
└── nginx<br />
├── certs<br />
│ ├── docker-registry-CA.crt<br />
│ ├── docker-registry-CA.key<br />
│ ├── docker-registry-CA.srl<br />
│ ├── docker.example.com.crt<br />
│ ├── docker.example.com.csr<br />
│ └── docker.example.com.key<br />
└── conf.d<br />
├── registry.conf<br />
└── registry.htpasswd<br />
</pre><br />
<br />
* To access the private Docker registry from a client machine (any machine, really), first add the SSL certificate you created earlier to the client machine (the commands below assume a Debian/Ubuntu client):<br />
<br />
$ cat /opt/docker-registry/nginx/certs/docker-registry-CA.crt # copy contents<br />
# On client machine:<br />
$ sudo vim /usr/local/share/ca-certificates/docker-registry-CA.crt # paste contents<br />
$ sudo update-ca-certificates # You should see "1 added" in the output<br />
<br />
* Restart Docker on the client machine to make sure it reloads the system's CA certificates:<br />
<br />
$ sudo service docker restart<br />
<br />
* Test that you can reach your private Docker registry:<br />
$ curl -k <nowiki>https://USERNAME:PASSWORD@docker.example.com:5043/v2/</nowiki><br />
{} # <- proper output<br />
<br />
* Now, test that you can login with Docker:<br />
$ docker login <nowiki>https://docker.example.com:5043</nowiki><br />
<br />
If that returns with "Login Succeeded", your private Docker registry is up and running!<br />
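<br />
As a quick smoke test (a sketch, reusing the <code>docker.example.com</code> placeholder from above), tag a small image with the registry's address and push/pull it:<br />
$ docker pull alpine:latest<br />
$ docker tag alpine:latest docker.example.com:5043/alpine:latest<br />
$ docker push docker.example.com:5043/alpine:latest<br />
$ docker rmi docker.example.com:5043/alpine:latest<br />
$ docker pull docker.example.com:5043/alpine:latest # served by your private registry<br />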
<br />
'''This section is incomplete. It will be updated when I have time.'''<br />
<br />
==Docker environment variables==<br />
''Note: See [https://docs.docker.com/engine/reference/commandline/cli/#environment-variables here] for the most up-to-date list of environment variables.''<br />
<br />
The following environment variables are supported by the docker command line:<br />
<br />
;<code>DOCKER_API_VERSION</code> : The API version to use (e.g., 1.19)<br />
;<code>DOCKER_CONFIG</code> : The location of your client configuration files.<br />
;<code>DOCKER_CERT_PATH</code> : The location of your authentication keys.<br />
;<code>DOCKER_DRIVER</code> : The graph driver to use.<br />
;<code>DOCKER_HOST</code> : Daemon socket to connect to.<br />
;<code>DOCKER_NOWARN_KERNEL_VERSION</code> : Prevent warnings that your Linux kernel is unsuitable for Docker.<br />
;<code>DOCKER_RAMDISK</code> : If set, this will disable "pivot_root".<br />
;<code>DOCKER_TLS_VERIFY</code> : When set, Docker uses TLS and verifies the remote.<br />
;<code>DOCKER_CONTENT_TRUST</code> : When set, Docker uses notary to sign and verify images. Equates to <code>--disable-content-trust=false</code> for build, create, pull, push, run.<br />
;<code>DOCKER_CONTENT_TRUST_SERVER</code> : The URL of the Notary server to use. This defaults to the same URL as the registry.<br />
;<code>DOCKER_TMPDIR</code> : Location for temporary Docker files.<br />
<br />
Because Docker is developed using "Go", one can also use any environment variables used by the "Go" runtime. In particular, the following might be useful:<br />
<br />
;<code>HTTP_PROXY</code><br />
;<code>HTTPS_PROXY</code><br />
;<code>NO_PROXY</code><br />
<br />
* Example usage:<br />
$ export DOCKER_API_VERSION=1.19<br />
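<br />
As another sketch (hostnames and ports below are placeholders), <code>DOCKER_HOST</code> can point the client at a remote daemon, and the Go proxy variables apply to that client connection:<br />
$ export DOCKER_HOST=tcp://docker-host.example.com:2375<br />
$ export HTTPS_PROXY=<nowiki>http://proxy.example.com:3128</nowiki><br />
$ docker info -f '<nowiki>{{ .Name }}</nowiki>' # now reports the remote host's name<br />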
<br />
==See also==<br />
* [[containerd]]<br />
<br />
==References==<br />
<references/><br />
<br />
==External links==<br />
* [https://www.docker.com/ Official website]<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:DevOps]]<br />
[[Category:Linux Command Line Tools]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Docker&diff=8263Docker2023-03-03T21:39:22Z<p>Christoph: /* EXPOSE */</p>
<hr />
<div>'''Docker''' is an open-source project that automates the deployment of applications inside software containers. Quoting the feature description from the Docker web page:<br />
:Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.<ref>https://www.docker.com/what-docker</ref><br />
<br />
==Introduction==<br />
<br />
''Note: The following is based on content found on the official [https://www.docker.com/what-container Docker website], [[:wikipedia:Docker (software)|Wikipedia]], and various other locations.''<br />
<br />
A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings (for example, differences between development and staging environments) and help reduce conflicts between teams running different software on the same infrastructure.<br />
<br />
; Lightweight : Docker containers running on a single machine share that machine's operating system kernel; they start instantly and use less compute and RAM. Images are constructed from filesystem layers and share common files. This minimizes disk usage, and image downloads are much faster.<br />
; Standard : Docker containers are based on open standards and run on all major Linux distributions, Microsoft Windows, and on any infrastructure including VMs, bare-metal and in the cloud.<br />
; Secure : Docker containers isolate applications from one another and from the underlying infrastructure. Docker provides the strongest default isolation to limit app issues to a single container instead of the entire machine.<br />
<br />
As actions are done to a Docker base image, union file-system layers are created and documented, such that each layer fully describes how to recreate an action. This strategy enables Docker's lightweight images, as only layer updates need to be propagated (compared to full VMs, for example).<br />
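<br />
For illustration (a minimal sketch; it assumes an <code>index.html</code> file sits next to the <code>Dockerfile</code>), each instruction below produces its own layer, and <code>`docker history <image>`</code> lists one row per layer:<br />
<pre><br />
FROM centos:latest<br />
# each of the following instructions adds a new layer on top of the base image<br />
RUN yum install -y httpd<br />
COPY index.html /var/www/html/<br />
</pre><br />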
<br />
Building on top of facilities provided by the Linux kernel (primarily cgroups and namespaces), a Docker container, unlike a virtual machine, does not require or include a separate operating system. Instead, it relies on the kernel's functionality and uses resource isolation for CPU and memory, and separate namespaces to isolate the application's view of the operating system. Docker accesses the Linux kernel's virtualization features directly using the <code>libcontainer</code> library (written in the Go programming language).<br />
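<br />
A quick way to see this namespace isolation (a sketch; the inode numbers will differ on your system) is to compare the PID namespace of a shell on the host with one inside a container:<br />
$ readlink /proc/self/ns/pid # on the host<br />
pid:[4026531836]<br />
$ docker run --rm centos:latest readlink /proc/self/ns/pid # inside a container<br />
pid:[4026532455]<br />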
<br />
===Comparing Containers and Virtual Machines===<br />
<br />
Containers and virtual machines have similar resource isolation and allocation benefits, but function differently because containers virtualize the operating system instead of hardware. Containers are more portable and efficient.<br />
<br />
; Virtual Machines : Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, one or more apps, necessary binaries and libraries - taking up tens of GBs. VMs can also be slow to boot.<br />
; Containers : Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space. Containers take up less space than VMs (container images are typically tens of MBs in size), and start almost instantly.<br />
<br />
===Components===<br />
<br />
The Docker software as a service offering consists of three components:<br />
<br />
; Software : The Docker daemon, called "<code>dockerd</code>" is a persistent process that manages Docker containers and handles container objects. The daemon listens for API requests sent by the Docker Engine API. The Docker client, which identifies itself as "<code>docker</code>", allows users to interact with Docker through CLI. It uses the Docker REST API to communicate with one or more Docker daemons.<br />
; Objects : Docker objects refer to different entities used to assemble an application in Docker. The main Docker objects are images, containers, and services.<br />
:* A Docker container is a standardized, encapsulated environment that runs applications. A container is managed using the Docker API or CLI.<br />
:* A Docker image is a read-only template used to build containers. Images are used to store and ship applications.<br />
:* A Docker service allows containers to be scaled across multiple Docker daemons. The result is known as a "swarm", cooperating daemons that communicate through the Docker API.<br />
; Registries : A Docker registry is a repository for Docker images. Docker clients connect to registries to download ("pull") images for use or upload ("push") images that they have built. Registries can be public or private. Two main public registries are Docker Hub and Docker Cloud. Docker Hub is the default registry where Docker looks for images.<br />
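<br />
For example (a sketch; <code>myuser</code> is a placeholder Docker Hub account), the basic pull/tag/push cycle against a registry looks like:<br />
$ docker pull ubuntu:latest # download ("pull") from the default registry (Docker Hub)<br />
$ docker tag ubuntu:latest myuser/ubuntu:mytag # re-tag under your own namespace<br />
$ docker login # authenticate against Docker Hub<br />
$ docker push myuser/ubuntu:mytag # upload ("push") the tagged image<br />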
<br />
==Docker commands==<br />
<br />
I will provide detailed examples on all of the following commands throughout this article.<br />
<br />
; Basics<br />
<br />
The following are the most common Docker commands (i.e., the ones you will most likely use the most day-to-day):<br />
<br />
* Show all running containers:<br />
$ docker ps<br />
* Show all containers (including stopped and failed ones):<br />
$ docker ps -a<br />
* Show all images in your local repository:<br />
$ docker images<br />
* Create an image based on the instructions in a <code>Dockerfile</code>:<br />
$ docker build<br />
* Start a container from an image (either from your local repository or from a remote repository {e.g., Docker Hub}):<br />
$ docker run<br />
* Remove/delete all ''stopped''/''failed'' containers (leaves running containers alone):<br />
$ docker rm $(docker ps -a -q)<br />
<br />
===Container commands===<br />
<br />
; Container lifecycle<br />
<br />
* Create a container but do not start it:<br />
$ docker create<br />
* Rename a container:<br />
$ docker rename<br />
* Create ''and'' start a container in one operation:<br />
$ docker run<br />
* Delete a container:<br />
$ docker rm<br />
* Update a container's resource limits:<br />
$ docker update<br />
<br />
; Starting and stopping containers<br />
<br />
* Start a container:<br />
$ docker start<br />
* Stop a running container:<br />
$ docker stop<br />
* Stop and then start a container:<br />
$ docker restart<br />
* Pause a running container ("freeze" it in place):<br />
$ docker pause<br />
* Un-pause a paused container:<br />
$ docker unpause<br />
* Attach/connect to a running container:<br />
$ docker attach<br />
* Block until running container stops (and print exit code):<br />
$ docker wait<br />
* Send <code>SIGKILL</code> to a running container:<br />
$ docker kill<br />
<br />
; Information<br />
<br />
* Show all ''running'' containers:<br />
$ docker ps<br />
* Get the logs for a given container:<br />
$ docker logs<br />
* Get all of the metadata about a container (e.g., IP address, etc.):<br />
$ docker inspect<br />
* Get real-time events from Docker Engine (e.g., start/stop containers, attach, create, etc.):<br />
$ docker events<br />
* Get the public-facing ports of a given container:<br />
$ docker port<br />
* Show running processes in a given container:<br />
$ docker top<br />
* Show a given container's resource usage statistics:<br />
$ docker stats<br />
* Show changed files in the container's filesystem (i.e., those changed from the original base image):<br />
$ docker diff<br />
<br />
; Miscellaneous<br />
<br />
* Get the environment variables for a given container:<br />
$ docker run ubuntu env<br />
* IP address of host machine:<br />
$ ip -4 -o addr show eth0<br />
2: eth0 inet 10.0.0.166/23<br />
* IP address of a container:<br />
$ docker run ubuntu ip -4 -o addr show eth0<br />
2: eth0 inet 172.17.0.2/16<br />
<br />
===Image commands===<br />
<br />
; Lifecycle<br />
* Show all images in your local repository:<br />
$ docker images<br />
* Create an image from a tarball:<br />
$ docker import<br />
* Create an image from a <code>Dockerfile</code><br />
$ docker build<br />
* Create an image from a container (note: it will pause the container, if it is running, during the commit process):<br />
$ docker commit<br />
* Remove/delete an image:<br />
$ docker rmi<br />
* Load an image from a tarball (or STDIN), including images and tags:<br />
$ docker load<br />
* Save an image to a tarball (streamed to STDOUT with all parent layers, tags, and versions):<br />
$ docker save<br />
<br />
; Info<br />
<br />
* Show the history of an image:<br />
$ docker history<br />
* Tag an image:<br />
$ docker tag<br />
<br />
==Dockerfile directives==<br />
<br />
=== USER ===<br />
<pre><br />
$ cat << EOF > Dockerfile<br />
# Non-privileged user entry<br />
FROM centos:latest<br />
MAINTAINER xtof@example.com<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
USER xtof<br />
EOF<br />
</pre><br />
''Note: The use of <code>MAINTAINER</code> has been deprecated in newer versions of Docker. You should use <code>LABEL</code> instead, as it is much more flexible and its key/values show up in <code>docker inspect</code>. From here forward, I will only use <code>LABEL</code>.''<br />
<br />
$ docker build -t centos7/nonroot:v1 .<br />
$ docker exec -it <container_name> /bin/bash<br />
<br />
We are user "xtof" and are unable to become root. The workaround (i.e., how to become root) is like so:<br />
<br />
$ docker exec -u 0 -it <container_name> /bin/bash<br />
<br />
''NOTE: For the remainder of this section, I will omit the <code>$ cat << EOF > Dockerfile</code> part in the examples for brevity.''<br />
<br />
=== RUN ===<br />
<br />
Notes on the order of execution<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
USER xtof<br />
<br />
RUN echo "export PATH=/path/to/my/app:$PATH" >> /etc/bashrc<br />
</pre><br />
<br />
$ docker build -t centos7/config:v1 .<br />
...<br />
/bin/sh: /etc/bashrc: Permission denied<br />
<br />
The order of execution matters! Prior to the directive <code>USER xtof</code>, the user was root. After that directive, the user is now xtof, who does not have super-user privileges. Move the <code>RUN echo ...</code> directive to before the <code>USER xtof</code> directive for a successful build.<br />
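<br />
For reference, a corrected version of the above (a sketch; the <code>RUN echo</code> now executes while the build is still running as root):<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
RUN echo "export PATH=/path/to/my/app:$PATH" >> /etc/bashrc<br />
<br />
USER xtof<br />
</pre><br />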
<br />
=== ENV ===<br />
''Note: The following is a '''terrible''' way of building a container. I am purposely doing it this way so I can show you a much better way later (see below).''<br />
<br />
* Build a CentOS 7 Docker image with Java 8 installed:<br />
<pre><br />
# SEE: https://gist.github.com/P7h/9741922 for various Java versions<br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN yum update -y<br />
RUN yum install -y net-tools wget<br />
<br />
RUN echo "SETTING UP JAVA"<br />
# The tarball method:<br />
#RUN cd ~ && wget --no-cookies --no-check-certificate \<br />
# --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \<br />
# "http://download.oracle.com/otn-pub/java/jdk/8u91-b14/jdk-8u91-linux-x64.tar.gz"<br />
#RUN tar xzvf jdk-8u91-linux-x64.tar.gz<br />
#RUN mv jdk1.8.0_91 /opt<br />
#ENV JAVA_HOME /opt/jdk1.8.0_91/<br />
<br />
# The rpm method:<br />
RUN cd ~ && wget --no-cookies --no-check-certificate \<br />
--header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \<br />
"http://download.oracle.com/otn-pub/java/jdk/8u161-b12/2f38c3b165be4555a1fa6e98c45e0808/jdk-8u161-linux-x64.rpm"<br />
RUN yum localinstall -y /root/jdk-8u161-linux-x64.rpm<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
USER xtof<br />
<br />
# User specific environment variable<br />
RUN cd ~ && echo "export JAVA_HOME=/usr/java/jdk1.8.0_161/jre" >> ~/.bashrc<br />
# Global (system-wide) environment variable<br />
ENV JAVA_BIN /usr/java/jdk1.8.0_161/jre/bin<br />
</pre><br />
<br />
$ docker build -t centos7/java8:v1 .<br />
<br />
=== CMD vs. RUN ===<br />
<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
CMD ["echo", "Hello from within my container"]<br />
</pre><br />
<br />
The <code>CMD</code> directive ''only'' executes when the container is started, whereas the <code>RUN</code> directive is executed during the build of the image.<br />
<br />
$ docker build -t centos7/echo:v1 .<br />
$ docker run centos7/echo:v1<br />
Hello from within my container<br />
<br />
The container starts, echoes out that message, then exits.<br />
<br />
=== ENTRYPOINT ===<br />
<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
ENTRYPOINT "This command will display this message on EVERY container that is run from it"<br />
</pre><br />
<br />
$ docker build -t centos7/entry:v1 .<br />
$ docker run centos7/entry:v1<br />
This command will display this message on EVERY container that is run from it<br />
$ docker run centos7/entry:v1 /bin/echo "Can you see me?"<br />
This command will display this message on EVERY container that is run from it<br />
$ docker run centos7/echo:v1 /bin/echo "Can you see me?"<br />
Can you see me?<br />
<br />
Note the difference.<br />
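<br />
A related pattern (a sketch) is the exec form of <code>ENTRYPOINT</code> combined with <code>CMD</code>, where <code>CMD</code> only supplies ''default'' arguments that <code>`docker run`</code> can override:<br />
<pre><br />
FROM centos:latest<br />
ENTRYPOINT ["/bin/echo"]<br />
CMD ["default message"]<br />
</pre><br />
<br />
$ docker build -t centos7/combo:v1 .<br />
$ docker run centos7/combo:v1<br />
default message<br />
$ docker run centos7/combo:v1 "overridden message"<br />
overridden message<br />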
<br />
=== EXPOSE ===<br />
<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN yum update -y<br />
RUN yum install -y httpd net-tools<br />
<br />
RUN echo "This is a custom index file built during the image creation" > /var/www/html/index.html<br />
<br />
ENTRYPOINT apachectl -DFOREGROUND # BAD WAY TO DO THIS!<br />
</pre><br />
<br />
$ docker build -t centos7/apache:v1 .<br />
$ docker run -d --name webserver centos7/apache:v1<br />
$ docker exec webserver /bin/cat /var/www/html/index.html<br />
This is a custom index file built during the image creation<br />
$ docker inspect webserver -f '<nowiki>{{.NetworkSettings.IPAddress}}</nowiki>' # => 172.17.0.6<br />
#~OR~<br />
$ docker inspect webserver | jq -crM '.[] | .NetworkSettings.IPAddress' # => 172.17.0.6<br />
$ curl 172.17.0.6<br />
This is a custom index file built during the image creation<br />
$ curl -sI 172.17.0.6 | awk '/^HTTP|^Server/{print}'<br />
HTTP/1.1 200 OK<br />
Server: Apache/2.4.6 (CentOS)<br />
$ time docker stop webserver<br />
real 0m10.275s # <- notice how long it took to stop the container<br />
user 0m0.008s<br />
sys 0m0.000s<br />
$ docker rm webserver<br />
<br />
It took ~10 seconds to stop the above container. This is because of the way we are (incorrectly) using <code>ENTRYPOINT</code>: the shell form wraps the command in <code>/bin/sh -c</code>, so the <code>SIGTERM</code> sent by <code>`docker stop webserver`</code> never reached Apache, and Docker only killed the container after its 10-second timeout. A much better method is shown below, which ''will'' exit gracefully and in less than 300 ms.<br />
<br />
* Expose ports from the CLI<br />
$ docker run -d --name webserver -p 8080:80 centos7/apache:v1<br />
$ curl localhost:8080<br />
This is a custom index file built during the image creation<br />
$ docker stop webserver && docker rm webserver<br />
<br />
* Explicitly expose a port in the Docker image:<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN yum update -y && \<br />
yum install -y httpd net-tools && \<br />
yum autoremove -y && \<br />
echo "This is a custom index file built during the image creation" > /var/www/html/index.html<br />
<br />
EXPOSE 80<br />
<br />
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]<br />
</pre><br />
<br />
$ docker build -t centos7/apache:v1 .<br />
$ docker run -d --rm --name webserver -P centos7/apache:v1<br />
$ docker container ls --format '<nowiki>{{.Names}} {{.Ports}}</nowiki>'<br />
webserver 0.0.0.0:32769->80/tcp<br />
#~OR~<br />
$ docker port webserver | cut -d: -f2<br />
32769<br />
#~OR~<br />
$ docker inspect webserver | jq -crM '[.[] | .NetworkSettings.Ports."80/tcp"[] | .HostPort] | .[]'<br />
32769<br />
$ curl localhost:32769<br />
This is a custom index file built during the image creation<br />
$ time docker stop webserver<br />
real 0m0.283s<br />
user 0m0.004s<br />
sys 0m0.008s<br />
<br />
Note that I passed <code>--rm</code> to the <code>`docker run`</code> command so that the container will be removed when I stop the container. Also, note how much faster the container stopped (~300ms vs. 10 seconds above).<br />
<br />
=== HEALTHCHECK ===<br />
<br />
The <code>HEALTHCHECK</code> instruction tells Docker how to test a container to ensure it is still working. This can detect cases such as a web server that is stuck in an infinite loop and unable to handle new connections, even though the server process is still running.<br />
<br />
For example, we have a <code>Dockerfile</code> to define a simple webapp:<br />
<pre><br />
$ cat << EOF > Dockerfile<br />
FROM nginx:1.13.1<br />
<br />
RUN apt-get update \<br />
&& apt-get install -y curl \<br />
&& rm -rf /var/lib/apt/lists/*<br />
<br />
HEALTHCHECK --interval=15s --timeout=3s \<br />
CMD curl -fs http://localhost:80/ || exit 1<br />
EOF<br />
</pre><br />
This will check every 15 seconds that the web server is able to serve the site's main page within three seconds. The command's exit status indicates the health status of the container.<br />
<br />
The possible values are:<br />
<br />
* <code>0: success</code> - the container is healthy and ready for use<br />
* <code>1: unhealthy</code> - the container is not working correctly<br />
* <code>2: reserved</code> - do not use this exit code<br />
<br />
Then use Docker to build an image:<br />
<pre><br />
$ docker build -t healthcheck:v1 .<br />
</pre><br />
<br />
And run a container using this image:<br />
<pre><br />
$ docker run -d --name healthcheck-demo -p 80:80 healthcheck:v1<br />
</pre><br />
<br />
Then, check the status of the container:<br />
<pre><br />
$ docker ps<br />
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES<br />
9d8cf6d698c7 healthcheck:v1 "nginx -g 'daemon ..." 3 seconds ago Up 2 seconds (health: starting) 0.0.0.0:80->80/tcp healthcheck-demo<br />
</pre><br />
<br />
At the beginning, the status of the container is <code>(health: starting)</code>; after a while, it changes to healthy:<br />
<pre><br />
$ docker ps <br />
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES<br />
9d8cf6d698c7 healthcheck:v1 "nginx -g 'daemon ..." 5 minutes ago Up 5 minutes (healthy) 0.0.0.0:80->80/tcp healthcheck-demo<br />
</pre><br />
<br />
It takes three consecutive failures of the health check (the default for <code>--retries</code>) for the container to be considered unhealthy.<br />
<br />
You can use your own script to replace the command <code>curl -fs http://localhost:80/ || exit 1</code>.<br />
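<br />
For example (a sketch; <code>healthcheck.sh</code> is a hypothetical script name):<br />
<pre><br />
$ cat << EOF > healthcheck.sh<br />
#!/bin/sh<br />
# exit 1 (unhealthy) if nginx does not answer locally<br />
curl -fs http://localhost:80/ >/dev/null || exit 1<br />
exit 0<br />
EOF<br />
</pre><br />
Then, in the <code>Dockerfile</code>, copy it in and reference it from <code>HEALTHCHECK</code>:<br />
<pre><br />
COPY healthcheck.sh /usr/local/bin/healthcheck.sh<br />
RUN chmod +x /usr/local/bin/healthcheck.sh<br />
HEALTHCHECK --interval=15s --timeout=3s CMD /usr/local/bin/healthcheck.sh<br />
</pre><br />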
<br />
What is more, the STDOUT and STDERR of your script can be fetched with the <code>docker inspect</code> command:<br />
<pre><br />
$ docker inspect --format '{{json .State.Health}}' healthcheck-demo | python -m json.tool<br />
{<br />
"FailingStreak": 0,<br />
"Log": [<br />
{<br />
"End": "2023-03-02T19:39:58.379906565+08:00",<br />
"ExitCode": 0,<br />
"Output": " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 612 100 612 0 0 97297 0 --:--:-- --:--:-- --:--:-- 99k\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n",<br />
"Start": "2023-03-02T19:39:58.229550952+08:00"<br />
}<br />
],<br />
"Status": "healthy"<br />
}<br />
</pre><br />
<br />
Note that you can filter containers by health status with:<br />
<pre><br />
$ docker ps -a --filter=health=unhealthy<br />
</pre><br />
<br />
Or, even:<br />
<pre><br />
$ docker ps -a -f status=dead<br />
</pre><br />
<br />
==Container volume management==<br />
<br />
$ docker run -it --name voltest -v /mydata centos:latest /bin/bash<br />
[root@bffdcb88c485 /]# df -h<br />
Filesystem Size Used Avail Use% Mounted on<br />
none 213G 173G 30G 86% /<br />
tmpfs 7.8G 0 7.8G 0% /dev<br />
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup<br />
/dev/mapper/ubuntu--vg-root 213G 173G 30G 86% /mydata<br />
shm 64M 0 64M 0% /dev/shm<br />
tmpfs 7.8G 0 7.8G 0% /sys/firmware<br />
[root@bffdcb88c485 /]# echo "testing" >/mydata/mytext.txt<br />
$ docker inspect voltest | jq -crM '.[] | .Mounts[].Source'<br />
/var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data<br />
$ sudo cat /var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data/mytext.txt<br />
testing<br />
$ sudo /bin/bash -c \<br />
"echo 'this is from the host OS' >/var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data/host.txt"<br />
[root@bffdcb88c485 /]# cat /mydata/host.txt <br />
this is from the host OS<br />
<br />
* Cleanup<br />
$ docker rm voltest<br />
$ docker volume rm 2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b<br />
<br />
* Mount host's current working directory inside container:<br />
$ echo "my config" >my.conf<br />
$ echo "my message" >message.txt<br />
$ echo "aerwr3adf" >app.bin<br />
$ chmod +x app.bin<br />
$ docker run -it --name voltest -v ${PWD}:/mydata centos:latest /bin/bash<br />
[root@f5f34ccb54fb /]# ls -l /mydata/<br />
total 24<br />
-rwxrwxr-x 1 1000 1000 10 Mar 8 19:29 app.bin<br />
-rw-rw-r-- 1 1000 1000 11 Mar 8 19:29 message.txt<br />
-rw-rw-r-- 1 1000 1000 10 Mar 8 19:29 my.conf<br />
[root@f5f34ccb54fb /]# touch /mydata/foobar<br />
$ ls -l ${PWD}<br />
total 24<br />
-rwxrwxr-x 1 xtof xtof 10 Mar 8 11:29 app.bin<br />
-rw-r--r-- 1 root root 0 Mar 8 11:36 foobar<br />
-rw-rw-r-- 1 xtof xtof 11 Mar 8 11:29 message.txt<br />
-rw-rw-r-- 1 xtof xtof 10 Mar 8 11:29 my.conf<br />
$ docker rm voltest<br />
<br />
==Images==<br />
<br />
===Saving and loading images===<br />
<br />
$ docker pull centos:latest<br />
$ docker run -it centos:latest /bin/bash<br />
[root@29fad368048c /]# yum update -y<br />
[root@29fad368048c /]# echo xtof >/root/built_by.txt<br />
$ docker commit reverent_elion centos:xtof # "reverent_elion" is the auto-generated name of the container started above<br />
$ docker rm reverent_elion<br />
$ docker images<br />
REPOSITORY TAG IMAGE ID CREATED SIZE<br />
centos xtof e0c8bd35ba50 3 seconds ago 463MB<br />
centos latest 980e0e4c79ec 18 months ago 197MB<br />
$ docker history centos:xtof<br />
IMAGE CREATED CREATED BY SIZE<br />
e0c8bd35ba50 27 seconds ago /bin/bash 266MB <br />
980e0e4c79ec 18 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B <br />
<missing> 18 months ago /bin/sh -c #(nop) LABEL name=CentOS Base ... 0B <br />
<missing> 18 months ago /bin/sh -c #(nop) ADD file:e336b45186086f7... 197MB <br />
<missing> 18 months ago /bin/sh -c #(nop) MAINTAINER <nowiki>https://gith...</nowiki> 0B<br />
<br />
* Save the original <code>centos:latest</code> image we pulled from Docker Hub:<br />
$ docker save --output centos-latest.tar centos:latest<br />
<br />
Note that the above command essentially tars up the contents of the image found in the <code>/var/lib/docker/image</code> directory.<br />
<br />
$ tar tvf centos-latest.tar <br />
-rw-r--r-- 0/0 2309 2016-09-06 14:10 980e0e4c79ec933406e467a296ce3b86685e6b42eed2f873745e6a91d718e37a.json<br />
drwxr-xr-x 0/0 0 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/<br />
-rw-r--r-- 0/0 3 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/VERSION<br />
-rw-r--r-- 0/0 1391 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/json<br />
-rw-r--r-- 0/0 204305920 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/layer.tar<br />
-rw-r--r-- 0/0 202 1969-12-31 16:00 manifest.json<br />
-rw-r--r-- 0/0 89 1969-12-31 16:00 repositories<br />
<br />
* Save space by compressing the tar file:<br />
$ gzip centos-latest.tar # .tar -> 195M; .tar.gz -> 68M<br />
<br />
* Delete the original <code>centos:latest</code> image:<br />
$ docker rmi centos:latest<br />
<br />
* Restore (or load) the image back to our local repository:<br />
$ docker load --input centos-latest.tar.gz<br />
<br />
===Tagging images===<br />
<br />
* List our current images:<br />
$ docker images<br />
REPOSITORY TAG IMAGE ID CREATED SIZE<br />
centos xtof e0c8bd35ba50 About an hour ago 463MB<br />
<br />
* Tag the above image:<br />
$ docker tag e0c8bd35ba50 xtof/centos:v1<br />
$ docker images<br />
REPOSITORY TAG IMAGE ID CREATED SIZE<br />
centos xtof e0c8bd35ba50 About an hour ago 463MB<br />
xtof/centos v1 e0c8bd35ba50 About an hour ago 463MB<br />
<br />
Note that we did not create a new image, we just created a new tag of the same/original <code>centos:xtof</code> image.<br />
<br />
Note: The maximum number of characters in a tag is 128.<br />
<br />
==Docker networking==<br />
<br />
===Default networks===<br />
$ ip addr show docker0<br />
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default <br />
link/ether 02:42:c0:75:70:13 brd ff:ff:ff:ff:ff:ff<br />
inet 172.17.0.1/16 scope global docker0<br />
valid_lft forever preferred_lft forever<br />
inet6 fe80::42:c0ff:fe75:7013/64 scope link <br />
valid_lft forever preferred_lft forever<br />
#~OR~<br />
$ ifconfig docker0<br />
docker0 Link encap:Ethernet HWaddr 02:42:c0:75:70:13 <br />
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0<br />
inet6 addr: fe80::42:c0ff:fe75:7013/64 Scope:Link<br />
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1<br />
RX packets:420654 errors:0 dropped:0 overruns:0 frame:0<br />
TX packets:1162975 errors:0 dropped:0 overruns:0 carrier:0<br />
collisions:0 txqueuelen:0 <br />
RX bytes:85851647 (85.8 MB) TX bytes:1196235716 (1.1 GB)<br />
<br />
$ docker network inspect bridge | jq '.[] | .IPAM.Config[].Subnet'<br />
"172.17.0.0/16"<br />
So, the usable range of IP addresses in our 172.17.0.0/16 subnet is: 172.17.0.1 - 172.17.255.254<br />
<br />
$ docker network ls<br />
NETWORK ID NAME DRIVER SCOPE<br />
bf831059febc bridge bridge local<br />
266f6df5c44e host host local<br />
ce79e4043a20 none null local<br />
$ docker ps -q | wc -l<br />
#~OR~<br />
$ docker container ls --format '<nowiki>{{.Names}}</nowiki>' | wc -l<br />
4 # => 4 running containers<br />
$ docker network inspect bridge | jq '.[] | .Containers[].IPv4Address'<br />
"172.17.0.2/16"<br />
"172.17.0.5/16"<br />
"172.17.0.4/16"<br />
"172.17.0.3/16"<br />
The output of the last command lists the IP addresses of the 4 containers currently running on my host.<br />
<br />
===Custom networks===<br />
* Create a Docker network<br />
$ man docker-network-create # for details<br />
$ docker network create --subnet 10.1.0.0/16 --gateway 10.1.0.1 --ip-range=10.1.4.0/24 \<br />
--driver=bridge --label=host4network br04<br />
<br />
* Use the above network with a given container:<br />
$ docker run -it --name net-test --net br04 centos:latest /bin/bash<br />
<br />
* Assign a static IP to a given container in the above (user created) network:<br />
$ docker run -it --name net-test --net br04 --ip 10.1.4.100 centos:latest /bin/bash<br />
<br />
Note: You can ''only'' assign static IPs to user created networks (i.e., you ''cannot'' assign them to the default "bridge" network).<br />
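<br />
To verify the assignment (reusing the <code>jq</code> idiom from earlier; note that the network name appears in the path):<br />
$ docker inspect net-test | jq -crM '.[] | .NetworkSettings.Networks.br04.IPAddress'<br />
10.1.4.100<br />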
<br />
==Monitoring==<br />
<br />
$ docker top <container_name><br />
$ docker stats <container_name><br />
<br />
===Logs===<br />
<br />
* Fetch logs of a given container:<br />
$ docker logs <container_name><br />
<br />
* Fetch logs of a given container prefixed with timestamps (UTC format by default):<br />
$ docker logs --timestamps <container_name><br />
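<br />
A few other commonly useful variants (all standard <code>`docker logs`</code> flags):<br />
$ docker logs --follow <container_name> # stream new log output ("tail -f" style)<br />
$ docker logs --tail 100 <container_name> # only the last 100 lines<br />
$ docker logs --since '10m' <container_name> # only entries from the last 10 minutes<br />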
<br />
===Events===<br />
$ docker events<br />
$ docker events --since '1h'<br />
$ docker events --since '2018-03-08T16:00'<br />
$ docker events --filter event=attach<br />
$ docker events --filter event=destroy<br />
$ docker events --filter event=attach --filter event=die --filter event=stop<br />
<br />
==Cleanup==<br />
<br />
* Check local system disk usage:<br />
<pre><br />
$ docker system df<br />
TYPE TOTAL ACTIVE SIZE RECLAIMABLE<br />
Images 53 3 16.52GB 15.9GB (96%)<br />
Containers 3 1 438.9MB 0B (0%)<br />
Local Volumes 16 2 2.757GB 2.628GB (95%)<br />
Build Cache 0 0 0B 0B<br />
</pre><br />
<br />
Note: Use <code>docker system df --verbose</code> to get even more details.<br />
<br />
* Delete all stopped containers at once and reclaim the disk space they are using:<br />
$ docker container prune<br />
<br />
* Remove all containers (both the running ones and the stopped ones):<br />
<pre><br />
# Old method:<br />
$ docker rm -f $(docker ps -aq)<br />
# New method:<br />
$ docker container rm -f $(docker container ls -aq)<br />
</pre><br />
Note: It is often useful to use the <code>--rm</code> flag when running a container so that it is automatically removed when its PID 1 process is stopped, thus releasing unused disk immediately.<br />
<br />
* Clean up everything all at once ('''CAREFUL!'''):<br />
<pre><br />
$ docker system prune<br />
WARNING! This will remove:<br />
- all stopped containers<br />
- all networks not used by at least one container<br />
- all dangling images<br />
- all dangling build cache<br />
Are you sure you want to continue? [y/N]<br />
</pre><br />
<br />
==Examples==<br />
<br />
===Simple Nginx server===<br />
<br />
* Create an index.html file:<br />
<pre><br />
$ mkdir html<br />
$ cat << EOF >html/index.html<br />
Hello from Docker<br />
EOF<br />
</pre><br />
<br />
* Create a Dockerfile:<br />
<pre><br />
FROM nginx<br />
COPY html /usr/share/nginx/html<br />
</pre><br />
<br />
* Build the image:<br />
$ docker build -t test-nginx .<br />
<br />
* Start up container, using image built above:<br />
$ docker run --name check-nginx -d -p 8080:80 test-nginx<br />
<br />
* Check that it works:<br />
$ curl <nowiki>http://localhost:8080</nowiki><br />
Hello from Docker<br />
<br />
===Connecting two containers===<br />
<br />
In this example, we will start up a Postgres container and then start up another container and make a connection to the original Postgres container:<br />
<br />
$ docker pull postgres<br />
$ docker run --name test-postgres -e POSTGRES_PASSWORD=mypassword -d postgres<br />
$ docker run -it --rm --link test-postgres:postgres postgres psql -h postgres -U postgres<br />
<pre><br />
Password for user postgres:<br />
psql (11.0 (Debian 11.0-1.pgdg90+2))<br />
Type "help" for help.<br />
<br />
postgres=# SELECT 1;<br />
?column?<br />
----------<br />
1<br />
(1 row)<br />
<br />
postgres=# \q<br />
</pre><br />
<br />
Connection was successful!<br />
<br />
===Support for various hardware platforms===<br />
<br />
NOTE: If your image is being created on an M1 chip (ARM64) but you want to execute the container on an AMD64 chip, then use <code>FROM --platform=linux/amd64</code> in your Dockerfile so the image can be shipped anywhere. For example:<br />
<pre><br />
FROM node:current-alpine3.15<br />
#FROM --platform=linux/amd64 node:current-alpine3.15<br />
WORKDIR /app<br />
ADD . /app<br />
RUN npm install<br />
#RUN npm install express<br />
EXPOSE 3000<br />
CMD ["npm", "start"]<br />
</pre><br />
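<br />
Alternatively (assuming the Docker CLI's <code>buildx</code> plugin is available), the target platform can be selected at build time instead of in the Dockerfile:<br />
$ docker buildx build --platform linux/amd64 -t myapp:latest . # "myapp:latest" is a placeholder tag<br />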
<br />
==Docker compose==<br />
<br />
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the [https://docs.docker.com/compose/overview/#features list of features].<br />
<br />
Using Compose is basically a three-step process:<br />
# Define your app's environment with a <code>Dockerfile</code> so it can be reproduced anywhere.<br />
# Define the services that make up your app in <code>docker-compose.yml</code> so they can be run together in an isolated environment.<br />
# Run <code>docker-compose up</code> and Compose starts and runs your entire app.<br />
<br />
===Basic example===<br />
<br />
''Note: This is based off of [https://docs.docker.com/compose/gettingstarted/ this article].''<br />
<br />
In this basic example, we will build a simple Python web application running on Docker Compose. The application uses the Flask framework and maintains a hit counter in Redis.<br />
<br />
''Note: This section assumes you already have Docker Engine and [https://docs.docker.com/compose/install/#install-compose Docker Compose] installed.''<br />
<br />
* Create a directory for the project:<br />
$ mkdir compose-test && cd $_<br />
<br />
* Create a file called <code>app.py</code> in your project directory and paste this in:<br />
<pre><br />
import time<br />
import redis<br />
from flask import Flask<br />
<br />
<br />
app = Flask(__name__)<br />
cache = redis.Redis(host='redis', port=6379)<br />
<br />
<br />
def get_hit_count():<br />
retries = 5<br />
while True:<br />
try:<br />
return cache.incr('hits')<br />
except redis.exceptions.ConnectionError as exc:<br />
if retries == 0:<br />
raise exc<br />
retries -= 1<br />
time.sleep(0.5)<br />
<br />
<br />
@app.route('/')<br />
def hello():<br />
count = get_hit_count()<br />
return 'Hello World! I have been seen {} times.\n'.format(count)<br />
<br />
if __name__ == "__main__":<br />
app.run(host="0.0.0.0", debug=True)<br />
</pre><br />
<br />
In this example, <code>redis</code> is the hostname of the redis container on the application's network. We use the default port for Redis: <code>6379</code>.<br />
<br />
* Create another file called <code>requirements.txt</code> in your project directory and paste this in:<br />
flask<br />
redis<br />
<br />
* Create a Dockerfile<br />
*: This Dockerfile will be used to build an image that contains all the dependencies the Python application requires, including Python itself.<br />
<pre><br />
FROM python:3.4-alpine<br />
ADD . /code<br />
WORKDIR /code<br />
RUN pip install -r requirements.txt<br />
CMD ["python", "app.py"]<br />
</pre><br />
<br />
* Create a file called <code>docker-compose.yml</code> in your project directory and paste the following:<br />
<pre><br />
version: '3'<br />
services:<br />
web:<br />
build: .<br />
ports:<br />
- "5000:5000"<br />
redis:<br />
image: "redis:alpine"<br />
</pre><br />
<br />
* Build and run this app with Docker Compose:<br />
$ docker-compose up<br />
<br />
Compose pulls a Redis image, builds an image for your code, and starts the services you defined. In this case, the code is statically copied into the image at build time.<br />
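<br />
(As an aside: if, during development, you would rather have code changes picked up without rebuilding, a bind mount can shadow the copied code. One way, sketched here using Compose's automatically loaded override file, is:)<br />
<pre><br />
$ cat << EOF > docker-compose.override.yml<br />
version: '3'<br />
services:<br />
  web:<br />
    volumes:<br />
      - .:/code<br />
EOF<br />
$ docker-compose up<br />
</pre><br />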
<br />
* Test the application:<br />
$ curl localhost:5000<br />
Hello World! I have been seen 1 times.<br />
<br />
$ for i in $(seq 1 10); do curl -s localhost:5000; done<br />
Hello World! I have been seen 2 times.<br />
Hello World! I have been seen 3 times.<br />
Hello World! I have been seen 4 times.<br />
Hello World! I have been seen 5 times.<br />
Hello World! I have been seen 6 times.<br />
Hello World! I have been seen 7 times.<br />
Hello World! I have been seen 8 times.<br />
Hello World! I have been seen 9 times.<br />
Hello World! I have been seen 10 times.<br />
Hello World! I have been seen 11 times.<br />
<br />
* List containers:<br />
<pre><br />
$ docker-compose ps<br />
Name Command State Ports <br />
-------------------------------------------------------------------------------------<br />
compose-test_redis_1 docker-entrypoint.sh redis ... Up 6379/tcp <br />
compose-test_web_1 python app.py Up 0.0.0.0:5000->5000/tcp<br />
</pre><br />
<br />
* Display the running processes:<br />
<pre><br />
$ docker-compose top<br />
compose-test_redis_1<br />
UID PID PPID C STIME TTY TIME CMD <br />
--------------------------------------------------------------------<br />
systemd+ 29401 29367 0 15:28 ? 00:00:00 redis-server <br />
<br />
compose-test_web_1<br />
UID PID PPID C STIME TTY TIME CMD <br />
--------------------------------------------------------------------------------<br />
root 29407 29373 0 15:28 ? 00:00:00 python app.py <br />
root 29545 29407 0 15:28 ? 00:00:00 /usr/local/bin/python app.py<br />
</pre><br />
<br />
* Shutdown app:<br />
$ Ctrl+C<br />
#~OR~<br />
$ docker-compose down<br />
<br />
==Install docker==<br />
<br />
===Debian-based distros===<br />
<br />
; Ubuntu 16.04 (Xenial Xerus)<br />
''Note: For this install, I will be using Ubuntu 16.04 LTS (Xenial Xerus). Docker requires a 64-bit version of Ubuntu as well as a kernel version equal to or greater than 3.10. My system satisfies both requirements.''<br />
<br />
* Setup the docker repo to install from:<br />
$ sudo apt-get update -y<br />
$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D<br />
$ echo "deb <nowiki>https://apt.dockerproject.org/repo ubuntu-xenial main</nowiki>" | sudo tee /etc/apt/sources.list.d/docker.list<br />
$ sudo apt-get update -y<br />
<br />
Make sure you are about to install from the Docker repo instead of the default Ubuntu 16.04 repo:<br />
<br />
$ apt-cache policy docker-engine<br />
<br />
The output of the above command should look something like the following:<br />
<pre><br />
docker-engine:<br />
Installed: (none)<br />
Candidate: 17.05.0~ce-0~ubuntu-xenial<br />
Version table:<br />
17.05.0~ce-0~ubuntu-xenial 500<br />
500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages<br />
17.04.0~ce-0~ubuntu-xenial 500<br />
500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages<br />
...<br />
</pre><br />
<br />
* Install docker:<br />
$ sudo apt-get install -y docker-engine<br />
<br />
; Ubuntu 18.04 (Bionic Beaver)<br />
<br />
$ sudo apt update<br />
$ sudo apt install -y apt-transport-https ca-certificates curl software-properties-common<br />
$ curl -fsSL <nowiki>https://download.docker.com/linux/ubuntu/gpg</nowiki> | sudo apt-key add -<br />
$ sudo add-apt-repository "deb [arch=amd64] <nowiki>https://download.docker.com/linux/ubuntu</nowiki> $(lsb_release -cs) stable"<br />
$ sudo apt update<br />
$ apt-cache policy docker-ce<br />
<pre><br />
docker-ce:<br />
Installed: (none)<br />
Candidate: 5:18.09.0~3-0~ubuntu-bionic<br />
Version table:<br />
5:18.09.0~3-0~ubuntu-bionic 500<br />
500 <nowiki>https://download.docker.com/linux/ubuntu</nowiki> bionic/stable amd64 Packages<br />
</pre><br />
<br />
$ sudo apt install docker-ce -y<br />
$ sudo systemctl status docker<br />
<pre><br />
● docker.service - Docker Application Container Engine<br />
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)<br />
Active: active (running) since Tue 2018-12-04 13:40:36 PST; 4s ago<br />
Docs: https://docs.docker.com<br />
Main PID: 6134 (dockerd)<br />
Tasks: 16<br />
CGroup: /system.slice/docker.service<br />
└─6134 /usr/bin/dockerd -H unix://<br />
</pre><br />
<br />
===Red Hat-based distros===<br />
''Note: For this install, I will be using CentOS 7 (release 7.2.1511). Docker requires a 64-bit version of CentOS as well as a kernel version equal to or greater than 3.10. My system satisfies both requirements.''<br />
<br />
* Install Docker (the fast way):<br />
$ sudo yum update -y<br />
$ curl -fsSL <nowiki>https://get.docker.com/</nowiki> | sh<br />
<br />
* Install Docker (via a yum repo):<br />
$ sudo yum update -y<br />
$ sudo pip install docker-py<br />
$ cat << EOF > /etc/yum.repos.d/docker.repo<br />
[dockerrepo]<br />
name=Docker Repository<br />
baseurl=<nowiki>https://yum.dockerproject.org/repo/main/centos/7/</nowiki><br />
enabled=1<br />
gpgcheck=1<br />
gpgkey=<nowiki>https://yum.dockerproject.org/gpg</nowiki><br />
EOF<br />
$ sudo rpm -vv --import <nowiki>https://yum.dockerproject.org/gpg</nowiki><br />
$ sudo yum update -y<br />
$ sudo yum install docker-engine -y<br />
<br />
===Post-installation steps===<br />
* Check on the status of docker:<br />
$ sudo systemctl status docker<br />
<pre><br />
● docker.service - Docker Application Container Engine<br />
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)<br />
Active: active (running) since Tue 2016-07-12 12:31:08 PDT; 6s ago<br />
Docs: https://docs.docker.com<br />
Main PID: 3392 (docker)<br />
CGroup: /system.slice/docker.service<br />
├─3392 /usr/bin/docker daemon -H fd://<br />
└─3411 docker-containerd -l /var/run/docker/libcontainerd/docker-containerd.sock --runtime docker-runc --start-timeout 2m<br />
</pre><br />
<br />
* Make sure the docker service automatically starts after a machine reboot:<br />
$ sudo systemctl enable docker<br />
<br />
* Execute docker without <code>`sudo`</code>:<br />
$ sudo usermod -aG docker $(whoami)<br />
#~OR~<br />
$ sudo usermod -aG docker $USER<br />
Log out and log back in to use docker without <code>`sudo`</code>.<br />
<br />
* Check version of Docker installed:<br />
<pre><br />
$ docker version<br />
Client:<br />
Version: 17.05.0-ce<br />
API version: 1.29<br />
Go version: go1.7.5<br />
Git commit: 89658be<br />
Built: Thu May 4 22:10:54 2017<br />
OS/Arch: linux/amd64<br />
<br />
Server:<br />
Version: 17.05.0-ce<br />
API version: 1.29 (minimum version 1.12)<br />
Go version: go1.7.5<br />
Git commit: 89658be<br />
Built: Thu May 4 22:10:54 2017<br />
OS/Arch: linux/amd64<br />
Experimental: false<br />
</pre><br />
<br />
* Check that docker has been successfully installed and configured:<br />
$ docker run hello-world<br />
<pre><br />
...<br />
This message shows that your installation appears to be working correctly.<br />
...<br />
</pre><br />
<br />
As the above message shows, you now have a successful install of Docker on your machine and are ready to start building images and creating containers.<br />
<br />
==Miscellaneous==<br />
<br />
* Get the hostname of the host the Docker Engine is running on:<br />
$ docker info -f '<nowiki>{{ .Name }}</nowiki>'<br />
<br />
* Get the number of stopped containers:<br />
$ docker info --format '<nowiki>{{json .}}</nowiki>' | jq '.ContainersStopped'<br />
3<br />
<br />
* Get the number of images in the local registry:<br />
$ docker info --format '<nowiki>{{json .}}</nowiki>' | jq '.Images'<br />
92<br />
<br />
* Verify the Docker service is running:<br />
<pre><br />
$ curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock http://localhost/_ping<br />
OK<br />
</pre><br />
<br />
* Show docker disk usage:<br />
<pre><br />
$ docker system df<br />
TYPE TOTAL ACTIVE SIZE RECLAIMABLE<br />
Images 84 11 25.01GB 20.44GB (81%)<br />
Containers 20 0 768.1MB 768.1MB (100%)<br />
Local Volumes 16 2 2.693GB 2.628GB (97%)<br />
Build Cache 0 0 0B 0B<br />
</pre><br />
<br />
* Get ''just'' the version of Docker installed:<br />
<pre><br />
$ docker version --format '{{.Server.Version}}'<br />
20.10.7<br />
$ docker version --format '{{.Server.Version}}' 2>/dev/null || docker -v | awk '{gsub(/,/, "", $3); print $3}'<br />
20.10.7<br />
</pre><br />
<br />
==Install your own Docker private registry==<br />
''Note: I will use CentOS 7 for this install and assume you already have docker and docker-compose installed (see above).''<br />
<br />
For this install, I will assume you have a domain name registered somewhere. I will use <code>docker.example.com</code> as my example domain. Replace anywhere you see that below with your actual domain name.<br />
<br />
* Install dependencies:<br />
$ yum install -y nginx # used for the registry endpoint<br />
$ yum install -y httpd-tools # for the htpasswd utility<br />
<br />
* Set up the docker registry directory structure:<br />
$ mkdir -p /opt/docker-registry/{data,nginx{/conf.d,/certs},log}<br />
$ cd /opt/docker-registry<br />
<br />
* Create a docker-compose file:<br />
$ vim docker-compose.yml # and add the following:<br />
<br />
<pre><br />
nginx:<br />
image: "nginx:1.9"<br />
ports:<br />
- 5043:443<br />
links:<br />
- registry:registry<br />
volumes:<br />
- ./log/nginx/:/var/log/nginx:rw<br />
- ./nginx/conf.d:/etc/nginx/conf.d:ro<br />
- ./nginx/certs:/etc/nginx/certs:ro<br />
registry:<br />
image: registry:2<br />
ports:<br />
- 127.0.0.1:5000:5000<br />
environment:<br />
REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data<br />
volumes:<br />
- ./data:/data<br />
</pre><br />
<br />
* Create an Nginx configuration file:<br />
$ vim /opt/docker-registry/nginx/conf.d/registry.conf # and add the following:<br />
<br />
<pre><br />
upstream docker-registry {<br />
server registry:5000;<br />
}<br />
<br />
server {<br />
listen 443;<br />
server_name docker.example.com;<br />
<br />
# SSL<br />
ssl on;<br />
ssl_certificate /etc/nginx/certs/docker.example.com.crt;<br />
ssl_certificate_key /etc/nginx/certs/docker.example.com.key;<br />
<br />
# disable any limits to avoid HTTP 413 for large image uploads<br />
client_max_body_size 0;<br />
<br />
# required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)<br />
chunked_transfer_encoding on;<br />
<br />
location /v2/ {<br />
# Do not allow connections from docker 1.5 and earlier<br />
# docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents<br />
if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {<br />
return 404;<br />
}<br />
<br />
proxy_pass http://docker-registry;<br />
proxy_set_header Host $http_host; # required for docker client's sake<br />
proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP<br />
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;<br />
proxy_set_header X-Forwarded-Proto $scheme;<br />
proxy_read_timeout 900;<br />
<br />
add_header 'Docker-Distribution-Api-Version:' 'registry/2.0' always;<br />
<br />
# To add basic authentication to v2 use auth_basic setting plus add_header<br />
auth_basic "Restricted access to Docker Registry";<br />
auth_basic_user_file /etc/nginx/conf.d/registry.htpasswd;<br />
}<br />
}<br />
</pre><br />
<br />
$ cd /opt/docker-registry/nginx/conf.d<br />
$ htpasswd -c registry.htpasswd <username> # replace <username> with your actual username<br />
$ htpasswd registry.htpasswd <username2> # [optional] add a 2nd user<br />
<br />
* Setup your own certificate signing authority (for use with SSL):<br />
<br />
$ cd /opt/docker-registry/nginx/certs<br />
<br />
* Generate a new root key:<br />
<br />
$ openssl genrsa -out docker-registry-CA.key 2048<br />
<br />
* Generate a root certificate (enter anything you like at the prompts):<br />
<br />
$ openssl req -x509 -new -nodes -key docker-registry-CA.key -days 3650 -out docker-registry-CA.crt<br />
<br />
Then generate a key for your server (this is the file referenced by <code>ssl_certificate_key</code> in the Nginx configuration above):<br />
<br />
$ openssl genrsa -out docker.example.com.key 2048<br />
<br />
Now we have to make a certificate signing request (CSR). After you type the following command, OpenSSL will prompt you to answer a few questions. Enter anything you like for the first few; however, when OpenSSL prompts you to enter the "Common Name", make sure to enter the domain or IP of your server.<br />
<br />
$ openssl req -new -key docker.example.com.key -out docker.example.com.csr<br />
<br />
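As an aside, the prompts can be skipped entirely by passing the subject on the command line (a sketch, assuming the same example domain):<br />
 $ openssl req -new -key docker.example.com.key -subj "/CN=docker.example.com" -out docker.example.com.csr<br />
<br />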
* Sign the certificate request:<br />
<br />
$ openssl x509 -req -in docker.example.com.csr -CA docker-registry-CA.crt -CAkey docker-registry-CA.key -CAcreateserial -out docker.example.com.crt -days 3650<br />
<br />
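As an optional sanity check, verify the freshly signed certificate against the CA before wiring it into Nginx:<br />
 $ openssl verify -CAfile docker-registry-CA.crt docker.example.com.crt<br />
 docker.example.com.crt: OK<br />
 $ openssl x509 -noout -subject -dates -in docker.example.com.crt<br />
<br />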
* Make any clients that will use the certificate authority we created above trust it as a "legitimate" CA. Run the following commands on the Docker registry server and on any hosts that will be communicating with the Docker registry server. (On CentOS/RHEL, trust anchors live under <code>/etc/pki/ca-trust/source/anchors/</code>; on Debian/Ubuntu, copy the CA into <code>/usr/local/share/ca-certificates/</code> and run <code>update-ca-certificates</code> instead.)<br />
<br />
 $ sudo cp /opt/docker-registry/nginx/certs/docker-registry-CA.crt /etc/pki/ca-trust/source/anchors/<br />
 $ sudo update-ca-trust<br />
<br />
* Restart the Docker daemon in order for it to pick up the changes to the certificate store:<br />
<br />
$ sudo systemctl restart docker.service<br />
<br />
* Bring up the associated Docker containers:<br />
$ docker-compose up -d<br />
<br />
* Your Docker registry directory structure should look like the following:<br />
<pre><br />
$ cd /opt/docker-registry && tree .<br />
.<br />
├── data<br />
├── docker-compose.yml<br />
├── log<br />
│ └── nginx<br />
│ ├── access.log<br />
│ └── error.log<br />
└── nginx<br />
├── certs<br />
│ ├── docker-registry-CA.crt<br />
│ ├── docker-registry-CA.key<br />
│ ├── docker-registry-CA.srl<br />
│ ├── docker.example.com.crt<br />
│ ├── docker.example.com.csr<br />
│ └── docker.example.com.key<br />
└── conf.d<br />
├── registry.conf<br />
└── registry.htpasswd<br />
</pre><br />
<br />
* To access the private Docker registry from a client machine (any machine, really), first add the SSL certificate you created earlier to the client machine:<br />
<br />
$ cat /opt/docker-registry/nginx/certs/docker-registry-CA.crt # copy contents<br />
# On client machine:<br />
$ sudo vim /usr/local/share/ca-certificates/docker-registry-CA.crt # paste contents<br />
$ sudo update-ca-certificates # You should see "1 added" in the output<br />
<br />
* Restart Docker on the client machine to make sure it reloads the system's CA certificates:<br />
<br />
$ sudo service docker restart<br />
<br />
* Test that you can reach your private Docker registry:<br />
$ curl -k <nowiki>https://USERNAME:PASSWORD@docker.example.com:5043/v2/</nowiki><br />
{} # <- proper output<br />
<br />
* Now, test that you can login with Docker:<br />
$ docker login <nowiki>https://docker.example.com:5043</nowiki><br />
<br />
If that returns with "Login Succeeded", your private Docker registry is up and running!<br />
<br />
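As a final smoke test (a sketch; the image name is an arbitrary choice), tag and push an image to the new registry, then list its contents via the v2 catalog endpoint:<br />
<pre><br />
$ docker pull alpine<br />
$ docker tag alpine docker.example.com:5043/alpine:latest<br />
$ docker push docker.example.com:5043/alpine:latest<br />
$ curl -su USERNAME:PASSWORD https://docker.example.com:5043/v2/_catalog<br />
{"repositories":["alpine"]}<br />
</pre><br />
<br />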
'''This section is incomplete. It will be updated when I have time.'''<br />
<br />
==Docker environment variables==<br />
''Note: See [https://docs.docker.com/engine/reference/commandline/cli/#environment-variables here] for the most up-to-date list of environment variables.''<br />
<br />
The following environment variables are supported by the docker command line:<br />
<br />
;<code>DOCKER_API_VERSION</code> : The API version to use (e.g., 1.19)<br />
;<code>DOCKER_CONFIG</code> : The location of your client configuration files.<br />
;<code>DOCKER_CERT_PATH</code> : The location of your authentication keys.<br />
;<code>DOCKER_DRIVER</code> : The graph driver to use.<br />
;<code>DOCKER_HOST</code> : Daemon socket to connect to.<br />
;<code>DOCKER_NOWARN_KERNEL_VERSION</code> : Prevent warnings that your Linux kernel is unsuitable for Docker.<br />
;<code>DOCKER_RAMDISK</code> : If set, this will disable "pivot_root".<br />
;<code>DOCKER_TLS_VERIFY</code> : When set Docker uses TLS and verifies the remote.<br />
;<code>DOCKER_CONTENT_TRUST</code> : When set Docker uses notary to sign and verify images. Equates to <code>--disable-content-trust=false</code> for build, create, pull, push, run.<br />
;<code>DOCKER_CONTENT_TRUST_SERVER</code> : The URL of the Notary server to use. This defaults to the same URL as the registry.<br />
;<code>DOCKER_TMPDIR</code> : Location for temporary Docker files.<br />
<br />
Because Docker is developed using "Go", one can also use any environment variables used by the "Go" runtime. In particular, the following might be useful:<br />
<br />
;<code>HTTP_PROXY</code><br />
;<code>HTTPS_PROXY</code><br />
;<code>NO_PROXY</code><br />
<br />
* Example usage:<br />
$ export DOCKER_API_VERSION=1.19<br />
<br />
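For instance, pointing the client at a remote, TLS-protected daemon is just a matter of exporting a few of the above (a sketch; the IP address and certificate path are placeholders):<br />
 $ export DOCKER_HOST=tcp://192.168.1.50:2376<br />
 $ export DOCKER_TLS_VERIFY=1<br />
 $ export DOCKER_CERT_PATH=~/.docker # must contain ca.pem, cert.pem, and key.pem<br />
 $ docker ps # now runs against the remote daemon over TLS<br />
<br />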
==See also==<br />
* [[containerd]]<br />
<br />
==References==<br />
<references/><br />
<br />
==External links==<br />
* [https://www.docker.com/ Official website]<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:DevOps]]<br />
[[Category:Linux Command Line Tools]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Category:Travel_Log&diff=8262Category:Travel Log2023-02-08T06:25:49Z<p>Christoph: /* Flights */</p>
<hr />
<div>This category will be my, as yet, unorganised '''Travel Log''' to many places around the world. (Note: The following is very much an ''incomplete'' travel log.)<br />
<br />
== Auto ==<br />
<br />
===Berlin trip (2006)===<br />
* Monaco &rarr; Milano &rarr; Ljubljana &rarr; Rotterdam &rarr; Berlin &rarr; Copenhagen &rarr; Monaco: April 2006<br />
: [http://triptracker.net/trip/1165/ TripTracker]<br />
: 1-Apr-2006 (14h20): Monaco &rarr; Milano<br />
: 2-Apr-2006 (23h30): Milano &rarr; Ljubljana<br />
: 3-Apr-2006 &ndash; 5-Apr-2006: Slovenia (Ljubljana, Novo Mesto, Kranj, Postojna, Jesenice, etc.)<br />
: 5-Apr-2006 (12h30): |&larr; Austria (Villach)<br />
: 5-Apr-2006 (15h15): |&larr; Germany<br />
: 5-Apr-2006 (19h15): Stuttgart<br />
: 5-Apr-2006 (20h20): Karlsruhe<br />
: 5-Apr-2006 (23h30): Köln<br />
: 5-Apr-2006 (00h10): |&larr; The Netherlands<br />
: 5-Apr-2006 (02h00): Rotterdam<br />
: 7-Apr-2006 (12h00): |&rarr; Rotterdam<br />
: 7-Apr-2006 (14h45): |&larr; Germany<br />
: 7-Apr-2006 (17h00): Hannover<br />
: 7-Apr-2006 (18h30): Magdeburg<br />
: 7-Apr-2006 (20h00): Berlin<br />
: 8-Apr-2006 (15h30): |&rarr; Berlin<br />
: 8-Apr-2006 (18h00): Rostock<br />
: 8-Apr-2006 (19h30): Ferry (|&rarr; Germany from Rostock Harb.)<br />
: 8-Apr-2006 (21h15): Ferry (|&larr; Denmark at Gedsen)<br />
: 8-Apr-2006 (23h20): København<br />
: 9-Apr-2006 (06h30): |&rarr; København<br />
: 9-Apr-2006 (09h00): Ferry (|&rarr; Denmark from Gedsen)<br />
: 9-Apr-2006 (11h00): Ferry (|&larr; Germany at Rostock Harb.)<br />
: 9-Apr-2006 (13h30): |&larr; Berlin<br />
: 9-Apr-2006 (14h00): |&rarr; Berlin<br />
: 9-Apr-2006 (15h50): Dresden<br />
:10-Apr-2006 (00h45): |&larr; Slovenia<br />
:10-Apr-2006 (01h40): Ljubljana<br />
:10-Apr-2006 (02h40): Postojna<br />
:10-Apr-2006 (13h15): |&larr; Italy<br />
:10-Apr-2006 (15h00): Padova<br />
:10-Apr-2006 (15h40): Verona<br />
:10-Apr-2006 (18h50): Genova<br />
:10-Apr-2006 (20h35): |&larr; France<br />
:10-Apr-2006 (20h45): |&larr; Monaco<br />
<br />
===Canada trip (2001)===<br />
''Note: The total trip covered 11,893 km (7,390 miles).''<br />
*Corvallis, OR &rarr; Boston, MA &rarr; Quebec &rarr; Ontario &rarr; Manitoba &rarr; Saskatchewan &rarr; Alberta &rarr; British Columbia &rarr; Corvallis, OR<br />
** 01-Sep-2001 (??h??): |&rarr; Corvallis, OR<br />
** 06-Sep-2001 (15h45): |&larr; Massachusetts<br />
** 13-Sep-2001 (13h15): |&rarr; Westborough, MA<br />
** 13-Sep-2001 (17h46): Augusta, ME<br />
** 13-Sep-2001 (18h15): |&larr; CANADA (into Quebec)<br />
** 14-Sep-2001 (02h06): Grande Allee Est., Quebec<br />
** 14-Sep-2001 (15h01): Cap-Madeleine, PQ<br />
** 15-Sep-2001 (17h44): Thunder Bay, ON<br />
** 14-Sep-2001 (17h45): |&larr; Ontario<br />
** 14-Sep-2001 (20h03): Cobden, ON<br />
** 15-Sep-2001 (12h02): Sudbury, ON<br />
** 15-Sep-2001 (10h25): Wawa, ON<br />
** 15-Sep-2001 (22h01): Kenora, ON<br />
** 15-Sep-2001 (10h37): |&larr; Manitoba<br />
** 16-Sep-2001 (10h53): Brandon, MB<br />
** 16-Sep-2001 (12h50): |&larr; Saskatchewan<br />
** 16-Sep-2001 (16h09): Herbert, SK<br />
** 16-Sep-2001 (18h06): |&larr; Alberta<br />
** 16-Sep-2001 (23h00): |&larr; British Columbia<br />
** 17-Sep-2001 (00h30): |&larr; USA (into Idaho)<br />
** 17-Sep-2001 (03h36): Coeur d'Alene, ID<br />
** 17-Sep-2001 (05h30): |&larr; Oregon<br />
<br />
===Ireland trip (1999-2000)===<br />
* 26-Dec-1999 (??h??): Dublin, Ireland<br />
* 26-Dec-1999 (16h13): Lord Edward St., Dublin<br />
* 27-Dec-1999 (??h??): Kinlay House, Christchurch, 2-12 Lord Edward St., Dublin, Ireland<br />
* 2?-Dec-1999 (??h??): Kilkenny<br />
* 28-Dec-1999 (12h27): Patrick St., Cork<br />
* 28-Dec-1999 (17h12): Mallow, Co. Cork<br />
* 29-Dec-1999 (??h??): Co. Kerry<br />
* ??-Dec-1999 (??h??): Saratoga House (Bed & Breakfast), Muckross Road, Killarney, Ireland<br />
* 29-Dec-1999 (15h09): Chapel St., Limerick<br />
* 29-Dec-1999 (15h18): Eimear<br />
* 30-Dec-1999 (??h??): Ballybofey<br />
* 30-Dec-1999 (15h51): Greysteel<br />
* 30-Dec-1999 (??h??): O'Connell St., Sligo<br />
* 30-Dec-1999 (??h??): Petra, Galway<br />
* 30-Dec-1999 (??h??): Sligo<br />
* 30-Dec-1999 (??h??): The Linen House Backpackers Hostel, 18-20 Kent Street, Belfast, Ireland<br />
* 01-Jan-2000 (14h46): Arthur Sq., Belfast<br />
* 02-Jan-2000 (06h34): Dublin Airport<br />
<br />
===Miscellaneous (Europe)===<br />
* Budapest, Hungary &rarr; Dubrovnik, Croatia: June/July 2018 (round-trip)<br />
* ''The Cliffs of Møn'', DK: Oct-2005<br />
* Budapest, Hungary &rarr; Vienna, Austria &rarr; Salzburg, Austria &rarr; Chiemsee, Germany: Oct-1996 (round-trip)<br />
* Zagreb, Croatia &rarr; Ljubljana, Slovenia &rarr; Graz, Austria &rarr; Budapest, Hungary: Sep-1996<br />
* Zagreb, Croatia &rarr; Ljubljana, Slovenia: Sep-1996 (round-trip)<br />
* Budapest, Hungary &rarr; Zagreb, Croatia: Sep-1996<br />
* Budapest, Hungary &rarr; Vienna, Austria &rarr; Salzburg, Austria &rarr; Berchtesgaden, Germany &rarr; Innsbruck, Austria &rarr; Liechtenstein &rarr; Switzerland: Aug-1996 (round-trip)<br />
* Warsaw, Poland &rarr; Budapest, Hungary: September 1994<br />
* Budapest, Hungary &rarr; Slovakia (11-Nov-1993) &rarr; Warsaw, Poland: November 1993<br />
* Vienna, Austria &rarr; Budapest, Hungary: 28-Sep-1993<br />
<br />
===Miscellaneous (South America)===<br />
* Cuenca, Ecuador &rarr; Riobamba, Ecuador &rarr; Ambato, Ecuador &rarr; Quito, Ecuador: 1993 (round-trip)<br />
* Quito, Ecuador &#187; Ipiales, Colombia: 1993 (round-trip)<br />
* Guayaquil, Ecuador &rarr; Santo Domingo de Los Colorados, Ecuador &rarr; Quito, Ecuador: 1993<br />
* Guayaquil, Ecuador &rarr; Salinas, Ecuador: 1993 (round-trip)<br />
* Tumbes, Peru &rarr; Guayaquil, Ecuador: 21-Dec-1992<br />
<br />
===Miscellaneous (North America)===<br />
* Seattle, WA &#187; Winthrop, WA &#187; Leavenworth, WA &#187; Issaquah, WA &#187; Seattle, WA: June 2022<br />
* Seattle, WA &#187; Winthrop, WA &#187; Tiger, WA &#187; Spokane, WA &#187; Seattle, WA: May 2022 (1,200 km/744 mi)<br />
* Seattle, WA &#187; Portland, OR &#187; Grants Pass, OR &#187; Crescent City, CA &#187; Redwood National Forest &#187; Newport, OR &#187; Astoria, OR &#187; Elma, WA &#187; Seattle, WA: November 2021 (1,881 km/1,169 mi)<br />
* Seattle, WA &#187; Mt Saint Helens &#187; Mt Adams &#187; Stonehenge Memorial &#187; Multnomah Falls &#187; Seattle, WA: September 2021 (914 km/568 mi)<br />
* Seattle, WA &#187; Walla Walla, OR &#187; Joseph, OR &#187; Lewiston, ID &#187; Grand Coulee, WA &#187; Seattle, WA: June 2021 (1,421 km/883 mi)<br />
* Seattle, WA &#187; Pendleton, OR &#187; Craters of the Moon National Monument & Preserve &#187; Idaho Springs, ID &#187; Jackson, WY &#187; Grand Teton National Park &#187; Yellowstone National Park &#187; Missoula, MT &#187; Coeur d'Alene, ID &#187; Seattle, WA: September 2020 (2,746 km/1,706 mi)<br />
* Seattle, WA &#187; Coeur d'Alene, ID &#187; Missoula, MT &#187; Glacier National Park, MT &#187; Seattle, WA: July 2019 (1,984 km/1,233 mi)<br />
* Seattle, WA &#187; Corvallis, OR: November 2018 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2017 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2016 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2015 (round-trip)<br />
* Texas &#187; Oklahoma &#187; Kansas &#187; Nebraska &#187; South Dakota &#187; Wyoming &#187; Montana &#187; Idaho &#187; Seattle, WA: September 2015 (4,000 km/4,290 mi)<br />
* Seattle, WA &#187; Oregon &#187; Idaho &#187; Utah &#187; Wyoming &#187; Colorado &#187; Kansas &#187; Oklahoma &#187; Texas: 11-16 May 2013<br />
* Seattle, WA &#187; Port Angeles, WA &#187; Hurricane Ridge, WA: 28-Dec-2012 (round-trip)<br />
* Seattle, WA &#187; Portland, OR: 4-Dec-2012 (round-trip)<br />
* Chicago, IL &#187; Milwaukee, WI &#187; Minneapolis, MN &#187; Fargo, ND &#187; Billings, MT &#187; Coeur d'Alene, ID &#187; Seattle, WA: 25-26 June 2012 (3,357 km/2,086 mi)<br />
* St. Louis, MO &#187; Chicago, IL: 31-Dec-2011<br />
* Chicago, IL &#187; St. Louis, MO: 5-Jul-2011<br />
* Milwaukee, WI &#187; Chicago, IL: 30-Jun-2011<br />
* Pittsburgh, PA &#187; New York City, NY: April 2005 (round-trip)<br />
* Pittsburgh, PA &#187; Bethlehem, PA &#187; Westborough, MA &#187; New York City, NY: December 2004 (round-trip)<br />
* Pittsburgh, PA &#187; Boston, MA: November 2004 (round-trip)<br />
* Corvallis, OR &#187; Salt Lake City, UT &#187; Houston, TX &#187; Atlanta, GA &#187; Pittsburgh, PA: September 2004<br />
* Corvallis, OR &#187; Boston, MA: 2001, 2002 (round-trip)<br />
* Corvallis, OR &#187; Vancouver, BC, Canada (round-trip)<br />
* Corvallis, OR &#187; Tijuana, Mexico: 7-Sep-1999 (round-trip)<br />
* Los Angeles, CA &#187; Corvallis, OR: January 1998<br />
* Houston, TX &#187; Milwaukee, WI &#187; Menominee, MI: May 1995 (round-trip)<br />
<br />
== Bus / Train / Ferry ==<br />
===Spain trip (2006)===<br />
* Monaco &#187; Cannes &#187; Marseille &#187; Montpellier St-Ro &#187; Barcelona; April 2006 (round-trip)<br />
** 24-Apr-06 18h35: |&rarr; Nice, France [SNCF train]<br />
** 24-Apr-06 19h00: Antibes, FR<br />
** 24-Apr-06 19h07: Cannes, FR<br />
** 24-Apr-06 19h30: B. sur-Mer, FR<br />
** 24-Apr-06 19h39: San Raphael-Valescure, FR<br />
** 24-Apr-06 20h14: Les Arcs-Drag., FR<br />
** 24-Apr-06 20h56: Toulon, FR<br />
** 24-Apr-06 21h35: Marseille, FR<br />
** 25-Apr-06 15h05: |&rarr; Marseille, FR<br />
** 25-Apr-06 16h16: Nîmes, FR<br />
** 25-Apr-06 17h21: Montpellier St-Ro, FR<br />
** 25-Apr-06 18h42: Béziers, FR<br />
** 25-Apr-06 19h35: Perpignan, FR<br />
** 25-Apr-06 20h15: Portbou, Spain (ES) [''border'']<br />
** 25-Apr-06 22h30: Barcelona, ES<br />
** 27-Apr-06 19h24: |&rarr; Barcelona, ES [Renfe train]<br />
** 27-Apr-06 22h05: Cerbere, FR [''border'']<br />
** 28-Apr-06 08h37: Nice, FR<br />
** 28-Apr-06 10h00: Monaco<br />
<br />
===Miscellaneous (Europe)===<br />
* Tallinn, Estonia &rarr; Helsinki, Finland: January 2020 (round-trip)<br />
* Lisbon, Portugal &rarr; Porto, Portugal: Nov-2016 (round-trip)<br />
* København, DK &#187; Berlin, D: 09-Apr-2006 [+Ferry]<br />
* Berlin, D &#187; København, DK: 08-Apr-2006 (15h15) [+Ferry]<br />
* Ljubljana, Slovenia &#187; Villach HBF, Austria: 18-Aug-1997<br />
* Stockholm C &#187; Oslo S: 15-Aug-1997 (SJ train)<br />
* Salzburg, Austria &#187; Ljubljana, Slovenia: 25-Aug-1997 (&#214;sterreichische Bundesbahnen train (&#214;BB))<br />
* Haslev, DK &#187; Næstved, DK: 24-Aug-1997 (DSB train)<br />
* København &#187; Stockholm C: 14-Aug-1997 (DSB train)<br />
* Oslo S &#187; Bergen: 16-Aug-1997<br />
* Næstved, DK &#187; Rødby Færge, DK: 24-Aug-1997<br />
* Salzburg HBF &#187; Villach HBF (&uuml;ber Schwarzach-St. Veit Bad Gastein): 25-Aug-1997 (&#214;BB train)<br />
* Oslo S &#187; Trondheim: 18-Aug-1997<br />
* Grensen (Scandinavia): 16-Aug-1997<br />
* Abisko Turiststation - STF: 20-Aug-1997<br />
* Abisko Turiststation - STF: 21-Aug-1997<br />
* Germany: 24-Aug-1997 (DB train)<br />
* Stockholm S:T Eriksgatan: 15-Aug-1997<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: Jun-1997 (round-trip)<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: Mar-1997 (round-trip)<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: (28-Nov-1997/30-Nov-1997) (round-trip)<br />
* Budapest, Hungary &rarr; Ljubljana, Slovenia: 8-Nov-1996<br />
* Budapest, Hungary &rarr; Slovakia: 18-Aug-1995 (round-trip)<br />
* Budapest, Hungary &rarr; Vienna, Austria: 9-Feb-1995 (round-trip)<br />
* Moscow, Russia &rarr; Warsaw, Poland: Sep-1994<br />
* Moscow, Russia &rarr; Brest, Belarus: Aug-1994 (round-trip)<br />
* Moscow, Russia &rarr; Minsk, Belarus: Jul-1994 (round-trip)<br />
* Warsaw, Poland &#187; Moscow, Russia: Jun-1994<br />
* Warsaw, Poland &rarr; Vilnius, Lithuania &rarr; Riga, Latvia: (12-Jan-1994/??-Jan-1994) (round-trip)<br />
<br />
===Miscellaneous (South America)===<br />
* Arequipa, Peru &rarr; Lima, Peru: 1992<br />
* Arequipa, Peru &rarr; Iquique, Chile: (17-Jul-1992/20-Jul-1992) (round-trip)<br />
* Lima, Peru &rarr; Arequipa, Peru: 1992<br />
* Lima, Peru &rarr; La Paz, Bolivia: (19-May-1991/6-Jun-1991) (round-trip)<br />
* Lima, Peru &rarr; Quito, Ecuador: (29-Nov-1990/11-Dec-1990) (round-trip)<br />
* Lima, Peru &rarr; Quito, Ecuador: (6-Jul-1990/20-Jul-1990) (round-trip)<br />
<br />
==Flights==<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): February 2023 [RT]<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2022 [RT]<br />
* Seattle, WA (SEA) ✈ Milwaukee, WI (MKE): August 2022 [RT]<br />
* Kyiv, Ukraine (KBP) ✈ Frankfurt, Germany (FRA) ✈ Seattle, WA (SEA): December 2021<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD) ✈ Frankfurt, Germany (FRA) ✈ Kyiv, Ukraine (KBP): December 2021<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2021 [RT]<br />
* Memphis, TN (MEM) ✈ Atlanta, GA (ATL) ✈ Seattle, WA (SEA): June 2021<br />
* Seattle, WA (SEA) ✈ Salt Lake City, UT (SLC) ✈ Memphis, TN (MEM): June 2021<br />
* Seattle, WA (SEA) ✈ Milwaukee, WI (MKE): May 2021 [RT]<br />
* Tallinn, Estonia (TLL) ✈ Stockholm, Sweden (ARN) ✈ Newark, New Jersey (EWR) ✈ Seattle, WA (SEA): January 2020<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD) ✈ København, DK (CPH) ✈ Helsinki, Finland (HEL) ✈ Tallinn, Estonia (TLL): December 2019<br />
* Seattle, WA (SEA) ✈ San Francisco, CA (SFO): October 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Miami, FL (MIA): September 2019 [RT]<br />
* Seattle, WA (SEA) ✈ San Francisco, CA (SFO): September 2019 [RT]<br />
* Seattle, WA (SEA) ✈ San Jose, CA (SJC): August 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Denver, CO (DEN): May 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Charlotte, NC (CLT): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Santa Ana, CA (SNA): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ San Jose, CA (SJC): September 2018 [RT]<br />
* Budapest, Hungary (BUD) ✈ Brussels, Belgium (BRU) ✈ Newark, New Jersey (EWR) ✈ Seattle, WA (SEA): July 2018<br />
* Seattle, WA (SEA) ✈ Toronto, Canada (YYZ) ✈ Budapest, Hungary (BUD): June 2018<br />
* Seattle, WA (SEA) ✈ Reno, NV (RNO): May 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Reykjavík, Iceland (RKV): December 2017 [RT]<br />
* Seattle, WA (SEA) ✈ Kona, Hawaii (KOA): September 2017 [RT]<br />
* Seattle, WA (SEA) ✈ Salt Lake City, UT (SLC): August 2017 [RT]<br />
* Amsterdam, NL (AMS) ✈ Seattle, WA (SEA): November 2016<br />
* Lisbon, Portugal ✈ Amsterdam, NL (AMS): November 2016<br />
* Paris, FR (CDG) ✈ Lisbon, Portugal: November 2016<br />
* Seattle, WA (SEA) ✈ Paris, FR (CDG): November 2016<br />
* Seattle, WA (SEA) ✈ Portland, OR (PDX): November 2016 [RT]<br />
* Seattle, WA (SEA) ✈ Las Vegas, NV (LAS): June 2016 [RT]<br />
* Houston, TX (IAH) ✈ Seattle, WA (SEA): September 2015 [RT]<br />
* Houston, TX (IAH) ✈ San Francisco, CA (SFO): August 2015 [RT]<br />
* Houston, TX (IAH) ✈ Madison, WI (MSN): March 2015 [RT]<br />
* Houston, TX (IAH) ✈ Amsterdam, NL (AMS): March 2015 [RT]<br />
* Seattle, WA (SEA) ✈ Milwaukee (MKE): June 2011<br />
* Seattle, WA (SEA) ✈ Phoenix, AZ (PHX) ✈ Chicago, IL (ORD): October 2010 [RT]<br />
* Seattle, WA (SEA) ✈ Los Angeles, CA (LAX): December 2007 [RT]<br />
* København, DK (CPH) ✈ Seattle, WA (SEA): June 2006<br />
* Heathrow, UK ✈ København, DK (CPH): June 2006<br />
* Nice, FR ✈ Heathrow, UK: June 2006<br />
* København, DK (CPH) ✈ Nice, FR (NCE): February 2006<br />
* Washington Dulles ✈ København, DK: August 2005<br />
* Pittsburgh, PA (PIT) ✈ Washington Dulles: August 2005<br />
* Portland, OR (PDX) ✈ Pittsburgh, PA (PIT): Summer 2004 [RT]<br />
* Eugene, OR ✈ Houston, TX (IAH): February 2002 [RT]<br />
* Portland, OR (PDX) ✈ Boston, MA: December 2002 [RT]<br />
* Seattle, WA (SEA) ✈ Portland, OR (PDX): January 2000<br />
* Amsterdam, NL (AMS) ✈ Seattle, WA (SEA): January 2000<br />
* Dublin, Ireland ✈ Amsterdam, NL (AMS): January 2000<br />
* Amsterdam (AMS) ✈ Dublin, Ireland: December 1999<br />
* Seattle, WA (SEA) ✈ Amsterdam, NL (AMS): December 1999<br />
* Portland, OR (PDX) ✈ Seattle, WA (SEA): December 1999<br />
* Chicago (ORD) ✈ Los Angeles (LAX): December 1997<br />
* Green Bay, WI (GRB) ✈ Chicago (ORD): December 1997<br />
* Chicago (ORD) ✈ Green Bay, WI (GRB): December 1997<br />
* Rome, Italy (FCO) ✈ Chicago, IL (ORD): December 1997<br />
* Trieste, Italy (TRS) ✈ Rome, Italy (FCO): December 1997<br />
* Houston, TX (IAH) ✈ Budapest, Hungary (BUD): July 1996<br />
* Narita, Japan ✈ Los Angeles (LAX) ✈ Houston, TX: June 1996<br />
* Narita, Japan ✈ Los Angeles (LAX) ✈ Houston, TX: March 1996 [RT]<br />
* Narita, Japan ✈ Taipei, Taiwan: December 1995 [RT]<br />
* Los Angeles, CA (LAX) ✈ Narita, Japan: October 1995<br />
* Houston, TX (IAH) ✈ Los Angeles (LAX): October 1995<br />
* Budapest, Hungary (BUD) ✈ Houston, TX (IAH): September 1995<br />
* Budapest, Hungary (BUD) ✈ Houston, TX (IAH): May 1995 [RT]<br />
* Paris, FR (CDG) ✈ Vienna, Austria: September 1993<br />
* Quito, Ecuador ✈ Caracas, Venezuela (CCS) ✈ Paris, France: 1993<br />
* Lima, Peru ✈ Tumbes, Peru: December 1992<br />
* Boston, MA ✈ Miami, FL ✈ Lima, Peru: <br />
* Amsterdam, NL (AMS) ✈ Chicago, IL (ORD): <br />
* Boston, MA ✈ Amsterdam, NL (AMS):<br />
<br />
== Individual Places ==<br />
=== Ireland ===<br />
* Dublin<br />
** '''Dublin''' (Baile &Aacute;tha Cliath)<br />
* Kildare<br />
** Naas<br />
* Laois<br />
* Carlow<br />
** Carlow (Ceatharlach)<br />
** Royal Oak<br />
* Kilkenny<br />
** '''Kilkenny''' (Cill Chainnigh)<br />
** Callan<br />
* Tipperary<br />
** Glenbower<br />
** Clonmel (Cluain Meala)<br />
** Cahir<br />
** Burncourt<br />
* Cork<br />
** Fermoy<br />
** '''Cork''' (Corcaigh)<br />
** Fota<br />
** Cobh (An C&oacute;bh)<br />
** '''Blarney'''<br />
** Macroom<br />
** Ballyvourney<br />
* Kerry<br />
** ''Derrynasaggart Mts''<br />
** Poulgorm Br<br />
** '''Killarney''' (Cill Airne)<br />
** Farranfore<br />
* Limerick<br />
** Abbeyfeale<br />
** ''Mullaghareirk Mts''<br />
** Newcastle West<br />
** Croagh<br />
** '''Limerick''' (Luimneach)<br />
* Clare<br />
** Bunratty<br />
** Ennis (Inis)<br />
** Ennistymon<br />
** Liscannor<br />
** ''Cliffs of Moher''<br />
** Doolin<br />
** Lisdoonvarna<br />
** Ballyvaughan<br />
** Bealaclugga<br />
** Burren<br />
* Galway<br />
** Kinvarra<br />
** Ballinderreen<br />
** Oranmore<br />
** '''Galway''' (Gaillimh)<br />
** Claregalway<br />
** Tuam<br />
* Mayo<br />
** Claremorris<br />
** Cloonfallagh<br />
** Charlestown<br />
* Sligo<br />
** Curry<br />
** Tubbercurry<br />
** Collooney<br />
** '''Sligo''' (Sligeach)<br />
** ''Dartry Mts''<br />
* Leitrim<br />
* Donegal<br />
** Bundoran<br />
** Ballyshannon<br />
** Donegal (D&uacute;n na nGall)<br />
** Ballybofey<br />
** Clady<br />
* Tyrone<br />
** '''Strabane''' (Northern Ireland)<br />
* Londonderry<br />
** Derry (Londonderry)<br />
** Eglinton<br />
** Ballykelly<br />
** Limavady<br />
** Coleraine<br />
* Antrim<br />
** Derrykelghan<br />
** Moss-side<br />
** Ballycastle<br />
** ''Antrim Hills''<br />
** Ballintoy<br />
** ''Carrick-a-Rede Rope Bridge''<br />
** ''Giant's Causeway''<br />
** Craignamaddy<br />
** Ballymoney<br />
** Ballymena<br />
** Antrim<br />
** ''Lough Neagh'' (lake)<br />
** Dunadry<br />
** Newtownabbey<br />
** '''Belfast'''<br />
* Down<br />
** Lisburn<br />
** Banbridge<br />
* Armagh<br />
** Newry<br />
* Louth<br />
** Dundalk (D&uacute;n Dealgan)<br />
** Dunleen<br />
** Drogheda (Droichead &Aacute;tha)<br />
* Meath<br />
** Julianstown<br />
* Dublin<br />
** Balbriggan<br />
** Swords<br />
<br />
[[Category:World Travels]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Jq&diff=8261Jq2023-01-25T22:14:12Z<p>Christoph: /* See also */</p>
<hr />
<div>'''jq''' is a lightweight and flexible command-line JSON processor. jq is like [[sed]] for JSON data - you can use it to slice and filter and map and transform structured data with the same ease that sed, [[awk]], grep, and friends let you play with text.<br />
<br />
==Example usage==<br />
<br />
$ cat azones.json<br />
<pre><br />
{<br />
"availabilityZoneInfo": [<br />
{<br />
"hosts": {<br />
"node-1.example.com": {<br />
"nova-compute": {<br />
"active": true,<br />
"available": true<br />
}<br />
},<br />
"node-2.example.com": {<br />
"nova-compute": {<br />
"active": true,<br />
"available": true<br />
}<br />
}<br />
},<br />
"zoneName": "az1",<br />
"zoneState": {<br />
"available": true<br />
}<br />
},<br />
{<br />
"hosts": {<br />
"node-3.example.com": {<br />
"nova-compute": {<br />
"active": true,<br />
"available": true<br />
}<br />
},<br />
"node-4.example.com": {<br />
"nova-compute": {<br />
"active": true,<br />
"available": true<br />
}<br />
}<br />
},<br />
"zoneName": "az2",<br />
"zoneState": {<br />
"available": true<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<br />
* Capture just the availability zone names:<br />
$ cat azones.json | jq '[.availabilityZoneInfo[] | .zoneName]'<br />
<pre><br />
[<br />
"az1",<br />
"az2"<br />
]<br />
</pre><br />
<br />
Or, for compact instead of pretty-printed output:<br />
 $ cat azones.json | jq -c '[.availabilityZoneInfo[] | .zoneName]'<br />
["az1","az2"]<br />
<br />
* Capture just the hostname (e.g., "<code>node-1.example.com</code>") key for availability zone "az1":<br />
$ cat azones.json | jq '[.availabilityZoneInfo[] | select(.zoneName == "az1") | {hosts: .hosts|keys}]'<br />
<pre><br />
[<br />
{<br />
"hosts": [<br />
"node-1.example.com",<br />
"node-2.example.com"<br />
]<br />
}<br />
]<br />
</pre><br />
<br />
Or, for a more script-friendly output:<br />
$ cat azones.json | jq -cM '[.availabilityZoneInfo[] | select(.zoneName == "az1") | {hosts: .hosts|keys}]' | sed -e 's/["}\[]//g;s/\]//g;s/{hosts://g;s/,/ /g'<br />
#~OR~<br />
 $ foo=($(cat azones.json | jq -cM '[.availabilityZoneInfo[] | select(.zoneName == "az1") | {hosts: .hosts|keys}]' | sed -e 's/["}\[]//g;s/\]//g;s/{hosts://g;s/,/ /g'))<br />
$ echo ${foo[0]} #=> node-1.example.com<br />
<br />
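As an aside, the <code>sed</code> post-processing above can usually be avoided by asking jq for raw output directly; a minimal sketch against the same <code>azones.json</code>:<br />
 $ jq -r '.availabilityZoneInfo[] | select(.zoneName == "az1") | .hosts | keys[]' azones.json<br />
 node-1.example.com<br />
 node-2.example.com<br />
<br />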
* Get just the raw values:<br />
<br />
$ echo '{ "packet_loss": [ {"ips": "10.0.0.10 10.0.0.11 10.0.0.12", "node-17": "3/3" }] }' | jq -r '[.packet_loss[] | .ips] | .[]'<br />
10.0.0.10 10.0.0.11 10.0.0.12<br />
<br />
===Practical example===<br />
<br />
Here is how to print out all the [[:Category:OpenStack|OpenStack]] compute nodes in my example environment:<br />
<br />
<pre><br />
#!/bin/bash<br />
# AUTHOR: Christoph Champ <christoph.champ@gmail.com><br />
# Requires jq 1.5+<br />
JQ=$(which jq)<br />
<br />
OS_AUTH_URL=http://1.2.3.4:5000/v2.0/<br />
OS_TENANT_NAME=admin<br />
OS_USERNAME=admin<br />
OS_PASSWORD=admin<br />
<br />
INFO=$(curl -sXPOST "${OS_AUTH_URL}/tokens" \<br />
-H "Content-Type: application/json" \<br />
-d "{\"auth\":{\"tenantName\":\"$OS_TENANT_NAME\",\"passwordCredentials\":\<br />
{\"username\":\"$OS_USERNAME\",\"password\":\"$OS_PASSWORD\"}}}" | \<br />
${JQ} -crM '[.access.token.id + "," + (.access.serviceCatalog[] | select(.name == "nova") | .endpoints[].publicURL)] | .[]')<br />
<br />
TOKEN=${INFO%%,*}<br />
NOVA_ENDPOINT=${INFO#*,}<br />
<br />
IGNORE_ZONES="internal|nova"<br />
<br />
raw=$(curl -s -H "X-Auth-Token: ${TOKEN}" "${NOVA_ENDPOINT}/os-availability-zone/detail" | \<br />
${JQ} -crM '[.availabilityZoneInfo[].zoneName] | .[]' | \<br />
grep -vE "(${IGNORE_ZONES})" | tr '\n' ',')<br />
<br />
IFS=',' read -r -a zones <<< "${raw%,}"<br />
<br />
for zone in "${zones[@]}"; do<br />
raw=($(curl -s -H "X-Auth-Token: ${TOKEN}" "${NOVA_ENDPOINT}/os-availability-zone/detail" | \<br />
${JQ} --arg zone "$zone" '[.availabilityZoneInfo[] | select(.zoneName==$zone) | .hosts|keys] | .[]' | \<br />
tr -d '[]",' | sed '/^$/d' | tr '\n' ',' | tr -d ' '))<br />
<br />
IFS=',' read -r -a nodes <<< "${raw%,}"<br />
for node in "${nodes[@]}"; do<br />
echo "node: $zone $node"<br />
done<br />
done<br />
</pre><br />
<br />
Running the above script produces the following output:<br />
<pre><br />
node: az1 node-1.example.com<br />
node: az1 node-2.example.com<br />
node: az2 node-3.example.com<br />
node: az2 node-4.example.com<br />
</pre><br />
<br />
===Append to JSON===<br />
<br />
* Example of how to append key/values to an already existing JSON structure:<br />
<pre><br />
$ cat foo.json <br />
{ "name": "bob", "age": 30 }<br />
<br />
$ cat foo.json | jq 'to_entries'<br />
[<br />
{<br />
"key": "name",<br />
"value": "bob"<br />
},<br />
{<br />
"key": "age",<br />
"value": 30<br />
}<br />
]<br />
<br />
$ cat foo.json | BEARERTOKEN="Bearer abc123" jq 'to_entries | . + [{"key":"routes","value":[{"path":"api/v1","url":"http://example.com","headers":[{"name":"Authorization","content":env.BEARERTOKEN}]}]}] | from_entries'<br />
{<br />
"name": "bob",<br />
"age": 30,<br />
"routes": [<br />
{<br />
"path": "api/v1",<br />
"url": "http://example.com",<br />
"headers": [<br />
{<br />
"name": "Authorization",<br />
"content": "Bearer abc123"<br />
}<br />
]<br />
}<br />
]<br />
}<br />
</pre><br />
<br />
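For simple top-level additions, the <code>to_entries</code>/<code>from_entries</code> round trip is not strictly necessary; plain object addition works too (a minimal sketch against the same <code>foo.json</code>):<br />
<pre><br />
$ jq '. + {"city": "Seattle"}' foo.json<br />
{<br />
  "name": "bob",<br />
  "age": 30,<br />
  "city": "Seattle"<br />
}<br />
</pre><br />
<br />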
; Update a specific nested value in a JSON file<br />
<pre><br />
$ export NEW_URL="https://172.x.x.x:6443"<br />
$ jq --arg new_url "${NEW_URL}" '(.resources[] | select(.type == "rke_cluster") | .instances[].attributes.api_server_url) |= $new_url' foo.json<br />
{<br />
"resources": [<br />
{<br />
"module": "module.rancher",<br />
"type": "rke_cluster",<br />
"instances": [<br />
{<br />
"attributes": {<br />
"api_server_url": "https://172.x.x.x:6443",<br />
"foo": "bar"<br />
}<br />
}<br />
]<br />
}<br />
]<br />
}<br />
</pre><br />
<br />
==Miscellaneous==<br />
<br />
* Get the randomly generated [[Rancher]] admin password, as created by the <code>rancher2</code> [[Terraform]] provider:<br />
$ jq -crM '.resources[] | select(.provider == "module.rancher.provider.rancher2.bootstrap") | {instances: .instances[]|.attributes.current_password} | .[]' terraform.tfstate<br />
<br />
* <code>kubectl-neat</code>: Easily copy a [[Kubernetes]] certificate secret to another namespace:<br />
<pre><br />
$ SOURCE_NAMESPACE=<update-me><br />
$ DESTINATION_NAMESPACE=<update-me><br />
$ kubectl -n ${SOURCE_NAMESPACE} get secret kafka-client-credentials -o json |\<br />
kubectl neat |\<br />
jq 'del(.metadata["namespace"])' |\<br />
kubectl apply -n ${DESTINATION_NAMESPACE} -f -<br />
</pre><br />
<br />
==See also==<br />
* [https://github.com/mikefarah/yq yq] &mdash; a portable command-line YAML, JSON, XML, CSV and properties processor.<br />
* [https://github.com/simeji/jid jid] &mdash; a JSON incremental digger.<br />
* [https://github.com/jrockway/kubectl-jq kubectl-jq] &mdash; kubectl plugin that works like <code>kubectl get</code> but runs everything through a JQ program you provide<br />
<br />
==External links==<br />
*[https://stedolan.github.io/jq/ Official website]<br />
*[https://starkandwayne.com/blog/bash-for-loop-over-json-array-using-jq/ Bash For Loop Over JSON Array Using Jq]<br />
*[https://jsonnet.org/ Jsonnet] &mdash; an extension of JSON<br />
<br />
[[Category:Linux Command Line Tools]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Jq&diff=8260Jq2023-01-25T17:30:32Z<p>Christoph: /* External links */</p>
<hr />
<div>'''jq''' is a lightweight and flexible command-line JSON processor. jq is like [[sed]] for JSON data - you can use it to slice and filter and map and transform structured data with the same ease that sed, [[awk]], grep, and friends let you play with text.<br />
<br />
==Example usage==<br />
<br />
$ cat azones.json<br />
<pre><br />
{<br />
"availabilityZoneInfo": [<br />
{<br />
"hosts": {<br />
"node-1.example.com": {<br />
"nova-compute": {<br />
"active": true,<br />
"available": true<br />
}<br />
},<br />
"node-2.example.com": {<br />
"nova-compute": {<br />
"active": true,<br />
"available": true<br />
}<br />
}<br />
},<br />
"zoneName": "az1",<br />
"zoneState": {<br />
"available": true<br />
}<br />
},<br />
{<br />
"hosts": {<br />
"node-3.example.com": {<br />
"nova-compute": {<br />
"active": true,<br />
"available": true<br />
}<br />
},<br />
"node-4.example.com": {<br />
"nova-compute": {<br />
"active": true,<br />
"available": true<br />
}<br />
}<br />
},<br />
"zoneName": "az2",<br />
"zoneState": {<br />
"available": true<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<br />
* Capture just the availability zone names:<br />
$ cat azones.json | jq '[.availabilityZoneInfo[] | .zoneName]'<br />
<pre><br />
[<br />
"az1",<br />
"az2"<br />
]<br />
</pre><br />
<br />
Or, for compact instead of pretty-printed output:<br />
 $ cat azones.json | jq -c '[.availabilityZoneInfo[] | .zoneName]'<br />
["az1","az2"]<br />
<br />
* Capture just the hostname (e.g., "<code>node-1.example.com</code>") key for availability zone "az1":<br />
$ cat azones.json | jq '[.availabilityZoneInfo[] | select(.zoneName == "az1") | {hosts: .hosts|keys}]'<br />
<pre><br />
[<br />
{<br />
"hosts": [<br />
"node-1.example.com",<br />
"node-2.example.com"<br />
]<br />
}<br />
]<br />
</pre><br />
<br />
Or, for a more script-friendly output:<br />
$ cat azones.json | jq -cM '[.availabilityZoneInfo[] | select(.zoneName == "az1") | {hosts: .hosts|keys}]' | sed -e 's/["}\[]//g;s/\]//g;s/{hosts://g;s/,/ /g'<br />
#~OR~<br />
 $ foo=($(cat azones.json | jq -cM '[.availabilityZoneInfo[] | select(.zoneName == "az1") | {hosts: .hosts|keys}]' | sed -e 's/["}\[]//g;s/\]//g;s/{hosts://g;s/,/ /g'))<br />
$ echo ${foo[0]} #=> node-1.example.com<br />
<br />
* Get just the raw values:<br />
<br />
$ echo '{ "packet_loss": [ {"ips": "10.0.0.10 10.0.0.11 10.0.0.12", "node-17": "3/3" }] }' | jq -r '[.packet_loss[] | .ips] | .[]'<br />
10.0.0.10 10.0.0.11 10.0.0.12<br />
<br />
===Practical example===<br />
<br />
Here is how to print out all the [[:Category:OpenStack|OpenStack]] compute nodes in my example environment:<br />
<br />
<pre><br />
#!/bin/bash<br />
# AUTHOR: Christoph Champ <christoph.champ@gmail.com><br />
# Requires jq 1.5+<br />
JQ=$(which jq)<br />
<br />
OS_AUTH_URL=http://1.2.3.4:5000/v2.0/<br />
OS_TENANT_NAME=admin<br />
OS_USERNAME=admin<br />
OS_PASSWORD=admin<br />
<br />
INFO=$(curl -sXPOST "${OS_AUTH_URL}/tokens" \<br />
-H "Content-Type: application/json" \<br />
-d "{\"auth\":{\"tenantName\":\"$OS_TENANT_NAME\",\"passwordCredentials\":\<br />
{\"username\":\"$OS_USERNAME\",\"password\":\"$OS_PASSWORD\"}}}" | \<br />
${JQ} -crM '[.access.token.id + "," + (.access.serviceCatalog[] | select(.name == "nova") | .endpoints[].publicURL)] | .[]')<br />
<br />
TOKEN=${INFO%%,*}<br />
NOVA_ENDPOINT=${INFO#*,}<br />
<br />
IGNORE_ZONES="internal|nova"<br />
<br />
raw=$(curl -s -H "X-Auth-Token: ${TOKEN}" "${NOVA_ENDPOINT}/os-availability-zone/detail" | \<br />
${JQ} -crM '[.availabilityZoneInfo[].zoneName] | .[]' | \<br />
grep -vE "(${IGNORE_ZONES})" | tr '\n' ',')<br />
<br />
IFS=',' read -r -a zones <<< "${raw%,}"<br />
<br />
for zone in "${zones[@]}"; do<br />
raw=($(curl -s -H "X-Auth-Token: ${TOKEN}" "${NOVA_ENDPOINT}/os-availability-zone/detail" | \<br />
${JQ} --arg zone "$zone" '[.availabilityZoneInfo[] | select(.zoneName==$zone) | .hosts|keys] | .[]' | \<br />
tr -d '[]",' | sed '/^$/d' | tr '\n' ',' | tr -d ' '))<br />
<br />
IFS=',' read -r -a nodes <<< "${raw%,}"<br />
for node in "${nodes[@]}"; do<br />
echo "node: $zone $node"<br />
done<br />
done<br />
</pre><br />
<br />
Running the above script produces the following output:<br />
<pre><br />
node: az1 node-1.example.com<br />
node: az1 node-2.example.com<br />
node: az2 node-3.example.com<br />
node: az2 node-4.example.com<br />
</pre><br />
<br />
===Append to JSON===<br />
<br />
* Example of how to append key/values to an already existing JSON structure:<br />
<pre><br />
$ cat foo.json <br />
{ "name": "bob", "age": 30 }<br />
<br />
$ cat foo.json | jq 'to_entries'<br />
[<br />
{<br />
"key": "name",<br />
"value": "bob"<br />
},<br />
{<br />
"key": "age",<br />
"value": 30<br />
}<br />
]<br />
<br />
$ cat foo.json | BEARERTOKEN="Bearer abc123" jq 'to_entries | . + [{"key":"routes","value":[{"path":"api/v1","url":"http://example.com","headers":[{"name":"Authorization","content":env.BEARERTOKEN}]}]}] | from_entries'<br />
{<br />
"name": "bob",<br />
"age": 30,<br />
"routes": [<br />
{<br />
"path": "api/v1",<br />
"url": "http://example.com",<br />
"headers": [<br />
{<br />
"name": "Authorization",<br />
"content": "Bearer abc123"<br />
}<br />
]<br />
}<br />
]<br />
}<br />
</pre><br />
<br />
; Update a specific nested value in a JSON file<br />
<pre><br />
$ export NEW_URL="https://172.x.x.x:6443"<br />
$ jq --arg new_url "${NEW_URL}" '(.resources[] | select(.type == "rke_cluster") | .instances[].attributes.api_server_url) |= $new_url' foo.json<br />
{<br />
"resources": [<br />
{<br />
"module": "module.rancher",<br />
"type": "rke_cluster",<br />
"instances": [<br />
{<br />
"attributes": {<br />
"api_server_url": "https://172.x.x.x:6443",<br />
"foo": "bar"<br />
}<br />
}<br />
]<br />
}<br />
]<br />
}<br />
</pre><br />
<br />
==Miscellaneous==<br />
<br />
* Get the randomly generated [[Rancher]] admin password, as created by the <code>rancher2</code> [[Terraform]] provider:<br />
$ jq -crM '.resources[] | select(.provider == "module.rancher.provider.rancher2.bootstrap") | {instances: .instances[]|.attributes.current_password} | .[]' terraform.tfstate<br />
<br />
* <code>kubectl-neat</code>: Easily copy a [[Kubernetes]] certificate secret to another namespace:<br />
<pre><br />
$ SOURCE_NAMESPACE=<update-me><br />
$ DESTINATION_NAMESPACE=<update-me><br />
$ kubectl -n ${SOURCE_NAMESPACE} get secret kafka-client-credentials -o json |\<br />
kubectl neat |\<br />
jq 'del(.metadata["namespace"])' |\<br />
kubectl apply -n ${DESTINATION_NAMESPACE} -f -<br />
</pre><br />
<br />
==See also==<br />
* [https://github.com/mikefarah/yq yq] &mdash; a portable command-line YAML, JSON, XML, CSV and properties processor.<br />
* [https://github.com/simeji/jid jid] &mdash; a JSON incremental digger.<br />
<br />
==External links==<br />
*[https://stedolan.github.io/jq/ Official website]<br />
*[https://starkandwayne.com/blog/bash-for-loop-over-json-array-using-jq/ Bash For Loop Over JSON Array Using Jq]<br />
*[https://jsonnet.org/ Jsonnet] &mdash; an extension of JSON<br />
<br />
[[Category:Linux Command Line Tools]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Kubernetes&diff=8259Kubernetes2023-01-25T17:17:08Z<p>Christoph: /* Miscellaneous examples */</p>
<hr />
<div>'''Kubernetes''' (also known by its numeronym '''k8s''') is an open source container cluster manager. Kubernetes' primary goal is to provide a platform for automating deployment, scaling, and operations of application containers across a cluster of hosts. Kubernetes was released by Google in July 2015.<br />
<br />
* Get the latest stable release of k8s with:<br />
$ curl -sSL <nowiki>https://dl.k8s.io/release/stable.txt</nowiki><br />
<br />
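For example, that version string can be fed straight back into the download URL to fetch a matching <code>kubectl</code> binary (a sketch for Linux on amd64; adjust the OS/arch path segments as needed):<br />
<pre><br />
$ VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt)<br />
$ curl -sLO "https://dl.k8s.io/release/${VERSION}/bin/linux/amd64/kubectl"<br />
$ chmod +x kubectl && sudo mv kubectl /usr/local/bin/<br />
$ kubectl version --client<br />
</pre><br />
<br />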
==Release history==<br />
<br />
NOTE: There is no such thing as Kubernetes Long-Term-Support (LTS). There is a new "minor" release ''roughly'' every 3 months (note: changed to ''roughly'' every 4 months in 2020).<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="3" bgcolor="#EFEFEF" | '''Kubernetes release history'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Release<br />
!Date<br />
!Cadence (days)<br />
|- align="left"<br />
|1.0 || 2015-07-10 ||align="right"|<br />
|--bgcolor="#eeeeee"<br />
|1.1 || 2015-11-09 ||align="right"| 122<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.2.md 1.2] || 2016-03-16 ||align="right"| 128<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.3.md 1.3] || 2016-07-01 ||align="right"| 107<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.4.md 1.4] || 2016-09-26 ||align="right"| 87<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.5.md 1.5] || 2016-12-12 ||align="right"| 77<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.6.md 1.6] || 2017-03-28 ||align="right"| 106<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.7.md 1.7] || 2017-06-30 ||align="right"| 94<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.8.md 1.8] || 2017-09-28 ||align="right"| 90<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.9.md 1.9] || 2017-12-15 ||align="right"| 78<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.10.md 1.10] || 2018-03-26 ||align="right"| 101<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.11.md 1.11] || 2018-06-27 ||align="right"| 93<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.12.md 1.12] || 2018-09-27 ||align="right"| 92<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.13.md 1.13] || 2018-12-03 ||align="right"| 67<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.14.md 1.14] || 2019-03-25 ||align="right"| 112<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md 1.15] || 2019-06-17 ||align="right"| 84<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md 1.16] || 2019-09-18 ||align="right"| 93<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md 1.17] || 2019-12-09 ||align="right"| 82<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md 1.18] || 2020-03-25 ||align="right"| 107<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md 1.19] || 2020-08-26 ||align="right"| 154<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md 1.20] || 2020-12-08 ||align="right"| 104<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md 1.21] || 2021-04-08 ||align="right"| 121<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md 1.22] || 2021-08-04 ||align="right"| 118<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md 1.23] || 2021-12-07 ||align="right"| 125<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md 1.24] || 2022-05-03 ||align="right"| 147<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md 1.25] || 2022-08-23 ||align="right"| 112<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md 1.26] || 2023-01-18 ||align="right"| 148<br />
|}<br />
</div><br />
<br clear="all"/><br />
See: [https://gravitational.com/blog/kubernetes-release-cycle The full-time job of keeping up with Kubernetes]<br />
<br />
==Providers and installers==<br />
<br />
* Vanilla Kubernetes<br />
* AWS:<br />
** Managed: EKS<br />
** Kops<br />
** Kube-AWS<br />
** Kismatic<br />
** Kubicorn<br />
** Stack Point Cloud<br />
* Google:<br />
** Managed: GKE<br />
** [[Kubernetes/the-hard-way|Kubernetes the Hard Way]]<br />
** Stack Point Cloud<br />
** Typhoon<br />
* Azure AKS<br />
* Ubuntu UKS<br />
* VMware PKS<br />
* [[Rancher|Rancher RKE]]<br />
* CoreOS Tectonic<br />
<br />
==Design overview==<br />
Kubernetes is built through the definition of a set of components (building blocks or "primitives") which, when used collectively, provide a method for the deployment, maintenance, and scalability of container-based application clusters.<br />
<br />
These "primitives" are designed to be ''loosely coupled'' (i.e., where little to no knowledge of the other component definitions is needed to use) as well as easily extensible through an API. Both the internal components of Kubernetes as well as the extensions and containers make use of this API.<br />
<br />
==Components==<br />
The building blocks of Kubernetes are the following (note that these are also referred to as Kubernetes "Objects" or "API Primitives"):<br />
<br />
;Cluster : A cluster is a set of machines (physical or virtual) on which your applications are managed and run. All machines are managed as a cluster (or set of clusters, depending on the topology used).<br />
;Nodes (minions) : You can think of these as "container clients". These are the individual hosts (physical or virtual) that Docker is installed on and hosts the various containers within your managed cluster.<br />
: Each node will run etcd (a distributed key-value store, used by Kubernetes for exchanging messages and reporting on cluster status) as well as the Kubernetes Proxy.<br />
;Pods : A pod consists of one or more containers. Those containers are guaranteed (by the cluster controller) to be located on the same host machine (aka "co-located") in order to facilitate sharing of resources. For example, it makes sense to have database processes and data containers as close as possible. In fact, they really should be in the same pod.<br />
: Pods "work together", as in a multi-tiered application configuration. Each set of pods that define and implement a service (e.g., MySQL or Apache) are defined by the label selector (see below).<br />
: Pods are assigned unique IPs within each cluster. These allow an application to use ports without having to worry about conflicting port utilization.<br />
: Pods can contain definitions of disk volumes or shares, and then provide access from those to all the members (containers) within the pod.<br />
: Finally, pod management is done through the API or delegated to a controller.<br />
;Labels : Clients can attach key-value pairs to any object in the system (e.g., Pods or Nodes). These become the labels that identify them in the configuration and management of them. The key-value pairs can be used to filter, organize, and perform mass operations on a set of resources.<br />
;Selectors : Label Selectors represent queries that are made against those labels. They resolve to the corresponding matching objects. A Selector expression matches labels to filter certain resources. For example, you may want to search for all pods that belong to a certain service, or find all containers that have a specific tier Label value as "database". Labels and Selectors are inherently two sides of the same coin. You can use Labels to classify resources and use Selectors to find them and use them for certain actions.<br />
: These two items are the primary way that grouping is done in Kubernetes and determine which components that a given operation applies to when indicated.<br />
;Controllers : These are used in the management of your cluster. Controllers are the mechanism by which your desired configuration state is enforced.<br />
: Controllers manage a set of pods and, depending on the desired configuration state, may engage other controllers to handle replication and scaling (Replication Controller) of X number of containers and pods across the cluster. It is also responsible for replacing any container in a pod that fails (based on the desired state of the cluster).<br />
: Replication Controllers (RC) are a subset of Controllers and are an abstraction used to manage pod lifecycles. One of the key uses of RCs is to maintain a certain number of running Pods (e.g., for scaling or ensuring that at least one Pod is running at all times, etc.). It is considered a "best practice" to use RCs to define Pod lifecycles, rather than creating Pods directly.<br />
: Other controllers that can be engaged include a ''DaemonSet Controller'' (enforces a 1-to-1 ratio of pods to Worker Nodes) and a ''Job Controller'' (that runs pods to "completion", such as in batch jobs).<br />
: Each set of pods any controller manages, is determined by the label selectors that are part of its definition.<br />
;Replica Sets: These define how many replicas of each Pod will be running. They also monitor and ensure the required number of Pods are running, replacing Pods that die. Replica Sets can act as replacements for Replication Controllers.<br />
;Services : A Service is an abstraction on top of Pods, which provides a single IP address and DNS name by which the Pods can be accessed. This load balancing configuration is much easier to manage and helps scale Pods seamlessly.<br />
: Kubernetes can then provide service discovery and handle routing with the static IP for each pod as well as load balancing (round-robin based) connections to that service among the pods that match the label selector indicated.<br />
: Although a Service is, by default, only exposed inside a cluster, it can also be exposed outside the cluster, as needed.<br />
;Volumes : A Volume is a directory with data, which is accessible to a container. The Volume co-terminates with the Pod that encloses it.<br />
;Name : A name by which a resource is identified.<br />
;Namespace : A Namespace provides additional qualification to a resource name. This is especially helpful when multiple teams/projects are using the same cluster and there is a potential for name collision. You can think of a Namespace as a virtual wall between multiple clusters.<br />
;Annotations : An Annotation is a Label, but with much larger data capacity. Typically, this data is not readable by humans and is not easy to filter through. Annotation is useful only for storing data that may not be searched, but is required by the resource (e.g., storing strong keys, etc.).<br />
;Control Plane : The collection of components (API Server, Scheduler, Controller Manager, etcd) that make global decisions about the cluster and drive it toward the desired state.<br />
;API : The REST API exposed by the API Server; both internal components and external clients manipulate cluster state through it.<br />
<br />
===Pods===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/ Pod]'' is the smallest and simplest Kubernetes object. It is the unit of deployment in Kubernetes, which represents a single instance of the application. A Pod is a logical collection of one or more containers, which:<br />
<br />
* are scheduled together on the same host;<br />
* share the same network namespace; and<br />
* mount the same external storage (Volumes).<br />
<br />
Pods are ephemeral in nature, and they do not have the capability to self-heal by themselves. That is why we use them with controllers, which can handle a Pod's replication, fault tolerance, self-healing, etc. Examples of controllers are ''Deployments'', ''ReplicaSets'', ''ReplicationControllers'', etc. We attach the Pod's specification to other objects using Pod Templates (see below).<br />
<br />
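As a quick illustration, a single-container Pod can be declared and created straight from the command line (a minimal sketch; the Pod name, label, and image below are arbitrary choices, not anything this article prescribes):<br />
<pre><br />
$ kubectl apply -f - <<EOF<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx-pod<br />
  labels:<br />
    env: dev<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx:1.7.9<br />
EOF<br />
$ kubectl get pods -o wide<br />
</pre><br />
<br />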
===Labels===<br />
Labels are key-value pairs that can be attached to any Kubernetes object (e.g. ''Pods''). Labels are used to organize and select a subset of objects, based on the requirements in place. Many objects can have the same label(s). Labels do not provide uniqueness to objects. <br />
<br />
===Label Selectors===<br />
With Label Selectors, we can select a subset of objects. Kubernetes supports two types of Selectors:<br />
<br />
;Equality-Based Selectors : Equality-Based Selectors allow filtering of objects based on label keys and values. With this type of Selector, we can use the <code>=</code>, <code>==</code>, or <code>!=</code> operators. For example, with <code>env==dev</code>, we are selecting the objects where the "<code>env</code>" label is set to "<code>dev</code>".<br />
;Set-Based Selectors : Set-Based Selectors allow filtering of objects based on a set of values. With this type of Selector, we can use the <code>in</code>, <code>notin</code>, and <code>exists</code> operators. For example, with <code>env in (dev,qa)</code>, we are selecting objects where the "<code>env</code>" label is set to "<code>dev</code>" or "<code>qa</code>".<br />
<br />
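Both Selector types map directly onto kubectl's <code>-l</code>/<code>--selector</code> flag. A short sketch (assuming Pods carry an "<code>env</code>" label, as in the Pod example above):<br />
<pre><br />
$ kubectl get pods -l env==dev            # equality-based<br />
$ kubectl get pods -l 'env!=prod'         # equality-based (negation)<br />
$ kubectl get pods -l 'env in (dev,qa)'   # set-based<br />
</pre><br />
<br />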
===Replication Controllers===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/ ReplicationController]'' (rc) is a controller that is part of the Master Node's Controller Manager. It makes sure the specified number of replicas for a Pod is running at any given point in time. If there are more Pods than the desired count, the ReplicationController would kill the extra Pods, and, if there are fewer Pods, then the ReplicationController would create more Pods to match the desired count. Generally, we do not deploy a Pod independently, as it would not be able to re-start itself if something goes wrong. We always use controllers like ReplicationController to create and manage Pods.<br />
<br />
===Replica Sets===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/ ReplicaSet]'' (rs) is the next-generation ReplicationController. ReplicaSets support both equality- and set-based Selectors, whereas ReplicationControllers only support equality-based Selectors. As of January 2018, this is the only difference.<br />
<br />
As an example, say you create a ReplicaSet with a desired count of 3 replicas (so that "<code>current==desired</code>"). Any time "<code>current!=desired</code>" (e.g., one of the Pods dies), the ReplicaSet will detect that the current state no longer matches the desired state. So, in our given scenario, the ReplicaSet will create one more Pod, thus ensuring that the current state matches the desired state.<br />
<br />
ReplicaSets can be used independently, but they are mostly used by Deployments to orchestrate the Pod creation, deletion, and updates. A Deployment automatically creates the ReplicaSets, and we do not have to worry about managing them.<br />
<br />
===Deployments===<br />
''[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Deployment]'' objects provide declarative updates to Pods and ReplicaSets. The DeploymentController is part of the Master Node's Controller Manager, and it makes sure that the current state always matches the desired state.<br />
<br />
As an example, let's say we have a Deployment which creates a "ReplicaSet A". ReplicaSet A then creates 3 Pods. In each Pod, one of the containers uses the <code>nginx:1.7.9</code> image.<br />
<br />
Now, in the Deployment, we change the Pod's template and we update the image for the Nginx container from <code>nginx:1.7.9</code> to <code>nginx:1.9.1</code>. As we have modified the Pod's template, a new "ReplicaSet B" gets created. This process is referred to as a "Deployment rollout". (A rollout is only triggered when we update the Pod's template for a Deployment. Operations like scaling the Deployment do not trigger a rollout.) Once ReplicaSet B is ready, the Deployment starts pointing to it.<br />
<br />
On top of ReplicaSets, Deployments provide features like Deployment recording, with which, if something goes wrong, we can roll back to a previously known state.<br />
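<br />
For example (a sketch; the Deployment and container names are illustrative), a rollout triggered by an image change can be watched and, if needed, rolled back:<br />
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1 # triggers a rollout<br />
$ kubectl rollout status deployment/nginx-deployment<br />
$ kubectl rollout history deployment/nginx-deployment<br />
$ kubectl rollout undo deployment/nginx-deployment # roll back to the previous revision<br />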
<br />
===Namespaces===<br />
If we have numerous users whom we would like to organize into teams/projects, we can partition the Kubernetes cluster into sub-clusters using ''[https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ Namespaces]''. The names of the resources/objects created inside a Namespace are unique, but not across Namespaces.<br />
<br />
To list all the Namespaces, we can run the following command:<br />
$ kubectl get namespaces<br />
NAME          STATUS    AGE<br />
default       Active    2h<br />
kube-public   Active    2h<br />
kube-system   Active    2h<br />
<br />
Generally, Kubernetes creates two default namespaces: <code>kube-system</code> and <code>default</code>. The <code>kube-system</code> namespace contains the objects created by the Kubernetes system. The <code>default</code> namespace contains the objects which do not belong to any other Namespace. By default, we connect to the <code>default</code> Namespace. <code>kube-public</code> is a special namespace, which is readable by all users and used for special purposes, like bootstrapping a cluster.<br />
<br />
Using ''[https://kubernetes.io/docs/concepts/policy/resource-quotas/ Resource Quotas]'', we can divide the cluster resources within Namespaces.<br />
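<br />
For example (a sketch; the Namespace name and quota values are illustrative), a Namespace with a ResourceQuota could be created like so:<br />
<pre><br />
$ kubectl create namespace team-a<br />
$ kubectl create -f - <<EOF<br />
apiVersion: v1<br />
kind: ResourceQuota<br />
metadata:<br />
  name: team-a-quota<br />
  namespace: team-a<br />
spec:<br />
  hard:<br />
    pods: "10"<br />
    requests.cpu: "4"<br />
    requests.memory: 8Gi<br />
EOF<br />
</pre><br />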
<br />
===Component services===<br />
The component services running on a standard master/worker node(s) Kubernetes setup are as follows:<br />
* Kubernetes Master node(s)<br />
*; kube-apiserver : Exposes Kubernetes APIs<br />
*; kube-controller-manager : Runs controllers to handle nodes, endpoints, etc.<br />
*; kube-scheduler : Watches for new pods and assigns them nodes<br />
*; etcd : Distributed key-value store<br />
*; DNS : [optional] DNS for Kubernetes services<br />
* Worker node(s)<br />
*; kubelet : Manages pods on a node, volumes, secrets, creating new containers, health checks, etc.<br />
*; kube-proxy : Maintains network rules, port forwarding, etc.<br />
<br />
==Setup a Kubernetes cluster==<br />
<br />
<div style="margin: 10px; padding: 5px; border: 2px solid red;">'''IMPORTANT''': The following is how to set up Kubernetes 1.2, which is, as of January 2018, a very old version. I will update this article with how to set up k8s using a much newer version (v1.9) when I have time.<br />
</div><br />
<br />
In this section, I will show you how to set up a Kubernetes cluster with etcd and Docker. The cluster will consist of 1 master node and 3 worker nodes.<br />
<br />
===Setup VMs===<br />
<br />
For this demo, I will be creating 4 VMs via [[Vagrant]] (with VirtualBox).<br />
<br />
* Create Vagrant demo environment:<br />
$ mkdir -p $HOME/dev/kubernetes && cd $_<br />
<br />
* Create Vagrantfile with the following contents:<br />
<pre><br />
# -*- mode: ruby -*-<br />
# vi: set ft=ruby :<br />
<br />
require 'yaml'<br />
VAGRANTFILE_API_VERSION = "2"<br />
<br />
$common_script = <<COMMON_SCRIPT<br />
# Set verbose<br />
set -v<br />
# Set exit on error<br />
set -e<br />
echo -e "$(date) [INFO] Starting modified Vagrant..."<br />
sudo yum update -y<br />
# Timestamp provision<br />
date > /etc/vagrant_provisioned_at<br />
COMMON_SCRIPT<br />
<br />
unless defined? CONFIG<br />
  configuration_file = File.join(File.dirname(__FILE__), 'vagrant_config.yml')<br />
  CONFIG = YAML.load(File.open(configuration_file, File::RDONLY).read)<br />
end<br />
<br />
CONFIG['box'] = {} unless CONFIG.key?('box')<br />
<br />
def modifyvm_network(node)<br />
  node.vm.provider "virtualbox" do |vbox|<br />
    vbox.customize ["modifyvm", :id, "--nicpromisc1", "allow-all"]<br />
    #vbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]<br />
    vbox.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]<br />
  end<br />
end<br />
<br />
def modifyvm_resources(node, memory, cpus)<br />
  node.vm.provider "virtualbox" do |vbox|<br />
    vbox.customize ["modifyvm", :id, "--memory", memory]<br />
    vbox.customize ["modifyvm", :id, "--cpus", cpus]<br />
  end<br />
end<br />
<br />
## START: Actual Vagrant process<br />
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|<br />
<br />
  config.vm.box = CONFIG['box']['name']<br />
<br />
  # Uncomment the following line if you wish to be able to pass files from<br />
  # your local filesystem directly into the vagrant VM:<br />
  #config.vm.synced_folder "data", "/vagrant"<br />
<br />
  ## VM: k8s master #############################################################<br />
  config.vm.define "master" do |node|<br />
    node.vm.hostname = "k8s.master.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    #node.vm.network "forwarded_port", guest: 80, host: 8080<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['master']<br />
<br />
    # Uncomment the following if you wish to define CPU/memory:<br />
    #node.vm.provider "virtualbox" do |vbox|<br />
    #  vbox.customize ["modifyvm", :id, "--memory", "4096"]<br />
    #  vbox.customize ["modifyvm", :id, "--cpus", "2"]<br />
    #end<br />
    #modifyvm_resources(node, "4096", "2")<br />
  end<br />
  ## VM: k8s minion1 ############################################################<br />
  config.vm.define "minion1" do |node|<br />
    node.vm.hostname = "k8s.minion1.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion1']<br />
  end<br />
  ## VM: k8s minion2 ############################################################<br />
  config.vm.define "minion2" do |node|<br />
    node.vm.hostname = "k8s.minion2.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion2']<br />
  end<br />
  ## VM: k8s minion3 ############################################################<br />
  config.vm.define "minion3" do |node|<br />
    node.vm.hostname = "k8s.minion3.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion3']<br />
  end<br />
  ###############################################################################<br />
<br />
end<br />
</pre><br />
<br />
The above Vagrantfile uses the following configuration file:<br />
$ cat vagrant_config.yml<br />
<pre><br />
---<br />
box:<br />
  name: centos/7<br />
  storage_controller: 'SATA Controller'<br />
debug: false<br />
development: false<br />
network:<br />
  dns1: 8.8.8.8<br />
  dns2: 8.8.4.4<br />
  internal:<br />
    network: 192.168.200.0/24<br />
  external:<br />
    start: 192.168.100.100<br />
    end: 192.168.100.200<br />
    network: 192.168.100.0/24<br />
    bridge: wlan0<br />
    netmask: 255.255.255.0<br />
    broadcast: 192.168.100.255<br />
host_groups:<br />
  master: 192.168.200.100<br />
  minion1: 192.168.200.101<br />
  minion2: 192.168.200.102<br />
  minion3: 192.168.200.103<br />
</pre><br />
<br />
* In the Vagrant Kubernetes directory (i.e., <code>$HOME/dev/kubernetes</code>), run the following command:<br />
$ vagrant up<br />
<br />
===Setup hosts===<br />
''Note: Run the following commands/steps on all hosts (master and minions).''<br />
<br />
* Log into the k8s master host:<br />
$ vagrant ssh master<br />
<br />
* Add the Kubernetes cluster hosts to <code>/etc/hosts</code>:<br />
$ cat << EOF >> /etc/hosts<br />
192.168.200.100 k8s.master.dev<br />
192.168.200.101 k8s.minion1.dev<br />
192.168.200.102 k8s.minion2.dev<br />
192.168.200.103 k8s.minion3.dev<br />
EOF<br />
<br />
* Install, enable, and start NTP:<br />
$ yum install -y ntp<br />
$ systemctl enable ntpd && systemctl start ntpd<br />
$ timedatectl<br />
<br />
* Disable any [[iptables|firewall rules]] (for now; we will add the rules back later):<br />
$ systemctl stop firewalld && systemctl disable firewalld<br />
$ systemctl stop iptables<br />
<br />
* Disable [[SELinux]] (for now; we will turn it on again later):<br />
$ setenforce 0<br />
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/sysconfig/selinux<br />
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config<br />
$ sestatus<br />
<br />
* Add the Docker repo and update yum:<br />
$ cat << EOF > /etc/yum.repos.d/virt7-docker-common-release.repo<br />
[virt7-docker-common-release]<br />
name=virt7-docker-common-release<br />
baseurl=<nowiki>http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/</nowiki><br />
gpgcheck=0<br />
EOF<br />
$ yum update<br />
<br />
* Install Docker, Kubernetes, and etcd:<br />
$ yum install -y --enablerepo=virt7-docker-common-release kubernetes docker etcd<br />
<br />
===Install and configure master controller===<br />
''Note: Run the following commands on only the master host.''<br />
<br />
* Edit <code>/etc/kubernetes/config</code> and add (or make changes to) the following lines:<br />
KUBE_MASTER="--master=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://k8s.master.dev:2379</nowiki>"<br />
<br />
* Edit <code>/etc/etcd/etcd.conf</code> and add (or make changes to) the following lines:<br />
[member]<br />
ETCD_LISTEN_CLIENT_URLS="<nowiki>http://0.0.0.0:2379</nowiki>"<br />
[cluster]<br />
ETCD_ADVERTISE_CLIENT_URLS="<nowiki>http://0.0.0.0:2379</nowiki>"<br />
<br />
* Edit <code>/etc/kubernetes/apiserver</code> and add (or make changes to) the following lines:<br />
<pre><br />
# The address on the local server to listen to.<br />
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"<br />
KUBE_API_ADDRESS="--address=0.0.0.0"<br />
<br />
# The port on the local server to listen on.<br />
KUBE_API_PORT="--port=8080"<br />
<br />
# Port minions listen on<br />
KUBELET_PORT="--kubelet-port=10250"<br />
<br />
# Comma separated list of nodes in the etcd cluster<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://127.0.0.1:2379</nowiki>"<br />
<br />
# Address range to use for services<br />
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"<br />
<br />
# default admission control policies<br />
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"<br />
<br />
# Add your own!<br />
KUBE_API_ARGS=""<br />
</pre><br />
<br />
* Enable and start the following etcd and Kubernetes services:<br />
<br />
$ for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do<br />
systemctl restart $SERVICE<br />
systemctl enable $SERVICE<br />
systemctl status $SERVICE <br />
done<br />
<br />
* Check on the status of the above services (the following command should report 4 running services):<br />
$ systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler | grep "(running)" | wc -l # => 4<br />
<br />
* Check on the status of the Kubernetes API server:<br />
$ kubectl cluster-info<br />
Kubernetes master is running at <nowiki>http://localhost:8080</nowiki><br />
$ curl <nowiki>http://localhost:8080/version</nowiki><br />
#~OR~<br />
$ curl <nowiki>http://k8s.master.dev:8080/version</nowiki><br />
<pre><br />
{<br />
  "major": "1",<br />
  "minor": "2",<br />
  "gitVersion": "v1.2.0",<br />
  "gitCommit": "ec7364b6e3b155e78086018aa644057edbe196e5",<br />
  "gitTreeState": "clean"<br />
}<br />
</pre><br />
<br />
* Get a list of Kubernetes API paths:<br />
$ curl <nowiki>http://k8s.master.dev:8080/paths</nowiki><br />
<pre><br />
{<br />
  "paths": [<br />
    "/api",<br />
    "/api/v1",<br />
    "/apis",<br />
    "/apis/autoscaling",<br />
    "/apis/autoscaling/v1",<br />
    "/apis/batch",<br />
    "/apis/batch/v1",<br />
    "/apis/extensions",<br />
    "/apis/extensions/v1beta1",<br />
    "/healthz",<br />
    "/healthz/ping",<br />
    "/logs/",<br />
    "/metrics",<br />
    "/resetMetrics",<br />
    "/swagger-ui/",<br />
    "/swaggerapi/",<br />
    "/ui/",<br />
    "/version"<br />
  ]<br />
}<br />
</pre><br />
<br />
* List all available paths (key-value stores) known to etcd:<br />
$ etcdctl ls / --recursive<br />
<br />
The master controller in a Kubernetes cluster must have the following services running to function as the master host in the cluster:<br />
* ntpd<br />
* etcd<br />
* kube-controller-manager<br />
* kube-apiserver<br />
* kube-scheduler<br />
<br />
Note: The Docker daemon should not be running on the master host.<br />
<br />
===Install and configure the minions===<br />
''Note: Run the following commands/steps on all minion hosts.''<br />
<br />
* Log into the k8s minion hosts:<br />
$ vagrant ssh minion1 # do the same for minion2 and minion3<br />
<br />
* Edit <code>/etc/kubernetes/config</code> and add (or make changes to) the following lines:<br />
KUBE_MASTER="--master=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://k8s.master.dev:2379</nowiki>"<br />
<br />
* Edit <code>/etc/kubernetes/kubelet</code> and add (or make changes to) the following lines:<br />
<pre><br />
###<br />
# kubernetes kubelet (minion) config<br />
<br />
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)<br />
KUBELET_ADDRESS="--address=0.0.0.0"<br />
<br />
# The port for the info server to serve on<br />
KUBELET_PORT="--port=10250"<br />
<br />
# You may leave this blank to use the actual hostname<br />
KUBELET_HOSTNAME="--hostname-override=k8s.minion1.dev" # ***CHANGE TO CORRECT MINION HOSTNAME***<br />
<br />
# location of the api-server<br />
KUBELET_API_SERVER="--api-servers=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
<br />
# pod infrastructure container<br />
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"<br />
<br />
# Add your own!<br />
KUBELET_ARGS=""<br />
</pre><br />
<br />
* Enable and start the following services:<br />
$ for SERVICE in kube-proxy kubelet docker; do<br />
systemctl restart $SERVICE<br />
systemctl enable $SERVICE<br />
systemctl status $SERVICE<br />
done<br />
<br />
* Test that Docker is running and can start containers:<br />
$ docker info<br />
$ docker pull hello-world<br />
$ docker run hello-world<br />
<br />
Each minion in a Kubernetes cluster must have the following services running to function as a member of the cluster (i.e., a "Ready" node):<br />
* ntpd<br />
* kubelet<br />
* kube-proxy<br />
* docker<br />
<br />
===Kubectl: Exploring our environment===<br />
''Note: Run all of the following commands on the master host.''<br />
<br />
* Get a list of nodes with <code>kubectl</code>:<br />
$ kubectl get nodes<br />
<pre><br />
NAME              STATUS    AGE<br />
k8s.minion1.dev   Ready     20m<br />
k8s.minion2.dev   Ready     12m<br />
k8s.minion3.dev   Ready     12m<br />
</pre><br />
<br />
* Describe nodes with <code>kubectl</code>:<br />
<br />
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'<br />
$ kubectl get nodes -o jsonpath='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' | tr ';' "\n"<br />
<pre><br />
k8s.minion1.dev:OutOfDisk=False<br />
Ready=True<br />
k8s.minion2.dev:OutOfDisk=False<br />
Ready=True<br />
k8s.minion3.dev:OutOfDisk=False<br />
Ready=True<br />
</pre><br />
<br />
* Get the man page for <code>kubectl</code>:<br />
$ man kubectl-get<br />
<br />
==Working with our Kubernetes cluster==<br />
<br />
''Note: The following section will be working from within the Kubernetes cluster we created above.''<br />
<br />
===Create and deploy pod definitions===<br />
<br />
* Turn off nodes 2 and 3:<br />
minion{2,3}$ systemctl stop kubelet kube-proxy<br />
<br />
master$ kubectl get nodes<br />
<pre><br />
NAME              STATUS     AGE<br />
k8s.minion1.dev   Ready      1h<br />
k8s.minion2.dev   NotReady   37m<br />
k8s.minion3.dev   NotReady   39m<br />
</pre><br />
<br />
* Check for any k8s Pods (there should be none):<br />
master$ kubectl get pods<br />
<br />
* Create a builds directory for our Pods:<br />
master$ mkdir builds && cd $_<br />
<br />
* Create a Pod running Nginx inside a Docker container:<br />
<pre><br />
master$ kubectl create -f - <<EOF<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx:1.7.9<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
* Check on Pod creation status:<br />
master$ kubectl get pods<br />
<pre><br />
NAME      READY     STATUS              RESTARTS   AGE<br />
nginx     0/1       ContainerCreating   0          2s<br />
</pre><br />
master$ kubectl get pods<br />
<pre><br />
NAME      READY     STATUS    RESTARTS   AGE<br />
nginx     1/1       Running   0          3m<br />
</pre><br />
<br />
minion1$ docker ps<br />
<pre><br />
CONTAINER ID   IMAGE         COMMAND                  CREATED         STATUS          PORTS   NAMES<br />
a718c6c0355d   nginx:1.7.9   "nginx -g 'daemon off"   3 minutes ago   Up 3 minutes            k8s_nginx.4580025_nginx_default_699e...<br />
</pre><br />
<br />
master$ kubectl describe pod nginx<br />
<br />
master$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1<br />
busybox$ wget -qO- 172.17.0.2<br />
master$ kubectl delete pod busybox<br />
master$ kubectl delete pod nginx<br />
<br />
* Port forwarding:<br />
master$ kubectl create -f nginx.yml # see above for YAML<br />
master$ kubectl port-forward nginx :80 &<br />
I1020 23:12:29.478742 23394 portforward.go:213] Forwarding from [::1]:40065 -> 80<br />
master$ curl -I localhost:40065<br />
<br />
===Tags, labels, and selectors===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-pod-label.yml<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx<br />
  labels:<br />
    app: nginx<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx:1.7.9<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create -f nginx-pod-label.yml<br />
master$ kubectl get pods -l app=nginx<br />
master$ kubectl describe pods -l app=nginx<br />
<br />
* Add labels or overwrite existing ones:<br />
master$ kubectl label pods nginx new-label=mynginx<br />
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'<br />
new-label=mynginx<br />
master$ kubectl label pods nginx new-label=foo<br />
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'<br />
new-label=foo<br />
<br />
===Deployments===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-dev.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-dev<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-dev<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-dev<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-prod.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-prod<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-prod<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-prod<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create --validate -f nginx-deployment-dev.yml<br />
master$ kubectl create --validate -f nginx-deployment-prod.yml<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME                                     READY     STATUS    RESTARTS   AGE<br />
nginx-deployment-dev-104434401-jiiic     1/1       Running   0          5m<br />
nginx-deployment-prod-3051195443-hj9b1   1/1       Running   0          12m<br />
</pre><br />
<br />
master$ kubectl describe deployments -l app=nginx-deployment-dev<br />
<pre><br />
Name: nginx-deployment-dev<br />
Namespace: default<br />
CreationTimestamp: Thu, 20 Oct 2016 23:48:46 +0000<br />
Labels: app=nginx-deployment-dev<br />
Selector: app=nginx-deployment-dev<br />
Replicas: 1 updated | 1 total | 1 available | 0 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 1 max unavailable, 1 max surge<br />
OldReplicaSets: <none><br />
NewReplicaSet: nginx-deployment-dev-2568522567 (1/1 replicas created)<br />
...<br />
</pre><br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE<br />
nginx-deployment-prod   1         1         1            1           44s<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-dev-update.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-dev<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-dev<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-dev<br />
        image: nginx:1.8 # ***CHANGED***<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
master$ kubectl apply -f nginx-deployment-dev-update.yml<br />
master$ kubectl get pods -l app=nginx-deployment-dev<br />
<pre><br />
NAME                                   READY     STATUS              RESTARTS   AGE<br />
nginx-deployment-dev-104434401-jiiic   0/1       ContainerCreating   0          27s<br />
</pre><br />
master$ kubectl get pods -l app=nginx-deployment-dev<br />
<pre><br />
NAME                                   READY     STATUS    RESTARTS   AGE<br />
nginx-deployment-dev-104434401-jiiic   1/1       Running   0          6m<br />
</pre><br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment nginx-deployment-dev<br />
master$ kubectl delete deployment nginx-deployment-prod<br />
<br />
===Multi-Pod (container) replication controller===<br />
<br />
* Start the other two nodes (the ones we previously stopped):<br />
minion2$ systemctl start kubelet kube-proxy<br />
minion3$ systemctl start kubelet kube-proxy<br />
master$ kubectl get nodes<br />
<pre><br />
NAME              STATUS    AGE<br />
k8s.minion1.dev   Ready     2h<br />
k8s.minion2.dev   Ready     2h<br />
k8s.minion3.dev   Ready     2h<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-multi-node.yml<br />
---<br />
apiVersion: v1<br />
kind: ReplicationController<br />
metadata:<br />
  name: nginx-www<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    app: nginx<br />
  template:<br />
    metadata:<br />
      name: nginx<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create -f nginx-multi-node.yml<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME              READY     STATUS              RESTARTS   AGE<br />
nginx-www-2evxu   0/1       ContainerCreating   0          10s<br />
nginx-www-416ct   0/1       ContainerCreating   0          10s<br />
nginx-www-ax41w   0/1       ContainerCreating   0          10s<br />
</pre><br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME              READY     STATUS    RESTARTS   AGE<br />
nginx-www-2evxu   1/1       Running   0          1m<br />
nginx-www-416ct   1/1       Running   0          1m<br />
nginx-www-ax41w   1/1       Running   0          1m<br />
</pre><br />
<br />
master$ kubectl describe pods | awk '/^Node/{print $2}'<br />
<pre><br />
k8s.minion2.dev/192.168.200.102<br />
k8s.minion1.dev/192.168.200.101<br />
k8s.minion3.dev/192.168.200.103<br />
</pre><br />
<br />
minion1$ docker ps # 1 nginx container running<br />
minion2$ docker ps # 1 nginx container running<br />
minion3$ docker ps # 1 nginx container running<br />
minion3$ docker ps --format "<nowiki>{{.Image}}</nowiki>"<br />
<pre><br />
nginx<br />
gcr.io/google_containers/pause:2.0<br />
</pre><br />
<br />
master$ kubectl describe replicationcontroller<br />
<pre><br />
Name: nginx-www<br />
Namespace: default<br />
Image(s): nginx<br />
Selector: app=nginx<br />
Labels: app=nginx<br />
Replicas: 3 current / 3 desired<br />
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed<br />
...<br />
</pre><br />
<br />
* Attempt to delete one of the three pods:<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME              READY     STATUS    RESTARTS   AGE<br />
nginx-www-2evxu   1/1       Running   0          11m<br />
nginx-www-416ct   1/1       Running   0          11m<br />
nginx-www-ax41w   1/1       Running   0          11m<br />
</pre><br />
master$ kubectl delete pod nginx-www-2evxu<br />
master$ kubectl get pods<br />
<pre><br />
NAME              READY     STATUS    RESTARTS   AGE<br />
nginx-www-3cck4   1/1       Running   0          12s<br />
nginx-www-416ct   1/1       Running   0          11m<br />
nginx-www-ax41w   1/1       Running   0          11m<br />
</pre><br />
<br />
A new pod (<code>nginx-www-3cck4</code>) automatically started up. This is because the expected state, as defined in our YAML file, is for there to be 3 pods running at all times. Thus, if one or more of the pods were to go down, a new pod (or pods) would automatically start up to bring the state back to the expected state.<br />
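<br />
The same reconciliation also drives explicit scaling. For example (a sketch using the ReplicationController created above), raising the desired count immediately creates additional Pods:<br />
master$ kubectl scale replicationcontroller nginx-www --replicas=5<br />
master$ kubectl get pods # two additional nginx-www Pods are being created<br />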
<br />
* To force-delete all pods:<br />
master$ kubectl delete replicationcontroller nginx-www<br />
master$ kubectl get pods # nothing<br />
<br />
===Create and deploy service definitions===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-service.yml<br />
---<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: nginx-service<br />
spec:<br />
  ports:<br />
  - port: 8000<br />
    targetPort: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: nginx<br />
EOF<br />
</pre><br />
<br />
master$ kubectl get services<br />
<pre><br />
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE<br />
kubernetes   10.254.0.1   <none>        443/TCP   3h<br />
</pre><br />
master$ kubectl create -f nginx-service.yml<br />
<br />
master$ kubectl get services<br />
<pre><br />
NAME            CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE<br />
kubernetes      10.254.0.1       <none>        443/TCP    3h<br />
nginx-service   10.254.110.127   <none>        8000/TCP   10s<br />
</pre><br />
<br />
master$ kubectl run busybox --generator=run-pod/v1 --image=busybox --restart=Never --tty -i<br />
busybox$ wget -qO- 10.254.110.127:8000 # works<br />
<br />
* Cleanup<br />
master$ kubectl delete pod busybox<br />
master$ kubectl delete service nginx-service<br />
master$ kubectl get pods<br />
<pre><br />
NAME              READY     STATUS    RESTARTS   AGE<br />
nginx-www-jh2e9   1/1       Running   0          13m<br />
nginx-www-jir2g   1/1       Running   0          13m<br />
nginx-www-w91uw   1/1       Running   0          13m<br />
</pre><br />
master$ kubectl delete replicationcontroller nginx-www<br />
master$ kubectl get pods # nothing<br />
<br />
===Creating temporary Pods at the CLI===<br />
<br />
* Make sure we have no Pods running:<br />
master$ kubectl get pods<br />
<br />
* Create temporary deployment pod:<br />
master$ kubectl run mysample --image=foobar/apache<br />
master$ kubectl get pods<br />
<pre><br />
NAME                        READY     STATUS              RESTARTS   AGE<br />
mysample-1424711890-fhtxb   0/1       ContainerCreating   0          1s<br />
</pre><br />
master$ kubectl get deployment <br />
<pre><br />
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE<br />
mysample   1         1         1            0           7s<br />
</pre><br />
<br />
* Create a temporary deployment pod (where we know it will fail):<br />
master$ kubectl run myexample --image=christophchamp/ubuntu_sysadmin<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                         READY     STATUS             RESTARTS   AGE       NODE<br />
myexample-3534121234-mpr35   0/1       CrashLoopBackOff   12         39m       k8s.minion3.dev<br />
mysample-2812764540-74c5h    1/1       Running            0          41m       k8s.minion2.dev<br />
</pre><br />
<br />
* Check on why the "myexample" pod is in status "CrashLoopBackOff":<br />
master$ kubectl describe pods/myexample-3534121234-mpr35<br />
master$ kubectl describe deployments/mysample<br />
master$ kubectl describe pods/mysample-2812764540-74c5h | awk '/^Node/{print $2}'<br />
k8s.minion2.dev/192.168.200.102<br />
<br />
master$ kubectl delete deployment mysample<br />
<br />
* Run multiple replicas of the same pod:<br />
master$ kubectl run myreplicas --image=latest123/apache --replicas=2 --labels=app=myapache,version=1.0.0<br />
master$ kubectl describe deployment myreplicas <br />
<pre><br />
Name: myreplicas<br />
Namespace: default<br />
CreationTimestamp: Fri, 21 Oct 2016 19:10:30 +0000<br />
Labels: app=myapache,version=1.0.0<br />
Selector: app=myapache,version=1.0.0<br />
Replicas: 2 updated | 2 total | 1 available | 1 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 1 max unavailable, 1 max surge<br />
OldReplicaSets: <none><br />
NewReplicaSet: myreplicas-2209834598 (2/2 replicas created)<br />
...<br />
</pre><br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                          READY     STATUS    RESTARTS   AGE       NODE<br />
myreplicas-2209834598-5iyer   1/1       Running   0          1m        k8s.minion1.dev<br />
myreplicas-2209834598-cslst   1/1       Running   0          1m        k8s.minion2.dev<br />
</pre><br />
<br />
master$ kubectl describe pods -l version=1.0.0<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment myreplicas<br />
<br />
===Interacting with Pod containers===<br />
<br />
* Create example Apache pod definition file:<br />
<pre><br />
master$ cat << EOF > apache.yml<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: apache<br />
spec:<br />
  containers:<br />
  - name: apache<br />
    image: latest123/apache<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
master$ kubectl create -f apache.yml<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME      READY     STATUS    RESTARTS   AGE       NODE<br />
apache    1/1       Running   0          12m       k8s.minion3.dev<br />
</pre><br />
<br />
* Test pod and make some basic configuration changes:<br />
master$ kubectl exec apache date<br />
master$ kubectl exec apache -i -t -- cat /var/www/html/index.html # default apache HTML<br />
master$ kubectl exec apache -i -t -- /bin/bash<br />
container$ export TERM=xterm<br />
container$ echo "xtof test" > /var/www/html/index.html<br />
minion3$ curl 172.17.0.2<br />
xtof test<br />
container$ exit<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME      READY     STATUS    RESTARTS   AGE       NODE<br />
apache    1/1       Running   0          12m       k8s.minion3.dev<br />
</pre><br />
Pod/container is still running even after we exited (as expected).<br />
<br />
* Cleanup:<br />
master$ kubectl delete pod apache<br />
<br />
===Logs===<br />
<br />
* Start our example Apache pod to use for checking Kubernetes logging features:<br />
master$ kubectl create -f apache.yml <br />
master$ kubectl get pods<br />
<pre><br />
NAME      READY     STATUS    RESTARTS   AGE<br />
apache    1/1       Running   0          9s<br />
</pre><br />
master$ kubectl logs apache<br />
<pre><br />
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message<br />
</pre><br />
master$ kubectl logs --tail=10 apache<br />
master$ kubectl logs --since=24h apache # or 10s, 2m, etc.<br />
master$ kubectl logs -f apache # follow the logs<br />
master$ kubectl logs -f -c apache apache # where -c specifies the container name<br />
<br />
* Cleanup:<br />
master$ kubectl delete pod apache<br />
<br />
===Autoscaling and scaling Pods===<br />
<br />
master$ kubectl run myautoscale --image=latest123/apache --port=80 --labels=app=myautoscale<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                           READY     STATUS    RESTARTS   AGE       NODE<br />
myautoscale-3243017378-kq4z7   1/1       Running   0          47s       k8s.minion3.dev<br />
</pre><br />
<br />
* Create an autoscale definition:<br />
master$ kubectl autoscale deployment myautoscale --min=2 --max=6 --cpu-percent=80<br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE<br />
myautoscale   2         2         2            2           4m<br />
</pre><br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                           READY     STATUS    RESTARTS   AGE       NODE<br />
myautoscale-3243017378-kq4z7   1/1       Running   0          3m        k8s.minion3.dev<br />
myautoscale-3243017378-r2f3d   1/1       Running   0          4s        k8s.minion2.dev<br />
</pre><br />
<br />
* Scale up an already autoscaled deployment:<br />
master$ kubectl scale --current-replicas=2 --replicas=4 deployment/myautoscale<br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE<br />
myautoscale   4         4         4            4           8m<br />
</pre><br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                           READY     STATUS    RESTARTS   AGE       NODE<br />
myautoscale-3243017378-2rxhp   1/1       Running   0          8s        k8s.minion1.dev<br />
myautoscale-3243017378-kq4z7   1/1       Running   0          7m        k8s.minion3.dev<br />
myautoscale-3243017378-ozxs8   1/1       Running   0          8s        k8s.minion3.dev<br />
myautoscale-3243017378-r2f3d   1/1       Running   0          4m        k8s.minion2.dev<br />
</pre><br />
<br />
* Scale down:<br />
master$ kubectl scale --current-replicas=4 --replicas=2 deployment/myautoscale<br />
<br />
Note: You cannot scale down past the minimum number of pods/containers specified in the original autoscale deployment (i.e., <code>--min=2</code> in our example).<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment myautoscale<br />
<br />
===Failure and recovery===<br />
<br />
master$ kubectl run myrecovery --image=latest123/apache --port=80 --replicas=2 --labels=app=myrecovery<br />
master$ kubectl get deployments<br />
<pre><br />
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE<br />
myrecovery   2         2         2            2           6s<br />
</pre><br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                         READY     STATUS    RESTARTS   AGE       NODE<br />
myrecovery-563119102-5xu8f   1/1       Running   0          12s       k8s.minion1.dev<br />
myrecovery-563119102-zw6wp   1/1       Running   0          12s       k8s.minion2.dev<br />
</pre><br />
<br />
* Now stop Kubernetes- and Docker-related services on one of the minions/nodes (so we have a total of 2 nodes online):<br />
minion1$ systemctl stop docker kubelet kube-proxy<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                         READY     STATUS    RESTARTS   AGE       NODE<br />
myrecovery-563119102-qyi04   1/1       Running   0          7m        k8s.minion3.dev<br />
myrecovery-563119102-zw6wp   1/1       Running   0          14m       k8s.minion2.dev<br />
</pre><br />
The Pod switched from minion1 to minion3.<br />
<br />
* Now stop Kubernetes- and Docker-related services on one of the remaining online minions/nodes (so we have a total of 1 node online):<br />
minion2$ systemctl stop docker kubelet kube-proxy<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                         READY     STATUS    RESTARTS   AGE       NODE<br />
myrecovery-563119102-b5tim   1/1       Running   0          2m        k8s.minion3.dev<br />
myrecovery-563119102-qyi04   1/1       Running   0          17m       k8s.minion3.dev<br />
</pre><br />
Both Pods are now running on minion3, the only available node.<br />
<br />
* Start up Kubernetes- and Docker-related services again on minion1 and delete one of the Pods:<br />
minion1$ systemctl start docker kubelet kube-proxy<br />
master$ kubectl delete pod myrecovery-563119102-b5tim<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME                         READY     STATUS    RESTARTS   AGE       NODE<br />
myrecovery-563119102-8unzg   1/1       Running   0          1m        k8s.minion1.dev<br />
myrecovery-563119102-qyi04   1/1       Running   0          20m       k8s.minion3.dev<br />
</pre><br />
Pods are now running on separate nodes.<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployments/myrecovery<br />
<br />
==Minikube==<br />
[https://github.com/kubernetes/minikube Minikube] is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.<br />
<br />
* Install Minikube:<br />
$ curl -Lo minikube <nowiki>https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64</nowiki> \<br />
&& chmod +x minikube && sudo mv minikube /usr/local/bin/<br />
<br />
* Install kubectl<br />
$ curl -Lo kubectl <nowiki>https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl</nowiki> \<br />
&& chmod +x kubectl && sudo mv kubectl /usr/local/bin/<br />
<br />
* Test install<br />
$ minikube start<br />
#~OR~<br />
$ minikube start --memory 4096 # give it 4GB of RAM<br />
$ minikube status<br />
$ minikube dashboard<br />
$ kubectl config view<br />
$ kubectl cluster-info<br />
<br />
NOTE: If you have an old version of minikube installed, you should probably do the following before upgrading to a much newer version:<br />
$ minikube delete --all --purge<br />
<br />
Get the details on the CLI options for kubectl [https://kubernetes.io/docs/reference/kubectl/overview/ here].<br />
<br />
Using the <code>kubectl proxy</code> command, kubectl authenticates with the API Server on the Master Node and makes the dashboard available on <nowiki>http://localhost:8001/ui</nowiki>:<br />
<br />
$ kubectl proxy<br />
Starting to serve on 127.0.0.1:8001<br />
<br />
After running the above command, we can access the dashboard at <code><nowiki>http://127.0.0.1:8001/ui</nowiki></code>.<br />
<br />
Once the kubectl proxy is configured, we can send requests to localhost on the proxy port:<br />
<br />
$ curl <nowiki>http://localhost:8001/</nowiki><br />
$ curl <nowiki>http://localhost:8001/version</nowiki><br />
<pre><br />
{<br />
  "major": "1",<br />
  "minor": "8",<br />
  "gitVersion": "v1.8.0",<br />
  "gitCommit": "0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4",<br />
  "gitTreeState": "clean",<br />
  "buildDate": "2017-11-29T22:43:34Z",<br />
  "goVersion": "go1.9.1",<br />
  "compiler": "gc",<br />
  "platform": "linux/amd64"<br />
}<br />
</pre><br />
<br />
Without kubectl proxy configured, we can get the Bearer Token using kubectl, and then send it with the API request. A Bearer Token is an access token which is generated by the authentication server (the API server on the Master Node) and given back to the client. Using that token, the client can connect back to the Kubernetes API server without providing further authentication details, and then, access resources.<br />
<br />
* Get the k8s token:<br />
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | awk '/^default/{print $1}') | awk '/^token/{print $2}')<br />
<br />
* Get the k8s API server endpoint:<br />
$ APISERVER=$(kubectl config view | awk '/https/{print $2}')<br />
<br />
* Access the API Server:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}<br />
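<br />
From here, any API path can be queried in the same way. For example (a sketch), the following lists the Pods in the <code>default</code> Namespace:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}/api/v1/namespaces/default/pods<br />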
<br />
===Using Minikube as a local Docker registry===<br />
<br />
Sometimes it is useful to have a local Docker registry for Kubernetes to pull images from. As the Minikube [https://github.com/kubernetes/minikube/blob/0c616a6b42b28a1aab8397f5a9061f8ebbd9f3d9/README.md#reusing-the-docker-daemon README] describes, you can reuse the Docker daemon running within Minikube with <code>eval $(minikube docker-env)</code> to build images that the cluster can then run directly.<br />
<br />
To use an image without uploading it to some external registry (e.g., Docker Hub), you can follow these steps (put together in the sketch after the note below):<br />
* Set the environment variables with <code>eval $(minikube docker-env)</code><br />
* Build the image with the Docker daemon of Minikube (e.g., <code>docker build -t my-image .</code>)<br />
* Set the image in the pod spec like the build tag (e.g., <code>my-image</code>)<br />
* Set the <code>imagePullPolicy</code> to <code>Never</code>, otherwise Kubernetes will try to download the image.<br />
<br />
Important note: You have to run <code>eval $(minikube docker-env)</code> on each terminal you want to use since it only sets the environment variables for the current shell session.<br />
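<br />
Putting the above steps together (a sketch; <code>my-image</code> and <code>my-app</code> are hypothetical names):<br />
$ eval $(minikube docker-env) # point this shell at Minikube's Docker daemon<br />
$ docker build -t my-image . # the image is now available to the cluster's kubelet<br />
$ kubectl run my-app --image=my-image --image-pull-policy=Never<br />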
<br />
===Working with our Minikube-based Kubernetes cluster===<br />
<br />
;Kubernetes Object Model<br />
<br />
Kubernetes has a very rich object model, with which it represents different persistent entities in the Kubernetes cluster. Those entities describe:<br />
<br />
* What containerized applications we are running and on which node<br />
* Application resource consumption<br />
* Different policies attached to applications, like restart/upgrade policies, fault tolerance, etc.<br />
<br />
With each object, we declare our intent or desired state using the '''spec''' field. The Kubernetes system manages the '''status''' field for objects, in which it records the actual state of the object. At any given point in time, the Kubernetes Control Plane tries to match the object's actual state to the object's desired state.<br />
<br />
Examples of Kubernetes objects are Pods, Deployments, ReplicaSets, etc.<br />
<br />
To create an object, we need to provide the '''spec''' field to the Kubernetes API Server. The '''spec''' field describes the desired state, along with some basic information, like the name. The API request to create the object must have the '''spec''' field, as well as other details, in a JSON format. Most often, we provide an object's definition in a YAML file, which kubectl converts into a JSON payload and sends to the API Server.<br />
<br />
Below is an example of a ''Deployment'' object:<br />
<pre><br />
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment<br />
  labels:<br />
    app: nginx<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />
<br />
With the '''apiVersion''' field in the example above, we mention the API endpoint on the API Server which we want to connect to. Note that you can see what API version to use with the following call to the API server:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}/apis/apps<br />
Use the '''preferredVersion''' for most cases.<br />
<br />
With the '''kind''' field, we mention the object type &mdash; in our case, we have '''Deployment'''. With the '''metadata''' field, we attach the basic information to objects, like the name. Notice that in the above we have two '''spec''' fields ('''spec''' and '''spec.template.spec'''). With '''spec''', we define the desired state of the deployment. In our example, we want to make sure that, at any point in time, at least 3 ''Pods'' are running, which are created using the Pod template defined in '''spec.template'''. In '''spec.template.spec''', we define the desired state of the Pod (here, our Pod would be created using nginx:1.7.9).<br />
<br />
Once the object is created, the Kubernetes system attaches the '''status''' field to the object.<br />
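<br />
Both fields can be inspected on a live object. For example (assuming the <code>nginx-deployment</code> object above has been created), the recorded actual state can be read back with:<br />
$ kubectl get deployment nginx-deployment -o jsonpath='{.status}'<br />
#~OR~<br />
$ kubectl get deployment nginx-deployment -o yaml # shows both spec and status<br />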
<br />
;Connecting users to Pods<br />
<br />
To access the application, a user/client needs to connect to the Pods. As Pods are ephemeral in nature, resources like IP addresses allocated to it cannot be static. Pods could die abruptly or be rescheduled based on existing requirements.<br />
<br />
As an example, consider a scenario in which a user/client is connecting to a Pod using its IP address. Unexpectedly, the Pod to which the user/client is connected dies, and a new Pod is created by the controller. The new Pod will have a new IP address, which the user/client of the earlier Pod will not automatically know. To overcome this situation, Kubernetes provides a higher-level abstraction called a ''[https://kubernetes.io/docs/concepts/services-networking/service/ Service]'', which logically groups Pods and defines a policy to access them. This grouping is achieved via Labels and Selectors (see above).<br />
<br />
So, for our example, we would use Selectors (e.g., "<code>app==frontend</code>" and "<code>app==db</code>") to group our Pods into two logical groups. We can assign a name to the logical grouping, referred to as a "service name". In our example, we have created two Services, <code>frontend-svc</code> and <code>db-svc</code>, and they have the "<code>app==frontend</code>" and the "<code>app==db</code>" Selectors, respectively.<br />
<br />
The following is an example of a Service object:<br />
<pre><br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: frontend-svc<br />
spec:<br />
  selector:<br />
    app: frontend<br />
  ports:<br />
  - protocol: TCP<br />
    port: 80<br />
    targetPort: 5000<br />
</pre><br />
<br />
In this example, we are creating a <code>frontend-svc</code> Service by selecting all the Pods that have the Label "<code>app</code>" equal to "<code>frontend</code>". By default, each Service also gets an IP address, which is routable only inside the cluster. In our case, we have 172.17.0.4 and 172.17.0.5 IP addresses for our <code>frontend-svc</code> and <code>db-svc</code> Services, respectively. The IP address attached to each Service is also known as the ClusterIP for that Service.<br />
<br />
<pre><br />
 +------------------------------------+<br />
 | select: app==frontend              |          container (app:frontend; 10.0.1.3)<br />
 | service=frontend-svc (172.17.0.4)  |------>   container (app:frontend; 10.0.1.4)<br />
 +------------------------------------+          container (app:frontend; 10.0.1.5)<br />
                  ^<br />
                 /<br />
                /<br />
     user/client<br />
                \<br />
                 \<br />
                  v<br />
 +------------------------------------+<br />
 | select: app==db                    |------>   container (app:db; 10.0.1.10)<br />
 | service=db-svc (172.17.0.5)        |<br />
 +------------------------------------+<br />
</pre><br />
<br />
The user/client now connects to a Service via ''its'' IP address, which forwards the traffic to one of the Pods attached to it. A Service does the load balancing while selecting the Pods for forwarding the data/traffic.<br />
<br />
While forwarding the traffic from the Service, we can select the target port on the Pod. In our example, for <code>frontend-svc</code>, we will receive requests from the user/client on port 80. We will then forward these requests to one of the attached Pods on port 5000. If the target port is not defined explicitly, then traffic will be forwarded to Pods on the port on which the Service receives traffic.<br />
<br />
A tuple of the Pod IP address and the <code>targetPort</code> is referred to as a ''Service Endpoint''. In our case, <code>frontend-svc</code> has 3 Endpoints: <code>10.0.1.3:5000</code>, <code>10.0.1.4:5000</code>, and <code>10.0.1.5:5000</code>.<br />
<br />
===kube-proxy===<br />
All of the Worker Nodes run a daemon called kube-proxy, which watches the API Server on the Master Node for the addition and removal of Services and endpoints. For each new Service, on each node, kube-proxy configures the iptables rules to capture the traffic for its ClusterIP and forwards it to one of the endpoints. When the Service is removed, kube-proxy removes the iptables rules on all nodes as well.<br />
<br />
===Service discovery===<br />
As Services are the primary mode of communication in Kubernetes, we need a way to discover them at runtime. Kubernetes supports two methods of discovering a Service:<br />
<br />
;Environment Variables : As soon as the Pod starts on any Worker Node, the kubelet daemon running on that node adds a set of environment variables in the Pod for all active Services. For example, if we have an active Service called <code>redis-master</code>, which exposes port 6379, and its ClusterIP is 172.17.0.6, then, on a newly created Pod, we can see the following environment variables:<br />
<br />
REDIS_MASTER_SERVICE_HOST=172.17.0.6<br />
REDIS_MASTER_SERVICE_PORT=6379<br />
REDIS_MASTER_PORT=tcp://172.17.0.6:6379<br />
REDIS_MASTER_PORT_6379_TCP=tcp://172.17.0.6:6379<br />
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp<br />
REDIS_MASTER_PORT_6379_TCP_PORT=6379<br />
REDIS_MASTER_PORT_6379_TCP_ADDR=172.17.0.6<br />
<br />
With this solution, we need to be careful while ordering our Services, as the Pods will not have the environment variables set for Services which are created after the Pods are created.<br />
<br />
;DNS : Kubernetes has an add-on for DNS, which creates a DNS record for each Service in the format <code>my-svc.my-namespace.svc.cluster.local</code>. Services within the same Namespace can reach other Services with just their name. For example, if we add a Service <code>redis-master</code> in the <code>my-ns</code> Namespace, then all the Pods in the same Namespace can reach the redis Service just by using its name, <code>redis-master</code>. Pods from other Namespaces can reach the Service by adding the respective Namespace as a suffix, like <code>redis-master.my-ns</code>.<br />
: This is the most common and highly recommended solution. For example, in the previous section's image, we have seen that an internal DNS is configured, which maps our services <code>frontend-svc</code> and <code>db-svc</code> to 172.17.0.4 and 172.17.0.5, respectively.<br />
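<br />
As a quick check (a sketch; it assumes the DNS add-on is running and that the <code>redis-master</code> Service from the example above exists), the DNS record can be resolved from any Pod:<br />
$ kubectl run -i --tty dnstest --image=busybox --restart=Never -- nslookup redis-master.my-ns<br />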
<br />
===Service Type===<br />
While defining a Service, we can also choose its access scope. We can decide whether the Service:<br />
<br />
* is only accessible within the cluster;<br />
* is accessible from within the cluster and the external world; or<br />
* maps to an external entity which resides outside the cluster.<br />
<br />
Access scope is decided by ''ServiceType'', which can be mentioned when creating the Service.<br />
<br />
;ClusterIP : (the default ''ServiceType''.) A Service gets its Virtual IP address using the ClusterIP. That IP address is used for communicating with the Service and is accessible only within the cluster. <br />
<br />
;NodePort : With this ''ServiceType'', in addition to creating a ClusterIP, a port from the range '''30000-32767''' is mapped to the respective service from all the Worker Nodes. For example, if the mapped NodePort is 32233 for the service <code>frontend-svc</code>, then, if we connect to any Worker Node on port 32233, the node would redirect all the traffic to the assigned ClusterIP (172.17.0.4).<br />
: By default, while exposing a NodePort, a random port is automatically selected by the Kubernetes Master from the port range '''30000-32767'''. If we do not want to assign a dynamic port value for NodePort, then, while creating the Service, we can also give a specific port number from that range.<br />
: The NodePort ServiceType is useful when we want to make our services accessible from the external world. The end-user connects to the Worker Nodes on the specified port, which forwards the traffic to the applications running inside the cluster. To access the application from the external world, administrators can configure a reverse proxy outside the Kubernetes cluster and map the specific endpoint to the respective port on the Worker Nodes.<br />
<br />
;LoadBalancer: With this ''ServiceType'', we have the following:<br />
:* NodePort and ClusterIP Services are automatically created, and the external load balancer will route to them;<br />
:* The Services are exposed at a static port on each Worker Node; and<br />
:* The Service is exposed externally using the underlying Cloud provider's load balancer feature.<br />
: The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of Load Balancers and has the respective support in Kubernetes, as is the case with the Google Cloud Platform and AWS.<br />
<br />
;ExternalIP : A Service can be mapped to an ExternalIP address if it can route to one or more of the Worker Nodes. Traffic that is ingressed into the cluster with the ExternalIP (as destination IP) on the Service port, gets routed to one of the Service endpoints. (Note that ExternalIPs are not managed by Kubernetes. The cluster administrator(s) must have configured the routing to map the ExternalIP address to one of the nodes.)<br />
<br />
;ExternalName : a special ''ServiceType'', which has no Selectors and does not define any endpoints. When accessed within the cluster, it returns a CNAME record of an externally configured service.<br />
: The primary use case of this ServiceType is to make externally configured services like <code>my-database.example.com</code> available inside the cluster, using just the name, like <code>my-database</code>, to other services inside the same Namespace.<br />
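<br />
A minimal ExternalName sketch (the Service name and external hostname are illustrative):<br />
<pre><br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: my-database<br />
spec:<br />
  type: ExternalName<br />
  externalName: my-database.example.com<br />
</pre><br />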
<br />
===Deploying an application===<br />
<br />
<pre><br />
$ kubectl create -f - <<EOF<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: webserver<br />
spec:<br />
  replicas: 3<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: webserver<br />
    spec:<br />
      containers:<br />
      - name: webserver<br />
        image: nginx:alpine<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
<pre><br />
$ kubectl create -f - <<EOF<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: web-service<br />
  labels:<br />
    run: web-service<br />
spec:<br />
  type: NodePort<br />
  ports:<br />
  - port: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: webserver<br />
EOF<br />
</pre><br />
<br />
$ kubectl get service<br />
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE<br />
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP        6h<br />
web-service   NodePort    10.104.107.132   <none>        80:32610/TCP   7m<br />
<br />
Note the "<code>32610</code>" port; this is the NodePort on which the Service is exposed.<br />
<br />
* Get the IP address of your Minikube k8s cluster<br />
$ minikube ip<br />
192.168.99.100<br />
#~OR~<br />
$ minikube service web-service --url<br />
<nowiki>http://192.168.99.100:32610</nowiki><br />
<br />
* Now, check that your web service is serving up a default Nginx website:<br />
$ curl -I <nowiki>http://192.168.99.100:32610</nowiki><br />
HTTP/1.1 200 OK<br />
Server: nginx/1.13.8<br />
Date: Thu, 11 Jan 2018 00:27:51 GMT<br />
Content-Type: text/html<br />
Content-Length: 612<br />
Last-Modified: Wed, 10 Jan 2018 04:10:03 GMT<br />
Connection: keep-alive<br />
ETag: "5a55921b-264"<br />
Accept-Ranges: bytes<br />
<br />
Looks good!<br />
<br />
Finally, destroy the webserver deployment:<br />
$ kubectl delete deployments webserver<br />
<br />
===Using Ingress with Minikube===<br />
<br />
* First check that the Ingress add-on is enabled:<br />
$ minikube addons list | grep ingress<br />
- ingress: disabled<br />
<br />
If it is not, enable it with:<br />
$ minikube addons enable ingress<br />
$ minikube addons list | grep ingress<br />
- ingress: enabled<br />
<br />
* Create an Echo Server Deployment:<br />
<pre><br />
$ cat << EOF >deploy-echoserver.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  labels:<br />
    run: echoserver<br />
  name: echoserver<br />
  namespace: default<br />
spec:<br />
  replicas: 1<br />
  selector:<br />
    matchLabels:<br />
      run: echoserver<br />
  template:<br />
    metadata:<br />
      labels:<br />
        run: echoserver<br />
    spec:<br />
      containers:<br />
      - image: gcr.io/google_containers/echoserver:1.4<br />
        imagePullPolicy: IfNotPresent<br />
        name: echoserver<br />
        ports:<br />
        - containerPort: 8080<br />
          protocol: TCP<br />
      dnsPolicy: ClusterFirst<br />
      restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-echoserver.yml<br />
<br />
* Create the Cheddar cheese Deployment:<br />
<pre><br />
$ cat << EOF >deploy-cheddar-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  labels:<br />
    run: cheddar-cheese<br />
  name: cheddar-cheese<br />
  namespace: default<br />
spec:<br />
  replicas: 1<br />
  selector:<br />
    matchLabels:<br />
      run: cheddar-cheese<br />
  template:<br />
    metadata:<br />
      labels:<br />
        run: cheddar-cheese<br />
    spec:<br />
      containers:<br />
      - image: errm/cheese:cheddar<br />
        imagePullPolicy: IfNotPresent<br />
        name: cheddar-cheese<br />
        ports:<br />
        - containerPort: 80<br />
          protocol: TCP<br />
      dnsPolicy: ClusterFirst<br />
      restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-cheddar-cheese.yml<br />
<br />
* Create the Stilton cheese Deployment:<br />
<pre><br />
$ cat << EOF >deploy-stilton-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  labels:<br />
    run: stilton-cheese<br />
  name: stilton-cheese<br />
  namespace: default<br />
spec:<br />
  replicas: 1<br />
  selector:<br />
    matchLabels:<br />
      run: stilton-cheese<br />
  template:<br />
    metadata:<br />
      labels:<br />
        run: stilton-cheese<br />
    spec:<br />
      containers:<br />
      - image: errm/cheese:stilton<br />
        imagePullPolicy: IfNotPresent<br />
        name: stilton-cheese<br />
        ports:<br />
        - containerPort: 80<br />
          protocol: TCP<br />
      dnsPolicy: ClusterFirst<br />
      restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-stilton-cheese.yml<br />
<br />
* Create the Echo Server Service:<br />
<pre><br />
$ cat << EOF >svc-echoserver.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  labels:<br />
    run: echoserver<br />
  name: echoserver<br />
  namespace: default<br />
spec:<br />
  externalTrafficPolicy: Cluster<br />
  ports:<br />
  - nodePort: 31116<br />
    port: 8080<br />
    protocol: TCP<br />
    targetPort: 8080<br />
  selector:<br />
    run: echoserver<br />
  sessionAffinity: None<br />
  type: NodePort<br />
status:<br />
  loadBalancer: {}<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-echoserver.yml<br />
<br />
* Create the Cheddar cheese Service:<br />
<pre><br />
$ cat << EOF >svc-cheddar-cheese.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  labels:<br />
    run: cheddar-cheese<br />
  name: cheddar-cheese<br />
  namespace: default<br />
spec:<br />
  externalTrafficPolicy: Cluster<br />
  ports:<br />
  - nodePort: 32467<br />
    port: 80<br />
    protocol: TCP<br />
    targetPort: 80<br />
  selector:<br />
    run: cheddar-cheese<br />
  sessionAffinity: None<br />
  type: NodePort<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-cheddar-cheese.yml<br />
<br />
* Create the Stilton cheese Service:<br />
<pre><br />
$ cat << EOF >svc-stilton-cheese.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  labels:<br />
    run: stilton-cheese<br />
  name: stilton-cheese<br />
  namespace: default<br />
spec:<br />
  externalTrafficPolicy: Cluster<br />
  ports:<br />
  - nodePort: 30197<br />
    port: 80<br />
    protocol: TCP<br />
    targetPort: 80<br />
  selector:<br />
    run: stilton-cheese<br />
  sessionAffinity: None<br />
  type: NodePort<br />
status:<br />
  loadBalancer: {}<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-stilton-cheese.yml<br />
<br />
* Create the Ingress for the above Services:<br />
<pre><br />
$ cat << EOF >ingress-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
  name: ingress-cheese<br />
  annotations:<br />
    nginx.ingress.kubernetes.io/rewrite-target: /<br />
spec:<br />
  backend:<br />
    serviceName: default-http-backend<br />
    servicePort: 80<br />
  rules:<br />
  - host: myminikube.info<br />
    http:<br />
      paths:<br />
      - path: /<br />
        backend:<br />
          serviceName: echoserver<br />
          servicePort: 8080<br />
  - host: cheeses.all<br />
    http:<br />
      paths:<br />
      - path: /stilton<br />
        backend:<br />
          serviceName: stilton-cheese<br />
          servicePort: 80<br />
      - path: /cheddar<br />
        backend:<br />
          serviceName: cheddar-cheese<br />
          servicePort: 80<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f ingress-cheese.yml<br />
<br />
* Check that everything is up:<br />
<pre><br />
$ kubectl get all<br />
NAME READY STATUS RESTARTS AGE<br />
pod/cheddar-cheese-d6d6587c7-4bgcz 1/1 Running 0 12m<br />
pod/echoserver-55f97d5bff-pdv65 1/1 Running 0 12m<br />
pod/stilton-cheese-6d64cbc79-g7h4w 1/1 Running 0 12m<br />
<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
service/cheddar-cheese NodePort 10.109.238.92 <none> 80:32467/TCP 12m<br />
service/echoserver NodePort 10.98.60.194 <none> 8080:31116/TCP 12m<br />
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h<br />
service/stilton-cheese NodePort 10.108.175.207 <none> 80:30197/TCP 12m<br />
<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
deployment.apps/cheddar-cheese 1 1 1 1 12m<br />
deployment.apps/echoserver 1 1 1 1 12m<br />
deployment.apps/stilton-cheese 1 1 1 1 12m<br />
<br />
NAME DESIRED CURRENT READY AGE<br />
replicaset.apps/cheddar-cheese-d6d6587c7 1 1 1 12m<br />
replicaset.apps/echoserver-55f97d5bff 1 1 1 12m<br />
replicaset.apps/stilton-cheese-6d64cbc79 1 1 1 12m<br />
<br />
$ kubectl get ing<br />
NAME HOSTS ADDRESS PORTS AGE<br />
ingress-cheese myminikube.info,cheeses.all 10.0.2.15 80 12m<br />
</pre><br />
<br />
* Add your host aliases:<br />
$ echo "$(minikube ip) myminikube.info cheeses.all" | sudo tee -a /etc/hosts<br />
<br />
* Now, either using your browser or [[curl]], check that you can reach all of the endpoints defined in the Ingress:<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null cheeses.all/cheddar/ # Should return '200'<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null cheeses.all/stilton/ # Should return '200'<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null myminikube.info # Should return '200'<br />
<br />
* You can also see the Nginx logs for the above requests with:<br />
$ kubectl --namespace kube-system logs \<br />
--selector app.kubernetes.io/name=nginx-ingress-controller<br />
<br />
* You can also view the Nginx configuration file (and the settings created by the above Ingress) with:<br />
$ NGINX_POD=$(kubectl --namespace kube-system get pods \<br />
--selector app.kubernetes.io/name=nginx-ingress-controller \<br />
--output jsonpath='{.items[0].metadata.name}')<br />
$ kubectl --namespace kube-system exec -it ${NGINX_POD} -- cat /etc/nginx/nginx.conf<br />
<br />
* Get the version of the Nginx Ingress controller installed:<br />
<pre><br />
$ kubectl --namespace kube-system exec -it ${NGINX_POD} -- /nginx-ingress-controller --version<br />
-------------------------------------------------------------------------------<br />
NGINX Ingress controller<br />
Release: 0.19.0<br />
Build: git-05025d6<br />
Repository: https://github.com/kubernetes/ingress-nginx.git<br />
-------------------------------------------------------------------------------<br />
</pre><br />
<br />
==Kubectl==<br />
<br />
<code>kubectl</code> controls the Kubernetes cluster manager.<br />
<br />
* View your current configuration:<br />
$ kubectl config view<br />
<br />
* Switch between clusters:<br />
$ kubectl config use-context <context_name><br />
<br />
* Remove a cluster:<br />
$ kubectl config unset contexts.<context_name><br />
$ kubectl config unset users.<user_name><br />
$ kubectl config unset clusters.<cluster_name><br />
<br />
* Sort Pods by age:<br />
$ kubectl get pods --sort-by=.metadata.creationTimestamp<br />
$ kubectl get pods --all-namespaces --sort-by=.metadata.creationTimestamp<br />
<br />
* Backup all primitives deployed in a given k8s cluster:<br />
<pre><br />
$ kubectl api-resources --verbs=list --namespaced -o name \<br />
| xargs -n1 -I{} bash -c "kubectl get {} --all-namespaces -oyaml && echo ---" \<br />
> k8s_backup.yaml<br />
</pre><br />
<br />
===kubectl explain===<br />
<br />
;List the fields for supported resources.<br />
<br />
* Get the documentation of a resource (aka "kind") and its fields:<br />
<pre><br />
$ kubectl explain deployment<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
DESCRIPTION:<br />
Deployment enables declarative updates for Pods and ReplicaSets.<br />
<br />
FIELDS:<br />
apiVersion <string><br />
APIVersion defines the versioned schema of this representation of an<br />
object. Servers should convert recognized schemas to the latest internal<br />
value, and may reject unrecognized values. More info:<br />
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources<br />
<br />
kind <string><br />
Kind is a string value representing the REST resource this object<br />
represents. Servers may infer this from the endpoint the client submits<br />
requests to. Cannot be updated. In CamelCase. More info:<br />
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds<br />
<br />
metadata <Object><br />
Standard object metadata.<br />
<br />
spec <Object><br />
Specification of the desired behavior of the Deployment.<br />
<br />
status <Object><br />
Most recently observed status of the Deployment<br />
</pre><br />
<br />
* Get a list of all the resource types and their latest supported version:<br />
<pre><br />
$ for kind in $(kubectl api-resources | tail -n +2 | awk '{print $1}'); do<br />
kubectl explain ${kind};<br />
done | grep -E "^KIND:|^VERSION:"<br />
<br />
KIND: Binding<br />
VERSION: v1<br />
KIND: ComponentStatus<br />
VERSION: v1<br />
KIND: ConfigMap<br />
VERSION: v1<br />
...<br />
</pre><br />
<br />
* Get a list of ''all'' allowable fields for a given primitive:<br />
<pre><br />
$ kubectl explain deployment --recursive | head<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
DESCRIPTION:<br />
Deployment enables declarative updates for Pods and ReplicaSets.<br />
<br />
FIELDS:<br />
apiVersion <string><br />
kind <string><br />
metadata <Object><br />
</pre><br />
<br />
* Get documentation ("man page"-style) for a given field in a given primitive:<br />
<pre><br />
$ kubectl explain deployment.status.availableReplicas<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
FIELD: availableReplicas <integer><br />
<br />
DESCRIPTION:<br />
Total number of available pods (ready for at least minReadySeconds)<br />
targeted by this deployment.<br />
</pre><br />
<br />
===Merge kubeconfig files===<br />
<br />
* Reference which kubeconfig files you wish to merge:<br />
$ export KUBECONFIG=$HOME/.kube/dev.yaml:$HOME/.kube/prod.yaml<br />
<br />
* Flatten them:<br />
$ kubectl config view --flatten >> $HOME/.kube/config<br />
<br />
* Unset:<br />
$ unset KUBECONFIG<br />
<br />
Merge complete.<br />
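<br />
* To verify, list the contexts now present in the merged file:<br />
$ kubectl config get-contexts<br />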
<br />
==Namespaces==<br />
<br />
See: [https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ Namespaces] in the official documentation.<br />
<br />
; Create a Namespace<br />
<br />
<pre><br />
apiVersion: v1<br />
kind: Namespace<br />
metadata:<br />
  name: dev<br />
</pre><br />
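<br />
The same Namespace can also be created imperatively and then inspected:<br />
$ kubectl create namespace dev<br />
$ kubectl get namespaces<br />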
<br />
==Pods==<br />
<br />
; Create a Pod that has an Init Container<br />
<br />
In this example, I will create a Pod that has one application Container and one Init Container. The init container runs to completion before the application container starts.<br />
<br />
<pre><br />
$ cat << EOF >init-demo.yml<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: init-demo<br />
  labels:<br />
    app: demo<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx<br />
    ports:<br />
    - containerPort: 80<br />
    volumeMounts:<br />
    - name: workdir<br />
      mountPath: /usr/share/nginx/html<br />
  # These containers are run during pod initialization<br />
  initContainers:<br />
  - name: install<br />
    image: busybox<br />
    command:<br />
    - wget<br />
    - "-O"<br />
    - "/work-dir/index.html"<br />
    - https://example.com<br />
    volumeMounts:<br />
    - name: workdir<br />
      mountPath: "/work-dir"<br />
  dnsPolicy: Default<br />
  volumes:<br />
  - name: workdir<br />
    emptyDir: {}<br />
EOF<br />
</pre><br />
<br />
The above Pod YAML will first create the init container from the busybox image, which downloads the HTML of the example.com website and saves it to a file (<code>index.html</code>) on the Pod volume called "workdir". After the init container completes, the Nginx container starts and serves that <code>index.html</code> on port 80 (the volume mount places the file at <code>/usr/share/nginx/html/index.html</code> inside the Nginx container).<br />
<br />
* Now, create this Pod:<br />
$ kubectl create --validate -f init-demo.yml<br />
<br />
* Create a Service:<br />
<pre><br />
$ cat << EOF >example.yml<br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: example<br />
spec:<br />
  ports:<br />
  - port: 8000<br />
    targetPort: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: demo<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f example.yml<br />
<br />
* Check that we can get the header of <nowiki>https://example.com</nowiki>:<br />
$ curl -sI $(kubectl get svc/example -o jsonpath='{.spec.clusterIP}'):8000 | grep ^HTTP<br />
HTTP/1.1 200 OK<br />
<br />
==Deployments==<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Deployment]'' controller provides declarative updates for Pods and ReplicaSets.<br />
<br />
You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.<br />
<br />
; Creating a Deployment<br />
<br />
The following is an example of a Deployment. It creates a ReplicaSet to bring up three [https://hub.docker.com/_/nginx/ Nginx] Pods:<br />
<pre><br />
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment<br />
  labels:<br />
    app: nginx<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />
<br />
* Check the syntax of the Deployment (YAML):<br />
$ kubectl create -f nginx-deployment.yml --dry-run<br />
deployment.apps/nginx-deployment created (dry run)<br />
<br />
* Create the Deployment:<br />
$ kubectl create --record -f nginx-deployment.yml <br />
deployment "nginx-deployment" created<br />
Note: By appending <code>--record</code> to the above command, we are telling the API to record the current command in the annotations of the created or updated resource. This is useful for future review, such as investigating which commands were executed in each Deployment revision.<br />
<br />
* Get information about our Deployment:<br />
$ kubectl get deployments<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment 3 3 3 3 24s<br />
<br />
$ kubectl describe deployment/nginx-deployment<br />
<pre><br />
Name: nginx-deployment<br />
Namespace: default<br />
CreationTimestamp: Tue, 30 Jan 2018 23:28:43 +0000<br />
Labels: app=nginx<br />
Annotations: deployment.kubernetes.io/revision=1<br />
kubernetes.io/change-cause=kubectl create --record=true --filename=nginx-deployment.yml<br />
Selector: app=nginx<br />
Replicas: 3 desired | 3 updated | 3 total | 0 available | 3 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 25% max unavailable, 25% max surge<br />
Pod Template:<br />
Labels: app=nginx<br />
Containers:<br />
nginx:<br />
Image: nginx:1.7.9<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Conditions:<br />
Type Status Reason<br />
---- ------ ------<br />
Available False MinimumReplicasUnavailable<br />
Progressing True ReplicaSetUpdated<br />
OldReplicaSets: <none><br />
NewReplicaSet: nginx-deployment-6c54bd5869 (3/3 replicas created)<br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal ScalingReplicaSet 28s deployment-controller Scaled up replica set nginx-deployment-6c54bd5869 to 3<br />
</pre><br />
<br />
* Get information about the ReplicaSet created by the above Deployment:<br />
$ kubectl get rs<br />
NAME DESIRED CURRENT READY AGE<br />
nginx-deployment-6c54bd5869 3 3 3 3m<br />
<br />
$ kubectl describe rs/nginx-deployment-6c54bd5869<br />
<pre><br />
Name: nginx-deployment-6c54bd5869<br />
Namespace: default<br />
Selector: app=nginx,pod-template-hash=2710681425<br />
Labels: app=nginx<br />
pod-template-hash=2710681425<br />
Annotations: deployment.kubernetes.io/desired-replicas=3<br />
deployment.kubernetes.io/max-replicas=4<br />
deployment.kubernetes.io/revision=1<br />
kubernetes.io/change-cause=kubectl create --record=true --filename=nginx-deployment.yml<br />
Controlled By: Deployment/nginx-deployment<br />
Replicas: 3 current / 3 desired<br />
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed<br />
Pod Template:<br />
Labels: app=nginx<br />
pod-template-hash=2710681425<br />
Containers:<br />
nginx:<br />
Image: nginx:1.7.9<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-k9mh4<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-pphjt<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-n4fj5<br />
</pre><br />
<br />
* Get information about the Pods created by this Deployment:<br />
$ kubectl get pods --show-labels -l app=nginx -o wide<br />
NAME READY STATUS RESTARTS AGE IP NODE LABELS<br />
nginx-deployment-6c54bd5869-k9mh4 1/1 Running 0 5m 10.244.1.5 k8s.worker1.local app=nginx,pod-template-hash=2710681425<br />
nginx-deployment-6c54bd5869-n4fj5 1/1 Running 0 5m 10.244.1.6 k8s.worker2.local app=nginx,pod-template-hash=2710681425<br />
nginx-deployment-6c54bd5869-pphjt 1/1 Running 0 5m 10.244.1.7 k8s.worker3.local app=nginx,pod-template-hash=2710681425<br />
<br />
;Updating a Deployment<br />
<br />
Note: A Deployment's rollout is triggered if, and only if, the Deployment's pod template (that is, <code>.spec.template</code>) is changed (for example, if the labels or container images of the template are updated). Other updates, such as scaling the Deployment, do not trigger a rollout.<br />
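<br />
For example, scaling the Deployment changes only <code>.spec.replicas</code>, so it does not create a new revision:<br />
$ kubectl scale deployment/nginx-deployment --replicas=5<br />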
<br />
Suppose that we want to update the Nginx Pods in the above Deployment to use the <code>nginx:1.9.1</code> image instead of the <code>nginx:1.7.9</code> image.<br />
<br />
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
deployment "nginx-deployment" image updated<br />
<br />
Alternatively, we can edit the Deployment and change <code>.spec.template.spec.containers[0].image</code> from <code>nginx:1.7.9</code> to <code>nginx:1.9.1</code>:<br />
<br />
$ kubectl edit deployment/nginx-deployment<br />
deployment "nginx-deployment" edited<br />
<br />
* Check on the rollout status:<br />
<pre><br />
$ kubectl rollout status deployment/nginx-deployment<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 old replicas are pending termination...<br />
Waiting for rollout to finish: 1 old replicas are pending termination...<br />
deployment "nginx-deployment" successfully rolled out<br />
</pre><br />
<br />
* Get information about the updated Deployment:<br />
$ kubectl get deploy<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment 3 3 3 3 18m<br />
<br />
$ kubectl get rs<br />
NAME DESIRED CURRENT READY AGE<br />
nginx-deployment-5964dfd755 3 3 3 1m # <- new ReplicaSet using nginx:1.9.1<br />
nginx-deployment-6c54bd5869 0 0 0 17m # <- old ReplicaSet using nginx:1.7.9<br />
<br />
$ kubectl rollout history deployment/nginx-deployment<br />
deployments "nginx-deployment"<br />
REVISION CHANGE-CAUSE<br />
1 kubectl create --record=true --filename=nginx-deployment.yml<br />
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
<br />
$ kubectl rollout history deployment/nginx-deployment --revision=2<br />
<br />
deployments "nginx-deployment" with revision #2<br />
Pod Template:<br />
Labels: app=nginx<br />
pod-template-hash=1520898311<br />
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
Containers:<br />
nginx:<br />
Image: nginx:1.9.1<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
<br />
; Rolling back to a previous revision<br />
<br />
Undo the current rollout and roll back to the previous revision:<br />
$ kubectl rollout undo deployment/nginx-deployment<br />
deployment "nginx-deployment" rolled back<br />
<br />
Alternatively, you can roll back to a specific revision by specifying it with <code>--to-revision</code>:<br />
$ kubectl rollout undo deployment/nginx-deployment --to-revision=1<br />
deployment "nginx-deployment" rolled back<br />
<br />
==Volume management==<br />
On-disk files in a container are ephemeral, which presents some problems for non-trivial applications when running in containers. First, when a container crashes, kubelet will restart it, but the files will be lost (i.e., the container starts with a clean state). Second, when running containers together in a Pod it is often necessary to share files between those containers. The Kubernetes ''[https://kubernetes.io/docs/concepts/storage/volumes/ Volumes]'' abstraction solves both of these problems. A Volume is essentially a directory backed by a storage medium. The storage medium and its content are determined by the Volume Type.<br />
<br />
In Kubernetes, a Volume is attached to a Pod and shared among the containers of that Pod. The Volume has the same life span as the Pod, and it outlives the containers of the Pod &mdash; this allows data to be preserved across container restarts.<br />
<br />
Kubernetes resolves the problem of persistent storage with the Persistent Volume subsystem, which provides APIs for users and administrators to manage and consume storage. To manage the Volume, it uses the PersistentVolume (PV) API resource type, and to consume it, it uses the PersistentVolumeClaim (PVC) API resource type.<br />
<br />
; [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes PersistentVolume] (PV) : a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.<br />
<br />
; [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims PersistentVolumeClaim] (PVC) : a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Persistent Volume Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).<br />
<br />
A Persistent Volume is network-attached storage in the cluster, provisioned by the administrator.<br />
<br />
Persistent Volumes can be provisioned statically by the administrator, or dynamically, based on the StorageClass resource. A StorageClass contains pre-defined provisioners and parameters to create a Persistent Volume.<br />
<br />
A PersistentVolumeClaim (PVC) is a request for storage by a user. Users request Persistent Volume resources based on size, access modes, etc. Once a suitable Persistent Volume is found, it is bound to a Persistent Volume Claim. After a successful bind, the Persistent Volume Claim resource can be used in a Pod. Once a user finishes its work, the attached Persistent Volumes can be released. The underlying Persistent Volumes can then be reclaimed and recycled for future usage. See [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims Persistent Volumes] for details.<br />
<br />
;Access Modes<br />
* Each of the following access modes ''must'' be supported by the storage resource provider (e.g., NFS, AWS EBS, etc.) for it to be usable.<br />
* ReadWriteOnce (RWO) &mdash; volume can be mounted as read/write by one node only.<br />
* ReadOnlyMany (ROX) &mdash; volume can be mounted read-only by many nodes.<br />
* ReadWriteMany (RWX) &mdash; volume can be mounted read/write by many nodes.<br />
A volume can only be mounted using one access mode at a time, regardless of the modes that are supported.<br />
<br />
; Example #1 - Using Host Volumes<br />
As an example of how to use volumes, we can modify our previous "webserver" Deployment (see above) to look like the following:<br />
<br />
$ cat webserver.yml<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: webserver<br />
spec:<br />
  replicas: 3<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: webserver<br />
    spec:<br />
      containers:<br />
      - name: webserver<br />
        image: nginx:alpine<br />
        ports:<br />
        - containerPort: 80<br />
        volumeMounts:<br />
        - name: hostvol<br />
          mountPath: /usr/share/nginx/html<br />
      volumes:<br />
      - name: hostvol<br />
        hostPath:<br />
          path: /home/docker/vol<br />
</pre><br />
<br />
And use the same Service:<br />
$ cat webserver-svc.yml<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: web-service<br />
  labels:<br />
    run: web-service<br />
spec:<br />
  type: NodePort<br />
  ports:<br />
  - port: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: webserver<br />
</pre><br />
<br />
Then create the deployment and service:<br />
$ kubectl create -f webserver.yml<br />
$ kubectl create -f webserver-svc.yml<br />
<br />
Then, SSH into the Minikube VM and run the following commands:<br />
$ minikube ssh<br />
minikube> mkdir -p /home/docker/vol<br />
minikube> echo "Christoph testing" > /home/docker/vol/index.html<br />
minikube> exit<br />
<br />
Get the webserver IP and port:<br />
$ minikube ip<br />
192.168.99.100<br />
$ kubectl get svc/web-service -o json | jq '.spec.ports[].nodePort'<br />
32610<br />
# OR<br />
$ minikube service web-service --url<br />
<nowiki>http://192.168.99.100:32610</nowiki><br />
<br />
$ curl <nowiki>http://192.168.99.100:32610</nowiki><br />
Christoph testing<br />
<br />
; Example #2 - Using NFS<br />
<br />
* First, create a server to host your NFS server (e.g., <code>`sudo apt-get install -y nfs-kernel-server`</code>).<br />
* On your NFS server, do the following:<br />
$ mkdir -p /var/nfs/general<br />
$ cat << EOF >>/etc/exports<br />
/var/nfs/general 10.100.1.2(rw,sync,no_subtree_check) 10.100.1.3(rw,sync,no_subtree_check) 10.100.1.4(rw,sync,no_subtree_check)<br />
EOF<br />
where the <code>10.x</code> IPs are the private IPs of your k8s nodes (both Master and Worker nodes).<br />
* Make sure to install <code>nfs-common</code> on each of the k8s nodes that will be connecting to the NFS server.<br />
<br />
Now, on the k8s Master node, create a Persistent Volume (PV) and Persistent Volume Claim (PVC):<br />
<br />
* Create a Persistent Volume (PV):<br />
<pre><br />
$ cat << EOF >pv.yml<br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
  name: mypv<br />
spec:<br />
  capacity:<br />
    storage: 1Gi<br />
  volumeMode: Filesystem<br />
  accessModes:<br />
  - ReadWriteMany<br />
  persistentVolumeReclaimPolicy: Recycle<br />
  nfs:<br />
    path: /var/nfs/general<br />
    server: 10.100.1.10 # NFS Server's private IP<br />
    readOnly: false<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f pv.yml<br />
$ kubectl get pv<br />
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE<br />
mypv 1Gi RWX Recycle Available<br />
* Create a Persistent Volume Claim (PVC):<br />
<pre><br />
$ cat << EOF >pvc.yml<br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
  name: nfs-pvc<br />
spec:<br />
  accessModes:<br />
  - ReadWriteMany<br />
  resources:<br />
    requests:<br />
      storage: 1Gi<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f pvc.yml<br />
$ kubectl get pvc<br />
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE<br />
nfs-pvc Bound mypv 1Gi RWX<br />
$ kubectl get pv<br />
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE<br />
mypv 1Gi RWX Recycle Bound default/nfs-pvc 11m<br />
<br />
* Create a Pod:<br />
<pre><br />
$ cat << EOF >nfs-pod.yml<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nfs-pod<br />
  labels:<br />
    name: nfs-pod<br />
spec:<br />
  containers:<br />
  - name: nfs-ctn<br />
    image: busybox<br />
    command:<br />
    - sleep<br />
    - "3600"<br />
    volumeMounts:<br />
    - name: nfsvol<br />
      mountPath: /tmp<br />
  restartPolicy: Always<br />
  securityContext:<br />
    fsGroup: 65534<br />
    runAsUser: 65534<br />
  volumes:<br />
  - name: nfsvol<br />
    persistentVolumeClaim:<br />
      claimName: nfs-pvc<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f nfs-pod.yml<br />
$ kubectl get pods -o wide<br />
NAME READY STATUS RESTARTS AGE IP NODE<br />
nfs-pod 1/1 Running 0 1m 10.244.2.22 k8s.worker01.local<br />
<br />
* Get a shell from the <code>nfs-pod</code> Pod:<br />
$ kubectl exec -it nfs-pod -- sh<br />
/ $ df -h<br />
Filesystem Size Used Available Use% Mounted on<br />
172.31.119.58:/var/nfs/general<br />
19.3G 1.8G 17.5G 9% /tmp<br />
...<br />
/ $ touch /tmp/this-is-from-the-pod<br />
<br />
* On the NFS server:<br />
$ ls -l /var/nfs/general/<br />
total 0<br />
-rw-r--r-- 1 nobody nogroup 0 Jan 18 23:32 this-is-from-the-pod<br />
<br />
It works!<br />
<br />
==ConfigMaps and Secrets==<br />
While deploying an application, we may need to pass runtime parameters such as configuration details, passwords, etc. For example, let's assume we need to deploy ten different applications for our customers, and, for each customer, we just need to change the name of the company in the UI. Instead of creating ten different Docker images, one per customer, we can use a single template image and pass the customers' names as runtime parameters. In such cases, we can use the ConfigMap API resource. Similarly, when we want to pass sensitive information, we can use the Secret API resource. Think ''Secrets'' (for confidential data) and ''ConfigMaps'' (for non-confidential data).<br />
<br />
[https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/ ConfigMaps] allow you to decouple configuration artifacts from image content to keep containerized applications portable. Using ConfigMaps, we can pass configuration details as key-value pairs, which can be later consumed by Pods or any other system components, such as controllers. We can create ConfigMaps in two ways:<br />
<br />
* From literal values; and<br />
* From files.<br />
<br />
<br />
;ConfigMaps<br />
<br />
* Create a ConfigMap:<br />
$ kubectl create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2<br />
configmap "my-config" created<br />
$ kubectl get configmaps my-config -o yaml<br />
<pre><br />
apiVersion: v1<br />
data:<br />
  key1: value1<br />
  key2: value2<br />
kind: ConfigMap<br />
metadata:<br />
  creationTimestamp: 2018-01-11T23:57:44Z<br />
  name: my-config<br />
  namespace: default<br />
  resourceVersion: "117110"<br />
  selfLink: /api/v1/namespaces/default/configmaps/my-config<br />
  uid: 37a43e39-f72b-11e7-8370-08002721601f<br />
</pre><br />
$ kubectl describe configmap/my-config<br />
<pre><br />
Name: my-config<br />
Namespace: default<br />
Labels: <none><br />
Annotations: <none><br />
<br />
Data<br />
====<br />
key2:<br />
----<br />
value2<br />
key1:<br />
----<br />
value1<br />
Events: <none><br />
</pre><br />
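<br />
* A ConfigMap can also be created from the contents of a file with <code>--from-file</code> (the file name becomes the key). A minimal sketch, assuming a hypothetical <code>app.properties</code> file:<br />
<pre><br />
$ cat << EOF >app.properties<br />
color=blue<br />
mode=production<br />
EOF<br />
$ kubectl create configmap app-config --from-file=app.properties<br />
</pre><br />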
<br />
; Create a ConfigMap from a configuration file<br />
<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: ConfigMap<br />
metadata:<br />
  name: customer1<br />
data:<br />
  TEXT1: Customer1_Company<br />
  TEXT2: Welcomes You<br />
  COMPANY: Customer1 Company Technology, LLC.<br />
EOF<br />
</pre><br />
<br />
We can get the values of the given key as environment variables inside a Pod. In the following example, while creating the Deployment, we are assigning values for environment variables from the customer1 ConfigMap:<br />
<pre><br />
....<br />
  containers:<br />
  - name: my-app<br />
    image: foobar<br />
    env:<br />
    - name: MONGODB_HOST<br />
      value: mongodb<br />
    - name: TEXT1<br />
      valueFrom:<br />
        configMapKeyRef:<br />
          name: customer1<br />
          key: TEXT1<br />
    - name: TEXT2<br />
      valueFrom:<br />
        configMapKeyRef:<br />
          name: customer1<br />
          key: TEXT2<br />
    - name: COMPANY<br />
      valueFrom:<br />
        configMapKeyRef:<br />
          name: customer1<br />
          key: COMPANY<br />
....<br />
</pre><br />
With the above, we will get the <code>TEXT1</code> environment variable set to <code>Customer1_Company</code>, <code>TEXT2</code> environment variable set to <code>Welcomes You</code>, and so on.<br />
<br />
We can also mount a ConfigMap as a Volume inside a Pod. For each key, we will see a file in the mount path and the content of that file become the respective key's value. For details, see [https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#adding-configmap-data-to-a-volume here].<br />
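<br />
A minimal sketch of the Volume approach, reusing the <code>customer1</code> ConfigMap from above (each key appears as a file under <code>/etc/config</code>):<br />
<pre><br />
....<br />
  containers:<br />
  - name: my-app<br />
    image: foobar<br />
    volumeMounts:<br />
    - name: config-vol<br />
      mountPath: /etc/config<br />
  volumes:<br />
  - name: config-vol<br />
    configMap:<br />
      name: customer1<br />
....<br />
</pre><br />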
<br />
You can also use ConfigMaps to configure your cluster to use, as an example, 8.8.8.8 and 8.8.4.4 as its upstream DNS server:<br />
<pre><br />
kind: ConfigMap<br />
apiVersion: v1<br />
metadata:<br />
  name: kube-dns<br />
  namespace: kube-system<br />
data:<br />
  upstreamNameservers: |<br />
    ["8.8.8.8", "8.8.4.4"]<br />
</pre><br />
<br />
; Secrets<br />
<br />
Objects of type [https://kubernetes.io/docs/concepts/configuration/secret/ Secret] are intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a Secret is safer and more flexible than putting it verbatim in a pod definition or in a docker image.<br />
<br />
As an example, assume that we have a Wordpress blog application, in which our <code>wordpress</code> frontend connects to the [[MySQL]] database backend using a password. While creating the Deployment for <code>wordpress</code>, we can put the MySQL password in the Deployment's YAML file, but the password would not be protected. The password would be available to anyone who has access to the configuration file.<br />
<br />
In situations such as the one we just mentioned, the Secret object can help. With Secrets, we can share sensitive information like passwords, tokens, or keys in the form of key-value pairs, similar to ConfigMaps; thus, we can control how the information in a Secret is used, reducing the risk for accidental exposures. In Deployments or other system components, the Secret object is ''referenced'', without exposing its content.<br />
<br />
It is important to keep in mind that the Secret data is stored as plain text inside etcd. Administrators must limit the access to the API Server and etcd.<br />
<br />
To create a Secret using the <code>`kubectl create secret`</code> command, we need to first create a file with a password, and then pass it as an argument.<br />
<br />
* Create a file with your MySQL password:<br />
$ echo mysqlpasswd | tr -d '\n' > password.txt<br />
<br />
* Create the ''Secret'':<br />
$ kubectl create secret generic mysql-passwd --from-file=password.txt<br />
$ kubectl describe secret/mysql-passwd<br />
<pre><br />
Name: mysql-passwd<br />
Namespace: default<br />
Labels: <none><br />
Annotations: <none><br />
<br />
Type: Opaque<br />
<br />
Data<br />
====<br />
password.txt: 11 bytes<br />
</pre><br />
<br />
We can also create a Secret manually, using a YAML configuration file. In a Secret manifest, every value under <code>data</code> must be base64-encoded. If we want to have a configuration file for our Secret, we must first get the base64 encoding of our password:<br />
<br />
$ cat password.txt | base64<br />
bXlzcWxwYXNzd2Q=<br />
<br />
and then use it in the configuration file:<br />
<pre><br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
  name: mysql-passwd<br />
type: Opaque<br />
data:<br />
  password: bXlzcWxwYXNzd2Q=<br />
</pre><br />
Note that base64 encoding does not do any encryption and anyone can easily decode it:<br />
<br />
$ echo "bXlzcWxwYXNzd2Q=" | base64 -d # => mysqlpasswd<br />
<br />
Therefore, make sure you do not commit a Secret's configuration file in the source code.<br />
<br />
We can get Secrets to be used by containers in a Pod by mounting them as data volumes, or by exposing them as environment variables.<br />
<br />
We can reference a Secret and assign the value of its key as an environment variable (<code>WORDPRESS_DB_PASSWORD</code>):<br />
<pre><br />
.....<br />
spec:<br />
  containers:<br />
  - image: wordpress:4.7.3-apache<br />
    name: wordpress<br />
    env:<br />
    - name: WORDPRESS_DB_HOST<br />
      value: wordpress-mysql<br />
    - name: WORDPRESS_DB_PASSWORD<br />
      valueFrom:<br />
        secretKeyRef:<br />
          name: mysql-passwd<br />
          key: password.txt<br />
.....<br />
</pre><br />
<br />
Or, we can also mount a Secret as a Volume inside a Pod. A file would be created for each key mentioned in the Secret, whose content would be the respective value. See [https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod here] for details.<br />
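<br />
A minimal sketch of the Volume approach, reusing the <code>mysql-passwd</code> Secret from above (its <code>password.txt</code> key appears as a file under the mount path):<br />
<pre><br />
.....<br />
spec:<br />
  containers:<br />
  - image: wordpress:4.7.3-apache<br />
    name: wordpress<br />
    volumeMounts:<br />
    - name: secret-vol<br />
      mountPath: /etc/secret<br />
      readOnly: true<br />
  volumes:<br />
  - name: secret-vol<br />
    secret:<br />
      secretName: mysql-passwd<br />
.....<br />
</pre><br />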
<br />
==Ingress==<br />
Among the ServiceTypes mentioned earlier, NodePort and LoadBalancer are the most often used. For the LoadBalancer ServiceType, we need to have the support from the underlying infrastructure. Even after having the support, we may not want to use it for every Service, as LoadBalancer resources are limited and they can increase costs significantly. Managing the NodePort ServiceType can also be tricky at times, as we need to keep updating our proxy settings and keep track of the assigned ports. In this section, we will explore the Ingress API object, which is another method we can use to access our applications from the external world.<br />
<br />
An ''[https://kubernetes.io/docs/concepts/services-networking/ingress/ Ingress]'' is a collection of rules that allow inbound connections to reach the cluster Services. With Services, routing rules are attached to a given Service. They exist for as long as the Service exists. If we can somehow decouple the routing rules from the application, we can then update our application without worrying about its external access. This can be done using the Ingress resource. Ingress can provide load balancing, SSL/TLS termination, and name-based virtual hosting and/or routing.<br />
<br />
To allow the inbound connection to reach the cluster Services, Ingress configures a Layer 7 HTTP load balancer for Services and provides the following:<br />
<br />
* TLS (Transport Layer Security)<br />
* Name-based virtual hosting <br />
* Path-based routing<br />
* Custom rules.<br />
<br />
With Ingress, users do not connect directly to a Service. Users reach the Ingress endpoint and, from there, the request is forwarded to the respective Service. You can see an example Ingress definition below:<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
  name: web-ingress<br />
spec:<br />
  rules:<br />
  - host: blue.example.com<br />
    http:<br />
      paths:<br />
      - backend:<br />
          serviceName: blue-service<br />
          servicePort: 80<br />
  - host: green.example.com<br />
    http:<br />
      paths:<br />
      - backend:<br />
          serviceName: green-service<br />
          servicePort: 80<br />
</pre><br />
<br />
According to the example just provided, user requests to both <code>blue.example.com</code> and <code>green.example.com</code> would go to the same Ingress endpoint and, from there, be forwarded to <code>blue-service</code> and <code>green-service</code>, respectively. This is an example of a Name-Based Virtual Hosting Ingress rule.<br />
<br />
We can also have Fan Out Ingress rules, in which we send requests like <code>example.com/blue</code> and <code>example.com/green</code>, which would be forwarded to <code>blue-service</code> and <code>green-service</code>, respectively.<br />
<br />
To secure an Ingress, you must create a ''Secret''. The TLS secret must contain keys named <code>tls.crt</code> and <code>tls.key</code>, which contain the certificate and private key to use for TLS.<br />
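<br />
For example, assuming certificate and key files named <code>tls.crt</code> and <code>tls.key</code> already exist, the Secret could be created and referenced as follows (a sketch):<br />
<pre><br />
$ kubectl create secret tls web-tls --cert=tls.crt --key=tls.key<br />
</pre><br />
and then, in the Ingress spec:<br />
<pre><br />
spec:<br />
  tls:<br />
  - hosts:<br />
    - blue.example.com<br />
    secretName: web-tls<br />
  ...<br />
</pre><br />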
<br />
The Ingress resource does not do any request forwarding by itself. All of the magic is done using the ''Ingress Controller''.<br />
<br />
; Ingress Controller<br />
<br />
An Ingress Controller is an application which watches the Master Node's API Server for changes in the Ingress resources and updates the Layer 7 load balancer accordingly. Kubernetes has different Ingress Controllers, and, if needed, we can also build our own. GCE L7 Load Balancer and Nginx Ingress Controller are examples of Ingress Controllers.<br />
<br />
Minikube v0.14.0 and above ships with the Nginx Ingress Controller as an add-on, which can be enabled by running the following command:<br />
<br />
$ minikube addons enable ingress<br />
<br />
Once the Ingress Controller is deployed, we can create an Ingress resource using the <code>kubectl create</code> command. For example, if we create an <code>example-ingress.yml</code> file with the content above, we can use the following command to create an Ingress resource:<br />
<br />
$ kubectl create -f example-ingress.yml<br />
<br />
With the Ingress resource we just created, we should now be able to access the blue-service and green-service services using the blue.example.com and green.example.com URLs. As our current setup is on Minikube, we need to point those hostnames at Minikube's IP in the hosts file on our workstation:<br />
<br />
$ cat /etc/hosts<br />
127.0.0.1 localhost<br />
::1 localhost<br />
192.168.99.100 blue.example.com green.example.com <br />
<br />
Once this is done, we can now open blue.example.com and green.example.com in a browser and access the application.<br />
<br />
==Labels and Selectors==<br />
''[https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ Labels]'' are key-value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key-value labels defined. Each key must be unique for a given object.<br />
<pre><br />
"labels": {<br />
"key1" : "value1",<br />
"key2" : "value2"<br />
}<br />
</pre><br />
<br />
;Syntax and character set<br />
<br />
Labels are key-value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (<code>/</code>). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (<code>[a-z0-9A-Z]</code>) with dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (<code>.</code>), not longer than 253 characters in total, followed by a slash (<code>/</code>). If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (e.g. kube-scheduler, kube-controller-manager, kube-apiserver, kubectl, or other third-party automation) which add labels to end-user objects must specify a prefix. The <code>kubernetes.io/</code> prefix is reserved for Kubernetes core components.<br />
<br />
Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (<code>[a-z0-9A-Z]</code>) with dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between.<br />
<br />
;Label selectors<br />
<br />
Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).<br />
<br />
Via a label selector, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.<br />
<br />
The API currently supports two types of selectors: equality-based and set-based. A label selector can be made of multiple requirements which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical AND (<code>&&</code>) operator.<br />
<br />
An empty label selector (that is, one with zero requirements) selects every object in the collection.<br />
<br />
A null label selector (which is only possible for optional selector fields) selects no objects.<br />
<br />
Note: the label selectors of two controllers must not overlap within a namespace, otherwise they will fight with each other.<br />
Note that labels are not restricted to pods. You can apply them to all sorts of objects, such as nodes or services.<br />
<br />
;Examples<br />
<br />
* Label a given node:<br />
$ kubectl label node k8s.worker1.local network=gigabit<br />
<br />
* With ''Equality-based'', one may write:<br />
$ kubectl get pods -l environment=production,tier=frontend<br />
<br />
* Using ''set-based'' requirements:<br />
$ kubectl get pods -l 'environment in (production),tier in (frontend)'<br />
<br />
* Implement the OR operator on values:<br />
$ kubectl get pods -l 'environment in (production, qa)'<br />
<br />
* Restricting negative matching via exists operator:<br />
$ kubectl get pods -l 'environment,environment notin (frontend)'<br />
<br />
* Show the current labels on your pods:<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d <none><br />
nfs-pod 1/1 Running 16 6d name=nfs-pod<br />
<br />
* Add a label to an already running/existing pod:<br />
$ kubectl label pods busybox owner=christoph<br />
pod "busybox" labeled<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d owner=christoph<br />
nfs-pod 1/1 Running 16 6d name=nfs-pod<br />
<br />
* Select a pod by its label:<br />
$ kubectl get pods --selector owner=christoph<br />
#~OR~<br />
$ kubectl get pods -l owner=christoph<br />
NAME READY STATUS RESTARTS AGE<br />
busybox 1/1 Running 25 9d<br />
<br />
* Delete/remove a given label from a given pod:<br />
$ kubectl label pod busybox owner-<br />
pod "busybox" labeled<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d <none><br />
<br />
* Get all pods that belong to either the <code>production</code> ''or'' the <code>development</code> environment (the <code>in</code> operator matches any of the listed values):<br />
$ kubectl get pods -l 'env in (production, development)'<br />
<br />
; Using Labels to select a Node on which to schedule a Pod:<br />
<br />
* Label a Node that uses an SSD as its primary disk:<br />
$ kubectl label node k8s.worker1.local hdd=ssd<br />
<br />
<pre><br />
$ cat << EOF >busybox.yml<br />
kind: Pod<br />
apiVersion: v1<br />
metadata:<br />
  name: busybox<br />
  namespace: default<br />
spec:<br />
  containers:<br />
  - name: busybox<br />
    image: busybox<br />
    command:<br />
    - sleep<br />
    - "300"<br />
    imagePullPolicy: IfNotPresent<br />
  restartPolicy: Always<br />
  nodeSelector:<br />
    hdd: ssd<br />
EOF<br />
</pre><br />
<br />
==Annotations==<br />
With ''[https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ Annotations]'', we can attach arbitrary, non-identifying metadata to objects, in a key-value format:<br />
<br />
<pre><br />
"annotations": {<br />
"key1" : "value1",<br />
"key2" : "value2"<br />
}<br />
</pre><br />
The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels.<br />
<br />
In contrast to Labels, annotations are not used to identify and select objects. Annotations can be used to:<br />
<br />
* Store build/release IDs, which git branch, etc.<br />
* Phone numbers of persons responsible or directory entries specifying where such information can be found<br />
* Pointers to logging, monitoring, analytics, audit repositories, debugging tools, etc.<br />
* Etc.<br />
<br />
For example, while creating a Deployment, we can add a description like the one below:<br />
<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: webserver<br />
  annotations:<br />
    description: Deployment based PoC dates 12 January 2018<br />
....<br />
....<br />
</pre><br />
<br />
We can look at annotations while describing an object:<br />
<br />
<pre><br />
$ kubectl describe deployment webserver<br />
Name: webserver<br />
Namespace: default<br />
CreationTimestamp: Fri, 12 Jan 2018 13:18:23 -0800<br />
Labels: app=webserver<br />
Annotations: deployment.kubernetes.io/revision=1<br />
description=Deployment based PoC dates 12 January 2018<br />
...<br />
...<br />
</pre><br />
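<br />
Annotations can also be added to (or removed from) existing objects with <code>kubectl annotate</code>, analogous to <code>kubectl label</code>:<br />
$ kubectl annotate deployment webserver description="Deployment based PoC dates 12 January 2018"<br />
$ kubectl annotate deployment webserver description-  # remove the annotation<br />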
<br />
==Jobs and CronJobs==<br />
<br />
===Jobs===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#what-is-a-job Job]'' creates one or more pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the Job itself is complete. Deleting a Job will cleanup the pods it created.<br />
<br />
A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).<br />
<br />
A Job can also be used to run multiple Pods in parallel.<br />
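<br />
For example, adding <code>completions</code> and <code>parallelism</code> to a Job's spec runs up to two Pods at a time until six have completed successfully (a sketch of just the relevant fields):<br />
<pre><br />
spec:<br />
  completions: 6<br />
  parallelism: 2<br />
  ...<br />
</pre><br />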
<br />
; Example<br />
<br />
* Below is an example ''Job'' config. It computes π to 2000 places and prints it out. It takes around 10 seconds to complete.<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
  name: pi<br />
spec:<br />
  template:<br />
    spec:<br />
      containers:<br />
      - name: pi<br />
        image: perl<br />
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]<br />
      restartPolicy: Never<br />
  backoffLimit: 4<br />
</pre><br />
$ kubectl create -f ./job-pi.yml<br />
job "pi" created<br />
$ kubectl describe jobs/pi<br />
<pre><br />
Name: pi<br />
Namespace: default<br />
Selector: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
Labels: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
job-name=pi<br />
Annotations: <none><br />
Parallelism: 1<br />
Completions: 1<br />
Start Time: Fri, 12 Jan 2018 13:25:23 -0800<br />
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed<br />
Pod Template:<br />
Labels: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
job-name=pi<br />
Containers:<br />
pi:<br />
Image: perl<br />
Port: <none><br />
Command:<br />
perl<br />
-Mbignum=bpi<br />
-wle<br />
print bpi(2000)<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal SuccessfulCreate 8s job-controller Created pod: pi-rfvvw<br />
</pre><br />
<br />
* Get the result of the Job run (i.e., the value of π):<br />
$ pods=$(kubectl get pods --show-all --selector=job-name=pi --output=jsonpath={.items..metadata.name})<br />
$ echo $pods<br />
pi-rfvvw<br />
$ kubectl logs ${pods}<br />
3.1415926535897932384626433832795028841971693...<br />
<br />
===CronJobs===<br />
<br />
Support for creating ''Jobs'' at specified times/dates (i.e. cron) is available in Kubernetes 1.4. See [https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/ here] for details.<br />
<br />
Below is an example ''CronJob''. Every minute, it runs a simple Job that prints the current time and then echoes a "hello" string:<br />
<pre><br />
$ cat << EOF >cronjob.yml<br />
apiVersion: batch/v1beta1<br />
kind: CronJob<br />
metadata:<br />
  name: hello<br />
spec:<br />
  schedule: "*/1 * * * *"<br />
  jobTemplate:<br />
    spec:<br />
      template:<br />
        spec:<br />
          containers:<br />
          - name: hello<br />
            image: busybox<br />
            args:<br />
            - /bin/sh<br />
            - -c<br />
            - date; echo Hello from the Kubernetes cluster<br />
          restartPolicy: OnFailure<br />
EOF<br />
</pre><br />
<br />
$ kubectl create -f cronjob.yml<br />
cronjob "hello" created<br />
<br />
$ kubectl get cronjob hello<br />
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE<br />
hello */1 * * * * False 0 <none> 11s<br />
<br />
$ kubectl get jobs --watch<br />
NAME DESIRED SUCCESSFUL AGE<br />
hello-1515793140 1 1 7s<br />
<br />
$ kubectl get cronjob hello<br />
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE<br />
hello */1 * * * * False 0 22s 48s<br />
<br />
$ pods=$(kubectl get pods -a --selector=job-name=hello-1515793140 --output=jsonpath={.items..metadata.name})<br />
$ echo $pods<br />
hello-1515793140-plp8g<br />
<br />
$ kubectl logs $pods<br />
Fri Jan 12 21:39:07 UTC 2018<br />
Hello from the Kubernetes cluster<br />
<br />
* Cleanup<br />
$ kubectl delete cronjob hello<br />
<br />
==Quota Management==<br />
When there are many users sharing a given Kubernetes cluster, there is always a concern for fair usage. To address this concern, administrators can use the ''[https://kubernetes.io/docs/concepts/policy/resource-quotas/ ResourceQuota]'' object, which provides constraints that limit aggregate resource consumption per Namespace.<br />
<br />
We can have the following types of quotas per Namespace:<br />
<br />
* Compute Resource Quota: We can limit the total sum of compute resources (CPU, memory, etc.) that can be requested in a given Namespace.<br />
* Storage Resource Quota: We can limit the total sum of storage resources (PersistentVolumeClaims, requests.storage, etc.) that can be requested.<br />
* Object Count Quota: We can restrict the number of objects of a given type (pods, ConfigMaps, PersistentVolumeClaims, ReplicationControllers, Services, Secrets, etc.).<br />
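<br />
As a minimal sketch (with hypothetical limits), the following ResourceQuota caps the number of Pods and the aggregate compute requests within the <code>dev</code> Namespace:<br />
<pre><br />
apiVersion: v1<br />
kind: ResourceQuota<br />
metadata:<br />
  name: dev-quota<br />
  namespace: dev<br />
spec:<br />
  hard:<br />
    pods: "10"<br />
    requests.cpu: "4"<br />
    requests.memory: 8Gi<br />
</pre><br />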
<br />
==Daemon Sets==<br />
In some cases, such as collecting monitoring data from all nodes or running a storage daemon on all nodes, we need a specific type of Pod running on all nodes at all times. A ''[https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ DaemonSet]'' is the object that allows us to do just that.<br />
<br />
Whenever a node is added to the cluster, a Pod from a given DaemonSet is created on it. When the node dies, the respective Pods are garbage collected. If a DaemonSet is deleted, all Pods it created are deleted as well.<br />
<br />
Example DaemonSet:<br />
<pre><br />
kind: DaemonSet<br />
apiVersion: apps/v1<br />
metadata:<br />
  name: pause-ds<br />
spec:<br />
  selector:<br />
    matchLabels:<br />
      quiet: "pod"<br />
  template:<br />
    metadata:<br />
      labels:<br />
        quiet: pod<br />
    spec:<br />
      tolerations:<br />
      - key: node-role.kubernetes.io/master<br />
        effect: NoSchedule<br />
      containers:<br />
      - name: pause-container<br />
        image: k8s.gcr.io/pause:2.0<br />
</pre><br />
<br />
==Stateful Sets==<br />
The ''[https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/ StatefulSet]'' controller is used for applications which require a unique identity, such as name, network identifications, strict ordering, etc. For example, MySQL cluster, etcd cluster.<br />
<br />
The StatefulSet controller provides identity and guaranteed ordering of deployment and scaling to Pods.<br />
<br />
Note: Before Kubernetes 1.5, the StatefulSet controller was referred to as ''PetSet''.<br />
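<br />
A minimal StatefulSet sketch (it assumes a headless Service named <code>nginx</code> already exists, which gives each Pod a stable network identity such as <code>web-0</code> and <code>web-1</code>):<br />
<pre><br />
apiVersion: apps/v1<br />
kind: StatefulSet<br />
metadata:<br />
  name: web<br />
spec:<br />
  serviceName: nginx<br />
  replicas: 2<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:alpine<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />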
<br />
==Role Based Access Control (RBAC)==<br />
''[https://kubernetes.io/docs/admin/authorization/rbac/ Role-based access control]'' (RBAC) is an authorization mechanism for managing permissions around Kubernetes resources.<br />
<br />
Using the RBAC API, we define a role which contains a set of additive permissions. Within a Namespace, a role is defined using the Role object. For a cluster-wide role, we need to use the ClusterRole object.<br />
<br />
Once the roles are defined, we can bind them to a user or a set of users using ''RoleBinding'' and ''ClusterRoleBinding''.<br />
<br />
===Using RBAC with minikube===<br />
<br />
* Start up minikube with RBAC support:<br />
$ minikube start --kubernetes-version=v1.9.0 --extra-config=apiserver.Authorization.Mode=RBAC<br />
<br />
* Setup RBAC:<br />
<pre><br />
$ cat rbac-cluster-role-binding.yml<br />
# kubectl create clusterrolebinding add-on-cluster-admin \<br />
# --clusterrole=cluster-admin --serviceaccount=kube-system:default<br />
#<br />
kind: ClusterRoleBinding<br />
apiVersion: rbac.authorization.k8s.io/v1alpha1<br />
metadata:<br />
  name: kube-system-sa<br />
subjects:<br />
- kind: Group<br />
  name: system:serviceaccounts:kube-system<br />
  apiGroup: rbac.authorization.k8s.io<br />
roleRef:<br />
  kind: ClusterRole<br />
  name: cluster-admin<br />
  apiGroup: rbac.authorization.k8s.io<br />
</pre><br />
<br />
<pre><br />
$ cat rbac-setup.yml <br />
apiVersion: v1<br />
kind: Namespace<br />
metadata:<br />
  name: rbac<br />
<br />
---<br />
apiVersion: v1<br />
kind: ServiceAccount<br />
metadata:<br />
  name: viewer<br />
  namespace: rbac<br />
<br />
---<br />
apiVersion: v1<br />
kind: ServiceAccount<br />
metadata:<br />
  name: admin<br />
  namespace: rbac<br />
</pre><br />
<br />
* Create a Role Binding:<br />
<pre><br />
# kubectl create rolebinding reader-binding \<br />
#   --role=reader \<br />
#   --serviceaccount=rbac:reader \<br />
#   --namespace=rbac<br />
#<br />
kind: RoleBinding<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  namespace: rbac<br />
  name: reader-binding<br />
roleRef:<br />
  apiGroup: rbac.authorization.k8s.io<br />
  kind: Role<br />
  name: reader<br />
subjects:<br />
- kind: ServiceAccount<br />
  name: reader<br />
  namespace: rbac<br />
</pre><br />
<br />
* Create a Role:<br />
<pre><br />
$ cat rbac-role.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  namespace: rbac<br />
  name: reader<br />
rules:<br />
- apiGroups: [""]<br />
  resources: ["*"]<br />
  verbs: ["get", "watch", "list"]<br />
</pre><br />
<br />
* Create an RBAC "core reader" Role with specific resources and "verbs" (i.e., the "core-reader" Role can "get", "watch", and "list" specific resources, e.g., Pods, ConfigMaps, Secrets, Jobs, and Deployments):<br />
<pre><br />
$ cat rbac-role-core-reader.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  name: core-reader<br />
rules:<br />
- apiGroups:<br />
  - ""<br />
  resources:<br />
  - pods<br />
  - configmaps<br />
  - secrets<br />
  verbs:<br />
  - get<br />
  - watch<br />
  - list<br />
- apiGroups:<br />
  - batch<br />
  - extensions<br />
  resources:<br />
  - jobs<br />
  - deployments<br />
  verbs:<br />
  - get<br />
  - watch<br />
  - list<br />
</pre><br />
<br />
* "Gotchas":<br />
<pre><br />
$ cat rbac-gotcha-1.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: gotcha-1<br />
rules:<br />
- nonResourceURLs:<br />
- /healthz<br />
verbs:<br />
- get<br />
- post<br />
- apiGroups:<br />
- batch<br />
- extensions<br />
resources:<br />
- deployments<br />
verbs:<br />
- "*"<br />
</pre><br />
<pre><br />
$ cat rbac-gotcha-2.yml <br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: gotcha-2<br />
rules:<br />
- apiGroups:<br />
- ""<br />
resources:<br />
- secrets<br />
verbs:<br />
- "*"<br />
resourceNames:<br />
- "my_secret"<br />
- apiGroups:<br />
- ""<br />
resources:<br />
- pods/logs<br />
verbs:<br />
- "get"<br />
</pre><br />
<br />
; Privilege escalation<br />
* You cannot create a Role or ClusterRole that grants permissions you do not have.<br />
* You cannot create a RoleBinding or ClusterRoleBinding that binds to a Role with permissions you do not have (unless you have been explicitly given "bind" permission on the role).<br />
<br />
* Grant explicit bind access:<br />
<pre><br />
kind: ClusterRole<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  name: role-grantor<br />
rules:<br />
- apiGroups: ["rbac.authorization.k8s.io"]<br />
  resources: ["rolebindings"]<br />
  verbs: ["create"]<br />
- apiGroups: ["rbac.authorization.k8s.io"]<br />
  resources: ["clusterroles"]<br />
  verbs: ["bind"]<br />
  resourceNames: ["admin", "edit", "view"]<br />
</pre><br />
<br />
===Testing RBAC permissions===<br />
<br />
* Example of RBAC not allowing a verb-noun:<br />
<pre><br />
$ kubectl auth can-i create pods<br />
no - Required "container.pods.create" permission.<br />
</pre><br />
<br />
* Example of RBAC allowing a verb-noun:<br />
<pre><br />
$ kubectl auth can-i create pods<br />
yes<br />
</pre><br />
<br />
* A more complex example:<br />
<pre><br />
$ kubectl auth can-i update deployments.apps \<br />
--subresource="scale" --as-group="$group" --as="$user" -n $ns<br />
</pre><br />
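<br />
* List everything the current user is allowed to do (note: the <code>--list</code> flag is only available in newer versions of kubectl):<br />
$ kubectl auth can-i --list<br />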
<br />
==Federation==<br />
With the ''[https://kubernetes.io/docs/concepts/cluster-administration/federation/ Kubernetes Cluster Federation]'' we can manage multiple Kubernetes clusters from a single control plane, sync resources across clusters, and get cross-cluster discovery. This allows us to create Deployments across regions and access them using a global DNS record.<br />
<br />
Federation is very useful when we want to build a hybrid solution, in which we can have one cluster running inside our private datacenter and another one on the public cloud. We can also assign weights for each cluster in the Federation, to distribute the load as per our choice.<br />
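<br />
* A rough sketch of standing up a Federation control plane with the (since-deprecated) v1 <code>kubefed</code> tool and joining a member cluster; the federation name, contexts, and DNS zone below are placeholders:<br />
<pre><br />
# Deploy the federation control plane into the host cluster:<br />
$ kubefed init myfed \<br />
    --host-cluster-context=host-cluster \<br />
    --dns-provider=google-clouddns \<br />
    --dns-zone-name=example.com.<br />
<br />
# Join a member cluster to the federation:<br />
$ kubefed join cluster-eu --host-cluster-context=host-cluster<br />
</pre><br />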
<br />
==Helm==<br />
To deploy an application, we use different Kubernetes manifests, such as Deployments, Services, Volume Claims, Ingress, etc. Sometimes, it can be tiresome to deploy them one by one. We can bundle all those manifests, after templatizing them, into a well-defined format along with other metadata. Such a bundle is referred to as a ''Chart''. These Charts can then be served via repositories, such as those that we have for rpm and deb packages.<br />
<br />
''[https://github.com/kubernetes/helm Helm]'' is a package manager (analogous to yum and apt) for Kubernetes, which can install/update/delete those Charts in the Kubernetes cluster.<br />
<br />
Helm has two components:<br />
<br />
* A client called helm, which runs on your user's workstation; and<br />
* A server called tiller, which runs inside your Kubernetes cluster.<br />
<br />
The client helm connects to the server tiller to manage Charts. Charts submitted for Kubernetes are available [https://github.com/kubernetes/charts here].<br />
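<br />
* For example, a typical Helm v2 workflow (matching the client/server split described above; the Chart and release name are arbitrary):<br />
<pre><br />
$ helm init                              # install tiller into the current cluster<br />
$ helm repo update                       # refresh the Chart repositories<br />
$ helm install stable/mysql --name mydb  # install a Chart as a named release<br />
$ helm ls                                # list installed releases<br />
</pre><br />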
<br />
==Monitoring and logging==<br />
In Kubernetes, we have to collect resource usage data from Pods, Services, nodes, etc., to understand the overall resource consumption and to make scaling decisions for a given application. Two popular Kubernetes monitoring solutions are Heapster and Prometheus.<br />
<br />
[https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/ Heapster] is a cluster-wide aggregator of monitoring and event data, which is natively supported on Kubernetes. <br />
<br />
[https://prometheus.io/ Prometheus], now part of [https://www.cncf.io/ CNCF] (Cloud Native Computing Foundation), can also be used to scrape the resource usage from different Kubernetes components and objects. Using its client libraries, we can also instrument the code of our application.<br />
<br />
Another important aspect for troubleshooting and debugging is Logging, in which we collect the logs from different components of a given system. In Kubernetes, we can collect logs from different cluster components, objects, nodes, etc. The most common way to collect the logs is using [https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/ Elasticsearch], which uses [https://www.fluentd.org/ fluentd] with custom configuration as an agent on the nodes. fluentd is an open source data collector, which is also part of CNCF.<br />
<br />
[https://github.com/google/cadvisor cAdvisor] is an open source container resource usage and performance analysis agent. It auto-discovers all containers on a node and collects CPU, memory, file system, and network usage statistics. It provides overall machine usage by analyzing the "root" container on the machine. It exposes a simple UI for local containers on port 4194.<br />
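<br />
* For example (assuming cAdvisor is exposed on its default port on a given node), you can query its Prometheus metrics endpoint directly:<br />
$ curl http://<node-ip>:4194/metrics<br />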
<br />
==Security==<br />
===Configure network policies===<br />
A ''[https://kubernetes.io/docs/concepts/services-networking/network-policies/ Network Policy]'' is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.<br />
<br />
''NetworkPolicy'' resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.<br />
<br />
* Specification of how groups of pods may communicate<br />
* Use labels to select pods and define rules<br />
* Implemented by the network plugin<br />
* Pods are non-isolated by default<br />
* Pods are isolated when a Network Policy selects them<br />
<br />
;Example NetworkPolicy<br />
Create a "default" isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods:<br />
<pre><br />
apiVersion: networking.k8s.io/v1<br />
kind: NetworkPolicy<br />
metadata:<br />
  name: default-deny<br />
spec:<br />
  podSelector: {}<br />
  policyTypes:<br />
  - Ingress<br />
</pre><br />
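<br />
A complementary sketch that selectively re-allows traffic (the <code>app</code> labels are placeholders): pods labelled <code>app=backend</code> will only accept TCP/80 ingress from pods labelled <code>app=frontend</code>:<br />
<pre><br />
apiVersion: networking.k8s.io/v1<br />
kind: NetworkPolicy<br />
metadata:<br />
  name: allow-frontend-to-backend<br />
spec:<br />
  podSelector:<br />
    matchLabels:<br />
      app: backend<br />
  policyTypes:<br />
  - Ingress<br />
  ingress:<br />
  - from:<br />
    - podSelector:<br />
        matchLabels:<br />
          app: frontend<br />
    ports:<br />
    - protocol: TCP<br />
      port: 80<br />
</pre><br />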
<br />
===TLS certificates for cluster components===<br />
Get [https://github.com/OpenVPN/easy-rsa easy-rsa].<br />
<br />
$ ./easyrsa init-pki<br />
$ MASTER_IP=10.100.1.2<br />
$ ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass<br />
<br />
$ cat rsa-request.sh<br />
<pre><br />
#!/bin/bash<br />
# Note: the SAN list must be passed as a *single* argument, hence the<br />
# unindented continuation lines (adjacent quoted strings concatenate).<br />
./easyrsa --subject-alt-name="IP:${MASTER_IP},"\<br />
"DNS:kubernetes,"\<br />
"DNS:kubernetes.default,"\<br />
"DNS:kubernetes.default.svc,"\<br />
"DNS:kubernetes.default.svc.cluster,"\<br />
"DNS:kubernetes.default.svc.cluster.local" \<br />
--days=10000 \<br />
build-server-full server nopass<br />
</pre><br />
<br />
<pre><br />
pki/<br />
├── ca.crt<br />
├── certs_by_serial<br />
│ └── F3A6F7D34BC84330E7375FA20C8441DF.pem<br />
├── index.txt<br />
├── index.txt.attr<br />
├── index.txt.old<br />
├── issued<br />
│ └── server.crt<br />
├── private<br />
│ ├── ca.key<br />
│ └── server.key<br />
├── reqs<br />
│ └── server.req<br />
├── serial<br />
└── serial.old<br />
</pre><br />
<br />
* Figure out the paths of the old TLS certs/keys with the following command:<br />
<pre><br />
$ ps aux | grep [a]piserver | sed -n -e 's/^.*\(kube-apiserver \)/\1/p' | tr ' ' '\n'<br />
kube-apiserver<br />
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota<br />
--requestheader-extra-headers-prefix=X-Remote-Extra-<br />
--advertise-address=172.31.118.138<br />
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt<br />
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt<br />
--requestheader-username-headers=X-Remote-User<br />
--service-cluster-ip-range=10.96.0.0/12<br />
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key<br />
--secure-port=6443<br />
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key<br />
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname<br />
--requestheader-group-headers=X-Remote-Group<br />
--requestheader-allowed-names=front-proxy-client<br />
--service-account-key-file=/etc/kubernetes/pki/sa.pub<br />
--insecure-port=0<br />
--enable-bootstrap-token-auth=true<br />
--allow-privileged=true<br />
--client-ca-file=/etc/kubernetes/pki/ca.crt<br />
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt<br />
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key<br />
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt<br />
--authorization-mode=Node,RBAC<br />
--etcd-servers=http://127.0.0.1:2379<br />
</pre><br />
<br />
===Security Contexts===<br />
A ''[https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Security Context]'' defines privilege and access control settings for a Pod or Container. Security context settings include:<br />
<br />
* Discretionary Access Control: Permission to access an object, like a file, is based on user ID (UID) and group ID (GID).<br />
* Security Enhanced Linux (SELinux): Objects are assigned security labels.<br />
* Running as privileged or unprivileged.<br />
* Linux Capabilities: Give a process some privileges, but not all the privileges of the root user.<br />
* AppArmor: Use program profiles to restrict the capabilities of individual programs.<br />
* Seccomp: Filter a process's system calls.<br />
* AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This boolean directly controls whether the <code>no_new_privs</code> flag gets set on the container process. <code>AllowPrivilegeEscalation</code> is always true when the container: 1) is run as privileged; or 2) has <code>CAP_SYS_ADMIN</code>.<br />
<br />
; Example #1<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: security-context-demo<br />
spec:<br />
  securityContext:<br />
    runAsUser: 1000<br />
    fsGroup: 2000<br />
  volumes:<br />
  - name: sec-ctx-vol<br />
    emptyDir: {}<br />
  containers:<br />
  - name: sec-ctx-demo<br />
    image: gcr.io/google-samples/node-hello:1.0<br />
    volumeMounts:<br />
    - name: sec-ctx-vol<br />
      mountPath: /data/demo<br />
    securityContext:<br />
      allowPrivilegeEscalation: false<br />
</pre><br />
<br />
==Taints and tolerations==<br />
[https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature Node affinity] is a property of pods that ''attracts'' them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite – they allow a node to ''repel'' a set of pods.<br />
<br />
[https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ Taints and tolerations] work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks the node such that the node should not accept any pods that do not tolerate the taints. Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.<br />
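<br />
* For example (a sketch; the node name and the <code>dedicated=gpu</code> key/value are placeholders), taint a node and then add a matching toleration to a Pod spec:<br />
<pre><br />
# Taint the node (and how to remove that taint again):<br />
$ kubectl taint nodes node1 dedicated=gpu:NoSchedule<br />
$ kubectl taint nodes node1 dedicated:NoSchedule-<br />
<br />
# Fragment of a Pod spec that tolerates the above taint:<br />
tolerations:<br />
- key: "dedicated"<br />
  operator: "Equal"<br />
  value: "gpu"<br />
  effect: "NoSchedule"<br />
</pre><br />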
<br />
==Remove a node from a cluster==<br />
<br />
* On the k8s Master Node:<br />
k8s-master> $ kubectl drain k8s-worker-02 --ignore-daemonsets<br />
<br />
* On the k8s Worker Node (the one you wish to remove from the cluster):<br />
k8s-worker-02> $ kubeadm reset<br />
[preflight] Running pre-flight checks.<br />
[reset] Stopping the kubelet service.<br />
[reset] Unmounting mounted directories in "/var/lib/kubelet"<br />
[reset] Removing kubernetes-managed containers.<br />
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.<br />
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]<br />
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]<br />
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]<br />
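<br />
* Finally, back on the k8s Master Node, delete the Node object so the node is fully removed from the cluster:<br />
k8s-master> $ kubectl delete node k8s-worker-02<br />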
<br />
==Networking==<br />
<br />
; Useful network ranges<br />
* Choose ranges for the Pods and Service CIDR blocks<br />
* Generally, any of the RFC-1918 ranges work well<br />
** 10.0.0.0/8<br />
** 172.16.0.0/12<br />
** 192.168.0.0/16<br />
<br />
Every Pod can communicate directly with every other Pod<br />
<br />
;K8s Node<br />
* A general-purpose compute instance with at least one network interface<br />
** The host OS will have a real-world IP for accessing the machine<br />
** K8s Pods are given ''virtual'' interfaces connected to an internal network<br />
** Each node has a running network stack<br />
* Kube-proxy runs in the OS to control IPtables for:<br />
** Services<br />
** NodePorts<br />
<br />
;Networking substrate<br />
* Most k8s network stacks allocate subnets for each node<br />
** The network stack is responsible for arbitration of subnets and IPs<br />
** The network stack is also responsible for moving packets around the network<br />
* Pods have a unique, routable IP on the Pod CIDR block<br />
** The CIDR block is ''not'' accessible from outside the k8s cluster<br />
** The magic of IPtables allows the Pods to make outgoing connections<br />
* Ensure that k8s has the correct Pods and Service CIDR blocks<br />
<br />
The Pod network is not seen on the physical network (i.e., it is encapsulated; you will not be able to use <code>tcpdump</code> on it from the physical network)<br />
<br />
;Making the setup easier &mdash; CNI<br />
* Use the Container Network Interface (CNI)<br />
* Relieves k8s from having to have a specific network configuration<br />
* It is activated by supplying <code>--network-plugin=cni, --cni-conf-dir, --cni-bin-dir</code> to kubelet<br />
** Typical configuration directory: <code>/etc/cni/net.d</code><br />
** Typical bin directory: <code>/opt/cni/bin</code><br />
* Allows for multiple backends to be used: linux-bridge, macvlan, ipvlan, Open vSwitch, network stacks<br />
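<br />
* A minimal sketch of a CNI configuration file using the linux-bridge backend (the network name and subnet are placeholders; the file would live in <code>/etc/cni/net.d/</code>):<br />
<pre><br />
{<br />
  "cniVersion": "0.3.1",<br />
  "name": "mynet",<br />
  "type": "bridge",<br />
  "bridge": "cni0",<br />
  "isGateway": true,<br />
  "ipMasq": true,<br />
  "ipam": {<br />
    "type": "host-local",<br />
    "subnet": "10.244.0.0/24"<br />
  }<br />
}<br />
</pre><br />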
<br />
;Kubernetes services<br />
<br />
* Services are crucial for service discovery and distributing traffic to Pods<br />
* Services act as simple internal load balancers with VIPs<br />
** No access controls<br />
** No traffic controls<br />
* IPtables magically route to virtual IPs<br />
* Internally, Services are used as inter-Pod service discovery<br />
** Kube-DNS publishes DNS record (i.e., <code>nginx.default.svc.cluster.local</code>)<br />
* Services can be exposed in three different ways (see the NodePort sketch after this list):<br />
*# ClusterIP<br />
*# LoadBalancer<br />
*# NodePort<br />
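<br />
A minimal sketch of a NodePort Service (the name, labels, and ports are placeholders; <code>nodePort</code> must fall within the cluster's NodePort range, 30000-32767 by default):<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: web-nodeport<br />
spec:<br />
  type: NodePort<br />
  selector:<br />
    app: web<br />
  ports:<br />
  - port: 80          # ClusterIP port<br />
    targetPort: 8080  # container port<br />
    nodePort: 30080   # port opened on every node<br />
</pre><br />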
<br />
; kube-proxy<br />
* Each k8s node in the cluster runs a kube-proxy<br />
* Two modes: userspace and iptables<br />
** iptables is much more performant (userspace mode should no longer be used)<br />
* kube-proxy has the task of configuring iptables to expose each k8s service<br />
** iptables rules distribute traffic randomly across the endpoints<br />
<br />
===Network providers===<br />
<br />
In order for a CNI plugin to be considered a "[https://kubernetes.io/docs/concepts/cluster-administration/networking/ Network Provider]", it must provide (at the very least) the following:<br />
# All containers can communicate with all other containers without NAT<br />
# All nodes can communicate with all containers (and ''vice versa'') without NAT<br />
# The IP that a container sees itself as is the same IP that others see it as<br />
<br />
==Linux namespaces==<br />
<br />
Linux namespaces (e.g., pid, net, mnt, uts, ipc, and user) isolate what a process can see. Container runtimes combine them with related kernel features:<br />
<br />
* Control groups (cgroups)<br />
* Union File Systems<br />
<br />
==Kubernetes inbound node port requirements==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-align="center" bgcolor="#1188ee"<br />
!Protocol<br />
!Direction<br />
!Port range<br />
!Purpose<br />
!Used by<br />
!Notes<br />
|-<br />
|colspan="6" align="center" bgcolor="#eee" | '''Master node(s)'''<br />
|-<br />
| TCP || Inbound || 4149 || Default cAdvisor port used to query container metrics || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 6443<sup>*</sup> || Kubernetes API server || All<br />
|-<br />
| TCP || Inbound || 2379-2380 || etcd server client API || kube-apiserver, etcd<br />
|-<br />
| TCP || Inbound || 10250 || Kubelet API || Self, Control plane<br />
|-<br />
| TCP || Inbound || 10251 || kube-scheduler || Self<br />
|-<br />
| TCP || Inbound || 10252 || kube-controller-manager || Self<br />
|-<br />
| TCP || Inbound || 10255 || Read-only Kubelet API || ''(optional)'' || Security risk<br />
|-<br />
|colspan="6" align="center" bgcolor="#eee" | '''Worker node(s)'''<br />
|-<br />
| TCP || Inbound || 4149 || Default cAdvisor port used to query container metrics || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 10250 || Kubelet API || Self, Control plane<br />
|-<br />
| TCP || Inbound || 10255 || Read-only Kubelet API || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 30000-32767 || NodePort Services<sup>**</sup> || All<br />
|}<br />
</div><br />
<br clear="all"/><br />
<sup>**</sup> Default port range for NodePort Services.<br />
<br />
Any port numbers marked with <sup>*</sup> are overridable, so you will need to ensure any custom ports you provide are also open.<br />
<br />
Although etcd ports are included in master nodes, you can also host your own etcd cluster externally or on custom ports.<br />
<br />
The pod network plugin you use (see below) may also require certain ports to be open. Since this differs with each pod network plugin, please see the plugin's documentation for which port(s) it needs.<br />
<br />
==API versions==<br />
<br />
Below is a table showing which value to use for the <code>apiVersion</code> key for a given k8s primitive (note: all values are for k8s 1.8.0, unless otherwise specified):<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-align="center" bgcolor="#1188ee"<br />
!Primitive<br />
!apiVersion<br />
|-<br />
| Pod || v1<br />
|-<br />
| Deployment || apps/v1beta2<br />
|-<br />
| Service || v1<br />
|-<br />
| Job || batch/v1<br />
|-<br />
| Ingress || extensions/v1beta1<br />
|-<br />
| CronJob || batch/v1beta1<br />
|-<br />
| ConfigMap || v1<br />
|-<br />
| DaemonSet || apps/v1<br />
|-<br />
| ReplicaSet || apps/v1beta2<br />
|-<br />
| NetworkPolicy || networking.k8s.io/v1<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
You can get a list of all of the API versions supported by your k8s install with:<br />
$ kubectl api-versions<br />
<br />
==Troubleshooting==<br />
<br />
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns<br />
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}<br />
<br />
* If your container has previously crashed, you can access the previous container’s crash log with:<br />
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}<br />
<br />
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}<br />
<br />
==Miscellaneous commands==<br />
<br />
* Simple workflow (not a best practice; use manifest files {YAML} instead):<br />
$ kubectl run nginx --image=nginx:1.10.0<br />
$ kubectl expose deployment nginx --port 80 --type LoadBalancer<br />
$ kubectl get services # <- wait until public IP is assigned<br />
$ kubectl scale deployment nginx --replicas 3<br />
<br />
* Create an Nginx deployment with three replicas without using YAML:<br />
$ kubectl run nginx --image=nginx --replicas=3<br />
<br />
* Take a node out of service for maintenance:<br />
$ kubectl cordon k8s.worker1.local<br />
$ kubectl drain k8s.worker1.local --ignore-daemonsets<br />
<br />
* Return a given node to service after cordoning and "draining" it (e.g., after maintenance):<br />
$ kubectl uncordon k8s.worker1.local<br />
<br />
* Get a list of nodes in a format useful for scripting:<br />
$ kubectl get nodes -o jsonpath='{.items[*].metadata.name}'<br />
#~OR~<br />
$ kubectl get nodes -o go-template --template '<nowiki>{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get nodes -o json | jq -crM '.items[].metadata.name'<br />
#~OR~ (if using an older version of `jq`)<br />
$ kubectl get nodes -o json | jq '.items[].metadata.name' | tr -d '"'<br />
<br />
* Label a list of nodes:<br />
<pre><br />
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do<br />
kubectl label nodes ${node} instancetype=ondemand;<br />
kubectl label nodes ${node} "example.io/node-lifecycle"=od;<br />
done<br />
</pre><br />
<br />
* Delete a bunch of Pods in "Evicted" state:<br />
$ kubectl get pod -n develop | awk '/Evicted/{print $1}' | xargs kubectl delete pod -n develop<br />
#~OR~<br />
$ kubectl get po -a --all-namespaces -o json | \<br />
jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | <br />
"kubectl delete po \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c<br />
<br />
* Get a random node:<br />
$ NODES=($(kubectl get nodes -o json | jq -crM '.items[].metadata.name'))<br />
$ NUMNODES=${#NODES[@]}<br />
$ echo ${NODES[$[ $RANDOM % $NUMNODES ]]}<br />
<br />
* Get all recent events sorted by their timestamps:<br />
$ kubectl get events --sort-by='.metadata.creationTimestamp'<br />
<br />
* Get a list of all Pods in the default namespace sorted by Node:<br />
$ kubectl get po -o wide --sort-by=.spec.nodeName<br />
<br />
* Get the cluster IP for a service named "foo":<br />
$ kubectl get svc/foo -o jsonpath='{.spec.clusterIP}'<br />
<br />
* List all Services in a cluster and their node ports:<br />
$ kubectl get --all-namespaces svc -o json |\<br />
jq -r '.items[] | [.metadata.name,([.spec.ports[].nodePort | tostring ] | join("|"))] | @csv'<br />
<br />
* Print just the Pod names of those Pods with the label <code>app=nginx</code>:<br />
$ kubectl get --no-headers=true pods -l app=nginx -o custom-columns=:metadata.name<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o go-template --template '<nowiki>{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get --no-headers=true pods -l app=nginx -o name | awk -F "/" '{print $2}'<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o jsonpath='{.items[*].metadata.name}'<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o json | jq -crM '.items [] | .metadata.name'<br />
<br />
* Get a list of all container images used by the Pods in your default namespace:<br />
$ kubectl get pods -o go-template --template='<nowiki>{{range .items}}{{range .spec.containers}}{{.image}}{{"\n"}}{{end}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get pods -o go-template="<nowiki>{{range .items}}{{range .spec.containers}}{{.image}}|{{end}}{{end}}</nowiki>" | tr '|' '\n'<br />
<br />
* Get a list of Pods sorted by Node name:<br />
$ kubectl get po -o json | jq -r '.items | sort_by(.spec.nodeName)[] | [.spec.nodeName,.metadata.name] | @tsv'<br />
<br />
* Get status transitions of each Pod in the default namespace:<br />
$ export tpl='{range .items[*]}{"\n"}{@.metadata.name}{range @.status.conditions[*]}{"\t"}{@.type}={@.status}{end}{end}'<br />
$ kubectl get po -o jsonpath="${tpl}" && echo<br />
<br />
cheddar-cheese-d6d6587c7-4bgcz Initialized=True Ready=True PodScheduled=True<br />
echoserver-55f97d5bff-pdv65 Initialized=True Ready=True PodScheduled=True<br />
stilton-cheese-6d64cbc79-g7h4w Initialized=True Ready=True PodScheduled=True<br />
<br />
* Get a list of all Pods in status "Failed":<br />
$ kubectl get pods -o go-template='<nowiki>{{range .items}}{{if eq .status.phase "Failed"}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}</nowiki>'<br />
<br />
* Get all users in all namespaces:<br />
$ kubectl get rolebindings --all-namespaces -o go-template \<br />
--template='<nowiki>{{range .items}}{{println}}{{.metadata.namespace}}={{range .subjects}}{{if eq .kind "User"}}{{.name}} {{end}}{{end}}{{end}}</nowiki>'<br />
<br />
* Get the memory limit assigned to a container in a given Pod:<br />
<pre><br />
$ kubectl get pod example-pod-name -n default \<br />
-o jsonpath="{.spec.containers[*].resources.limits}" <br />
</pre><br />
<br />
* Get a Bash prompt of your current context and namespace:<br />
<pre><br />
NORMAL="\[\033[00m\]"<br />
BLUE="\[\033[01;34m\]"<br />
RED="\[\e[1;31m\]"<br />
YELLOW="\[\e[1;33m\]"<br />
GREEN="\[\e[1;32m\]"<br />
PS1_WORKDIR="\w"<br />
PS1_HOSTNAME="\h"<br />
PS1_USER="\u"<br />
<br />
__kube_ps1()<br />
{<br />
CONTEXT=$(kubectl config current-context)<br />
NAMESPACE=$(kubectl config view -o jsonpath="{.contexts[?(@.name==\"${CONTEXT}\")].context.namespace}")<br />
if [ -z "$NAMESPACE"]; then<br />
NAMESPACE="default"<br />
fi<br />
if [ -n "$CONTEXT" ]; then<br />
case "$CONTEXT" in<br />
*prod*)<br />
echo "${RED}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
;;<br />
*test*)<br />
echo "${YELLOW}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
;;<br />
*)<br />
echo "${GREEN}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
;;<br />
esac<br />
fi<br />
}<br />
<br />
export PROMPT_COMMAND='PS1="${GREEN}${PS1_USER}@${PS1_HOSTNAME}${NORMAL}:$(__kube_ps1)${BLUE}${PS1_WORKDIR}${NORMAL}\$ "'<br />
</pre><br />
<br />
===Client configuration===<br />
<br />
* Setup autocomplete in bash; bash-completion package should be installed first:<br />
$ source <(kubectl completion bash)<br />
<br />
* View Kubernetes config:<br />
$ kubectl config view<br />
<br />
* View specific config items by JSON path:<br />
$ kubectl config view -o jsonpath='{.users[?(@.name == "k8s")].user.password}'<br />
<br />
* Set credentials for foo.kubernetes.com:<br />
$ kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword<br />
<br />
===Viewing / finding resources===<br />
<br />
* List all services in the namespace:<br />
$ kubectl get services<br />
<br />
* List all pods in all namespaces in wide format:<br />
$ kubectl get pods -o wide --all-namespaces<br />
<br />
* List all pods in JSON (or YAML) format:<br />
$ kubectl get pods -o json<br />
<br />
* Describe resource details (node, pod, svc):<br />
$ kubectl describe nodes my-node<br />
<br />
* List services sorted by name:<br />
$ kubectl get services --sort-by=.metadata.name<br />
<br />
* List pods sorted by restart count:<br />
$ kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'<br />
<br />
* Rolling update pods for frontend-v1:<br />
$ kubectl rolling-update frontend-v1 -f frontend-v2.json<br />
<br />
* Scale a ReplicaSet named "foo" to 3:<br />
$ kubectl scale --replicas=3 rs/foo<br />
<br />
* Scale a resource specified in "foo.yaml" to 3:<br />
$ kubectl scale --replicas=3 -f foo.yaml<br />
<br />
* Execute a command in every pod / replica:<br />
$ for i in 0 1; do kubectl exec foo-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done<br />
<br />
* Get a list of ''all'' container IDs running in ''all'' Pods in ''all'' namespaces for a given Kubernetes cluster:<br />
<pre><br />
$ kubectl get pods --all-namespaces \<br />
-o jsonpath='{range .items[*]}{"pod: "}{.metadata.name}{"\n"}{range .status.containerStatuses[*]}{"\tid: "}{.containerID}{"\n\timage: "}{.image}{"\n"}{end}{end}'<br />
<br />
# Example output:<br />
pod: cert-manager-848f547974-8m2k6<br />
	id: containerd://358415173310a528a36ca2c19cdc3319f8fd96634c09957977767333b104d387<br />
	image: quay.io/jetstack/cert-manager-controller:v1.5.3<br />
</pre><br />
<br />
===Manage resources===<br />
<br />
* Get documentation for pod or service:<br />
$ kubectl explain pods,svc<br />
<br />
* Create resource(s) like pods, services or DaemonSets:<br />
$ kubectl create -f ./my-manifest.yaml<br />
<br />
* Apply a configuration to a resource:<br />
$ kubectl apply -f ./my-manifest.yaml<br />
<br />
* Start a single instance of Nginx:<br />
$ kubectl run nginx --image=nginx<br />
<br />
* Create a secret with several keys:<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
  name: mysecret<br />
type: Opaque<br />
data:<br />
  password: $(echo -n "s33msi4" | base64)<br />
  username: $(echo -n "jane" | base64)<br />
EOF<br />
</pre><br />
<br />
* Delete a resource:<br />
$ kubectl delete -f ./my-manifest.yaml<br />
<br />
===Monitoring and logging===<br />
<br />
* Deploy Heapster from Github repository:<br />
$ kubectl create -f deploy/kube-config/standalone/<br />
<br />
* Show metrics for nodes:<br />
$ kubectl top node<br />
<br />
* Show metrics for pods:<br />
$ kubectl top pod<br />
<br />
* Show metrics for a given pod and its containers:<br />
$ kubectl top pod pod_name --containers<br />
<br />
* Dump pod logs (STDOUT):<br />
$ kubectl logs pod_name<br />
<br />
* Stream pod container logs (STDOUT, multi-container case):<br />
$ kubectl logs -f pod_name -c my-container<br />
<br />
<!-- TODO: https://gist.github.com/so0k/42313dbb3b547a0f51a547bb968696ba --><br />
<br />
===Run tcpdump on containers running in Pods===<br />
<br />
* Find which node/host/IP the Pod in question is running on and also get the container ID:<br />
<pre><br />
$ kubectl describe pod busybox | grep -E "^Node:|Container ID: "<br />
Node: worker2/10.39.32.122<br />
Container ID: docker://a42cd31e62a905739b52d36b30eca5521fd250ac54280b43423027426b031a03<br />
<br />
#~OR~<br />
<br />
$ containerID=$(kubectl get po busybox -o jsonpath='{.status.containerStatuses[*].containerID}' | sed -e 's|docker://||g')<br />
$ hostIP=$(kubectl get po busybox -o jsonpath='{.status.hostIP}')<br />
</pre><br />
<br />
Log into the node/host running the Pod in question and then perform the following steps.<br />
<br />
* Get the virtual interface ID (note it will depend on which Container Network Interface you are using {e.g., veth, cali, etc.}):<br />
<pre><br />
$ docker exec a42cd31e62a905739b52d36b30eca5521fd250ac54280b43423027426b031a03 /bin/sh -c 'cat /sys/class/net/eth0/iflink'<br />
12<br />
<br />
# List all non-virtual interfaces:<br />
$ for iface in $(find /sys/class/net/ -type l ! -lname '*/devices/virtual/net/*' -printf '%f '); do echo "$iface is not virtual"; done<br />
ens192 is not virtual<br />
<br />
# Check if we are using veth or cali or something else:<br />
$ ls -1 /sys/class/net/ | awk '!/docker|lo|ens/{print substr($0,0,4);exit}'<br />
cali<br />
<br />
$ for i in /sys/class/net/veth*/ifindex; do grep -l 12 $i; done<br />
#~OR~<br />
$ for i in /sys/class/net/cali*/ifindex; do grep -l 12 $i; done<br />
/sys/class/net/cali12d4a061371/ifindex<br />
#~OR~<br />
$ echo $(find /sys/class/net/ -type l -lname '*/devices/virtual/net/*' -exec grep -l 12 {}/ifindex \;) | awk -F'/' '{print $5}'<br />
cali12d4a061371<br />
#~OR~<br />
$ ip link | grep ^12<br />
12: cali12d4a061371@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default<br />
#~OR~<br />
$ ip link | awk '/^12/{print $2}' | awk -F'@' '{print $1}'<br />
cali12d4a061371<br />
</pre><br />
<br />
* Now run [[tcpdump]] on this virtual interface (note: make sure you are running tcpdump on the ''same'' host as the Pod is running on):<br />
$ sudo tcpdump -i cali12d4a061371<br />
<br />
; Self-signed certificates<br />
<br />
If you are using the latest version of <code>kubectl</code> and are running it against a k8s cluster built with a self-signed cert, you can get around any "x509" errors with:<br />
$ export GODEBUG=x509ignoreCN=0<br />
<br />
===API resources===<br />
<br />
* Get a list of all the resource types and their latest supported version:<br />
<pre><br />
$ time for kind in $(kubectl api-resources | tail -n +2 | awk '{print $1}'); do<br />
kubectl explain ${kind};<br />
done | grep -E "^KIND:|^VERSION:"<br />
<br />
KIND: Binding<br />
VERSION: v1<br />
KIND: ComponentStatus<br />
VERSION: v1<br />
KIND: ConfigMap<br />
VERSION: v1<br />
...<br />
<br />
real 1m20.014s<br />
user 0m52.732s<br />
sys 0m17.751s<br />
</pre><br />
<br />
* Note: if you just want a version for a single/given kind:<br />
<pre><br />
$ kubectl explain deploy | head -2<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
</pre><br />
<br />
===kubectl-neat===<br />
<br />
: See: https://github.com/itaysk/kubectl-neat<br />
: See: [[jq]]<br />
<br />
* To easily copy a certificate secret from one namespace to another namespace run:<br />
<pre><br />
$ SOURCE_NAMESPACE=<update-me><br />
$ DESTINATION_NAMESPACE=<update-me><br />
$ kubectl -n ${SOURCE_NAMESPACE} get secret kafka-client-credentials -o json |\<br />
kubectl neat |\<br />
jq 'del(.metadata["namespace"])' |\<br />
kubectl apply -n ${DESTINATION_NAMESPACE} -f -<br />
</pre><br />
<br />
===Get CPU/memory for each node===<br />
<br />
<pre><br />
for node in $(kubectl get nodes -o=jsonpath='{.items[*].metadata.name}'); do<br />
echo "NODE: ${node}"; kubectl describe node ${node} | grep -E '^ cpu |^ memory ';<br />
done<br />
</pre><br />
<br />
===Get vCPU capacity===<br />
<br />
<pre><br />
$ kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"} \<br />
{.status.capacity.cpu}{\"\n\"}{end}"<br />
</pre><br />
<br />
==Miscellaneous examples==<br />
<br />
* Create a Namespace:<br />
<pre><br />
kind: Namespace<br />
apiVersion: v1<br />
metadata:<br />
  name: my-namespace<br />
</pre><br />
<br />
; Testing the load balancing capabilities of a Service<br />
<br />
* Create a Deployment with two replicas of Nginx (i.e., 2 x Pods with identical containers, configuration, etc.):<br />
<pre><br />
$ cat << EOF >nginx-deploy.yml<br />
kind: Deployment<br />
apiVersion: apps/v1<br />
metadata:<br />
  name: nginx-deploy<br />
spec:<br />
  replicas: 2<br />
  strategy:<br />
    rollingUpdate:<br />
      maxSurge: 1<br />
      maxUnavailable: 0<br />
    type: RollingUpdate<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f nginx-deploy.yml<br />
$ kubectl get deploy<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deploy 2 2 2 2 1h<br />
$ kubectl get po<br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deploy-8d68fb6cc-bspt8 1/1 Running 1 1h<br />
nginx-deploy-8d68fb6cc-qdvhg 1/1 Running 1 1h<br />
<br />
* Create a Service:<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: nginx-svc<br />
spec:<br />
  ports:<br />
  - port: 8080<br />
    targetPort: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: nginx<br />
EOF<br />
<br />
$ kubectl get svc/nginx-svc<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
nginx-svc ClusterIP 10.101.133.100 <none> 8080/TCP 1h<br />
</pre><br />
<br />
* Overwrite the default index.html file (note: This is ''not'' persistent. The original default index.html file will be restored if the Pod fails and the Deployment brings up a new Pod and/or if you modify your Deployment {e.g., upgrade Nginx}. This is just for demonstration purposes):<br />
$ kubectl exec -it nginx-deploy-8d68fb6cc-bspt8 -- sh -c 'echo "pod-01" > /usr/share/nginx/html/index.html'<br />
$ kubectl exec -it nginx-deploy-8d68fb6cc-qdvhg -- sh -c 'echo "pod-02" > /usr/share/nginx/html/index.html'<br />
<br />
* Get the HTTP status code and server value from the header of a request to the Service endpoint:<br />
$ curl -Is 10.101.133.100:8080 | grep -E '^HTTP|Server'<br />
HTTP/1.1 200 OK<br />
Server: nginx/1.7.9 # <- This is the version of Nginx we defined in the Deployment above<br />
<br />
* Perform a GET request on the Service endpoint (ClusterIP+Port):<br />
<pre><br />
$ for i in $(seq 1 10); do curl -s 10.101.133.100:8080; done<br />
pod-02<br />
pod-01<br />
pod-02<br />
pod-02<br />
pod-02<br />
pod-01<br />
pod-02<br />
pod-02<br />
pod-02<br />
pod-02<br />
</pre><br />
Sometimes <code>pod-01</code> responded; sometimes <code>pod-02</code> responded.<br />
<br />
* Perform a GET on the Service endpoint 10,000 times and sum up which Pod responded for each request:<br />
<pre><br />
$ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c<br />
5018 pod-01 # <- number of times pod-01 responded to the request<br />
4982 pod-02 # <- number of times pod-02 responded to the request<br />
<br />
real 1m0.639s<br />
user 0m29.808s<br />
sys 0m11.692s<br />
</pre><br />
<br />
$ awk 'BEGIN{print 5018/(5018+4982);}'<br />
0.5018<br />
$ awk 'BEGIN{print 4982/(5018+4982);}'<br />
0.4982<br />
<br />
So, our Service is "load balancing" our two Nginx Pods in a roughly 50/50 fashion.<br />
<br />
In order to double-check that the Service is randomly selecting a Pod to serve the GET request, let's scale our Deployment from 2 to 3 replicas:<br />
$ kubectl scale deploy/nginx-deploy --replicas=3<br />
<br />
<pre><br />
$ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c<br />
3392 pod-01<br />
3335 pod-02<br />
3273 pod-03<br />
<br />
real 0m59.537s<br />
user 0m25.932s<br />
sys 0m9.656s<br />
</pre><br />
$ awk 'BEGIN{print 3392/(3392+3335+3273);}'<br />
0.3392<br />
$ awk 'BEGIN{print 3335/(3392+3335+3273);}'<br />
0.3335<br />
$ awk 'BEGIN{print 3273/(3392+3335+3273);}'<br />
0.3273<br />
<br />
Sure enough. Each of the 3 Pods is serving the GET request roughly 33% of the time.<br />
<br />
; Query selections<br />
<br />
* Create a "query selection" file:<br />
<pre><br />
$ cat << EOF >cluster-nodes-health.txt<br />
Name Kernel InternalIP MemoryPressure DiskPressure PIDPressure Ready<br />
.metadata.name .status.nodeInfo.kernelVersion .status.addresses[0].address .status.conditions[0].status .status.conditions[1].status .status.conditions[2].status .status.conditions[3].status<br />
EOF<br />
</pre><br />
<br />
* Use the above "query selection" file:<br />
<pre><br />
$ kubectl get nodes -o custom-columns-file=cluster-nodes-health.txt<br />
Name Kernel InternalIP MemoryPressure DiskPressure PIDPressure Ready<br />
10.10.10.152 5.4.0-1084-aws 10.10.10.152 False False False False<br />
10.10.11.12 5.4.0-1092-aws 10.10.11.12 False False False False<br />
10.10.12.22 5.4.0-1039-aws 10.10.12.22 False False False False<br />
</pre><br />
<br />
==Example YAML files==<br />
<br />
* Basic Pod using busybox:<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: busybox<br />
  namespace: default<br />
spec:<br />
  containers:<br />
  - name: busybox<br />
    image: busybox<br />
    command:<br />
    - sleep<br />
    - "3600"<br />
    imagePullPolicy: IfNotPresent<br />
  restartPolicy: Always<br />
</pre><br />
<br />
* Basic Pod using busybox, which also prints out environment variables (including the ones defined in the YAML):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: env-dump<br />
spec:<br />
  containers:<br />
  - name: busybox<br />
    image: busybox<br />
    command:<br />
    - env<br />
    env:<br />
    - name: USERNAME<br />
      value: "Christoph"<br />
    - name: PASSWORD<br />
      value: "mypassword"<br />
</pre><br />
$ kubectl logs env-dump<br />
...<br />
PASSWORD=mypassword<br />
USERNAME=Christoph<br />
...<br />
<br />
* Basic Pod using alpine:<br />
<pre><br />
kind: Pod<br />
apiVersion: v1<br />
metadata:<br />
  name: alpine<br />
  namespace: default<br />
spec:<br />
  containers:<br />
  - name: alpine<br />
    image: alpine<br />
    command:<br />
    - /bin/sh<br />
    - "-c"<br />
    - "sleep 60m"<br />
    imagePullPolicy: IfNotPresent<br />
  restartPolicy: Always<br />
</pre><br />
<br />
* Basic Pod running Nginx:<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx-pod<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx<br />
  restartPolicy: Always<br />
</pre><br />
<br />
* Create a Job that calculates pi up to 2000 decimal places:<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
  name: pi<br />
spec:<br />
  template:<br />
    spec:<br />
      containers:<br />
      - name: pi<br />
        image: perl<br />
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]<br />
      restartPolicy: Never<br />
  backoffLimit: 4<br />
</pre><br />
<br />
* Create a Deployment with two replicas of Nginx running:<br />
<pre><br />
apiVersion: apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment<br />
spec:<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  replicas: 2<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.9.1<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />
<br />
* Create a basic Persistent Volume, which uses NFS:<br />
<pre><br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
  name: mypv<br />
spec:<br />
  capacity:<br />
    storage: 1Gi<br />
  volumeMode: Filesystem<br />
  accessModes:<br />
  - ReadWriteMany<br />
  persistentVolumeReclaimPolicy: Recycle<br />
  nfs:<br />
    path: /var/nfs/general<br />
    server: 172.31.119.58<br />
    readOnly: false<br />
</pre><br />
<br />
* Create a Persistent Volume Claim against the above PV:<br />
<pre><br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
  name: nfs-pvc<br />
spec:<br />
  accessModes:<br />
  - ReadWriteMany<br />
  resources:<br />
    requests:<br />
      storage: 1Gi<br />
</pre><br />
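<br />
* A sketch of a Pod that mounts the above claim (the Pod name and mount path are arbitrary):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nfs-pod<br />
spec:<br />
  containers:<br />
  - name: app<br />
    image: busybox<br />
    command: ["sleep", "3600"]<br />
    volumeMounts:<br />
    - name: nfs-vol<br />
      mountPath: /mnt/nfs<br />
  volumes:<br />
  - name: nfs-vol<br />
    persistentVolumeClaim:<br />
      claimName: nfs-pvc<br />
</pre><br />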
<br />
* Create a Pod using a custom scheduler (i.e., not the default one):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: my-custom-scheduler<br />
  annotations:<br />
    scheduledBy: custom-scheduler<br />
spec:<br />
  schedulerName: custom-scheduler<br />
  containers:<br />
  - name: pod-container<br />
    image: k8s.gcr.io/pause:2.0<br />
</pre><br />
<br />
==Install k8s cluster manually in the Cloud==<br />
<br />
''Note: For this example, I will be using AWS and I will assume you already have 3 x EC2 instances running CentOS 7 in your AWS account. I will install Kubernetes 1.10.x.''<br />
<br />
* Disable services not supported (yet) by Kubernetes:<br />
$ sudo setenforce 0 # NOTE: Not persistent!<br />
#~OR~ Make persistent:<br />
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config<br />
<br />
$ sudo systemctl stop firewalld<br />
$ sudo systemctl mask firewalld<br />
$ sudo yum install -y iptables-services<br />
<br />
* Disable swap:<br />
$ sudo swapoff -a # NOTE: Not persistent!<br />
#~OR~ Make persistent:<br />
$ sudo vi /etc/fstab # comment out swap line<br />
$ sudo mount -a<br />
<br />
* Make sure routed traffic does not bypass iptables:<br />
$ cat << EOF | sudo tee /etc/sysctl.d/k8s.conf<br />
net.bridge.bridge-nf-call-ip6tables = 1<br />
net.bridge.bridge-nf-call-iptables = 1<br />
EOF<br />
$ sudo sysctl --system<br />
<br />
* Install <code>kubelet</code>, <code>kubeadm</code>, and <code>kubectl</code> on '''''all''''' nodes in your cluster (both Master and Worker nodes):<br />
<pre><br />
$ cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo<br />
[kubernetes]<br />
name=Kubernetes<br />
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch<br />
enabled=1<br />
gpgcheck=1<br />
repo_gpgcheck=1<br />
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg<br />
EOF<br />
</pre><br />
<br />
$ sudo yum install -y kubelet kubeadm kubectl<br />
$ sudo systemctl enable kubelet && sudo systemctl start kubelet<br />
<br />
* Configure cgroup driver used by kubelet on '''''all''''' nodes (both Master and Worker nodes):<br />
<br />
Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. Verify that your Docker cgroup driver matches the kubelet config:<br />
<br />
$ docker info | grep -i cgroup<br />
$ grep -i cgroup /etc/systemd/system/kubelet.service.d/10-kubeadm.conf<br />
<br />
If the Docker cgroup driver and the kubelet config do not match, change the kubelet config to match the Docker cgroup driver. The flag you need to change is <code>--cgroup-driver</code>. If it is already set, you can update like so:<br />
<br />
$ sudo sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf<br />
<br />
Otherwise, you will need to open the systemd file and add the flag to an existing environment line.<br />
<br />
Then restart kubelet:<br />
<br />
$ sudo systemctl daemon-reload<br />
$ sudo systemctl restart kubelet<br />
<br />
* Run <code>kubeadm</code> on Master node:<br />
<br />
K8s requires a pod network to function. We are going to use Flannel, so we need to pass in a flag to the deployment script so k8s knows how to configure itself:<br />
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16<br />
<br />
Note: This command might take a fair amount of time to complete.<br />
<br />
Once it has completed, make note of the "<code>join</code>" command output by <code>kubeadm init</code> that looks something like the following ('''DO NOT RUN THE FOLLOWING COMMAND YET!'''):<br />
# kubeadm join --token --discovery-token-ca-cert-hash sha256:<br />
<br />
You will run that command on the other non-master nodes (aka the "Worker Nodes") to allow them to join the cluster. However, '''do not''' run that command on the worker nodes until you have completed all of the following steps.<br />
<br />
* Create a directory:<br />
$ mkdir -p $HOME/.kube<br />
<br />
* Copy the configuration files to a location usable by the local user:<br />
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config <br />
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config<br />
<br />
* In order for your pods to communicate with one another, you will need to install pod networking. We are going to use Flannel for our Container Network Interface (CNI) because it is easy to install and reliable. <br />
$ kubectl apply -f <nowiki>https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</nowiki><br />
$ kubectl apply -f <nowiki>https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml</nowiki><br />
<br />
* Make sure everything is coming up properly:<br />
$ kubectl get pods --all-namespaces --watch<br />
Once the <code>kube-dns-xxxx</code> containers are up (i.e., in Status "Running"), your cluster is ready to accept worker nodes.<br />
<br />
* On each of the Worker nodes, run the <code>sudo kubeadm join ...</code> command that <code>kubeadm init</code> created for you (see above).<br />
<br />
* On the Master Node, run the following command:<br />
$ kubectl get nodes --watch<br />
Once the Status of the Worker Nodes returns "Ready", your k8s cluster is ready to use.<br />
<br />
* Example output of successful Kubernetes cluster:<br />
<pre><br />
$ kubectl get nodes<br />
NAME STATUS ROLES AGE VERSION<br />
k8s-01 Ready master 13m v1.10.1<br />
k8s-02 Ready <none> 12m v1.10.1<br />
k8s-03 Ready <none> 12m v1.10.1<br />
</pre><br />
<br />
That's it! You are now ready to start deploying Pods, Deployments, Services, etc. in your Kubernetes cluster!<br />
<br />
==Bash completion==<br />
''Note: The following only works on newer versions of kubectl. I have tested that this works on version 1.9.1.''<br />
<br />
Add the following line to your <code>~/.bashrc</code> file:<br />
source <(kubectl completion bash)<br />
<br />
==Kubectl plugins==<br />
<br />
SEE: [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ Extend kubectl with plugins] for details.<br />
<br />
: FEATURE STATE: Kubernetes v1.11 (alpha)<br />
: FEATURE STATE: Kubernetes v1.15 (stable)<br />
<br />
This section shows you how to install and write extensions for <code>kubectl</code>. Usually called "plugins" or "binary extensions", this feature allows you to extend the default set of commands available in <code>kubectl</code> by adding new sub-commands to perform new tasks and extend the set of features available in the main distribution of <code>kubectl</code>.<br />
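<br />
With the stable (v1.15+) mechanism, a plugin is simply an executable anywhere on your <code>PATH</code> whose name starts with <code>kubectl-</code>; for example (a sketch with a hypothetical plugin name):<br />
<pre><br />
$ cat /usr/local/bin/kubectl-hello<br />
#!/bin/bash<br />
echo "hello from a kubectl plugin"<br />
<br />
$ sudo chmod +x /usr/local/bin/kubectl-hello<br />
$ kubectl hello<br />
hello from a kubectl plugin<br />
</pre><br />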
<br />
Get code [https://github.com/kubernetes/kubernetes/tree/master/pkg/kubectl/plugins/examples from here].<br />
<br />
<pre><br />
.kube/<br />
└── plugins<br />
└── aging<br />
├── aging.rb<br />
└── plugin.yaml<br />
</pre><br />
<br />
$ chmod 0700 .kube/plugins/aging/aging.rb<br />
<br />
* See options:<br />
<pre><br />
$ kubectl plugin aging --help<br />
Aging shows pods from the current namespace by age.<br />
<br />
Usage:<br />
kubectl plugin aging [flags] [options]<br />
</pre><br />
<br />
* Usage:<br />
<pre><br />
$ kubectl plugin aging<br />
The Magnificent Aging Plugin.<br />
<br />
nginx-deployment-67594d6bf6-5t8m9: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
<br />
nginx-deployment-67594d6bf6-6kw9j: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
<br />
nginx-deployment-67594d6bf6-d8dwt: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
</pre><br />
<br />
==Local Kubernetes==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="6" bgcolor="#EFEFEF" | '''Local Kubernetes Comparisons'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Feature<br />
!kind<br />
!k3d<br />
!minikube<br />
!Docker Desktop<br />
!Rancher Desktop<br />
|- <br />
| Free || yes || yes || yes || Personal / small business* || yes<br />
|--bgcolor="#eeeeee"<br />
| Install || easy || easy || easy || easy || medium (you may encounter odd scenarios)<br />
|-<br />
| Ease of Use || medium || medium || medium || easy || easy<br />
|--bgcolor="#eeeeee"<br />
| Stability || stable || stable || stable || stable || stable<br />
|-<br />
| Cross-platform || yes || yes || yes || yes || yes<br />
|--bgcolor="#eeeeee"<br />
| CI Usage || yes || yes || yes || no || no<br />
|-<br />
| Multiple clusters || yes || yes || yes || no || no<br />
|--bgcolor="#eeeeee"<br />
| Podman support || yes || yes || yes || no || no<br />
|-<br />
| Host volumes mount support || yes || yes || yes (with some performance limitations) || yes || yes (only pre-defined paths)<br />
|--bgcolor="#eeeeee"<br />
| Kubernetes service port-forwarding/mapping || yes || yes || yes || yes || yes<br />
|-<br />
| Pull-through Docker mirror/proxy || yes || yes || no || yes (can reference locally available images) || yes (can reference locally available images)<br />
|--bgcolor="#eeeeee"<br />
| Custom CNI || yes (ex: calico) || yes (ex: flannel) || yes (ex: calico) || no || no<br />
|-<br />
| Features Gates || yes || yes || yes || yes (but not natively; requires hacky setup) || yes (but not natively; requires hacky setup)<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
[https://bmiguel-teixeira.medium.com/local-kubernetes-the-one-above-all-3aedbeb5f3f6 Source]<br />
<br />
==See also==<br />
* [[Kubernetes/the-hard-way|Kubernetes the Hard Way]]<br />
* [[Kubernetes/GKE|Google Kubernetes Engine]] (GKE)<br />
* [[Kubernetes/AWS|Kubernetes on AWS]] (EKS)<br />
* [[Kubeless]]<br />
* [[Helm]]<br />
<br />
==External links==<br />
* [http://kubernetes.io/ Official website]<br />
* [https://github.com/kubernetes/kubernetes Kubernetes code] &mdash; via GitHub<br />
===Playgrounds===<br />
* [https://www.katacoda.com/courses/kubernetes/playground Kubernetes Playground]<br />
* [https://labs.play-with-k8s.com Play with k8s]<br />
===Tools===<br />
* [https://github.com/kubernetes/minikube minikube] &mdash; Run Kubernetes locally<br />
* [https://kind.sigs.k8s.io/ kind] &mdash; '''K'''ubernetes '''IN''' '''D'''ocker (local clusters for testing Kubernetes)<br />
* [https://github.com/kubernetes/kops kops] &mdash; Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management<br />
* [https://kubernetes-incubator.github.io/kube-aws kube-aws] &mdash; a command-line tool to create/update/destroy Kubernetes clusters on AWS<br />
* [https://github.com/kubernetes-incubator/kubespray kubespray] &mdash; Deploy a production ready kubernetes cluster<br />
* [https://rook.io/ Rook.io] &mdash; File, Block, and Object Storage Services for your Cloud-Native Environments<br />
===Resources===<br />
* [https://kubernetes.io/docs/getting-started-guides/scratch/ Creating a Custom Cluster from Scratch]<br />
* [https://github.com/kelseyhightower/kubernetes-the-hard-way Kubernetes The Hard Way]<br />
* [http://k8sport.org/ K8sPort]<br />
* [https://k8s.af/ Kubernetes Failure Stories]<br />
<br />
===Training===<br />
* [https://kubernetes.io/training/ Official Kubernetes Training Website]<br />
** Kubernetes and Cloud Native Associate (KCNA)<br />
** Certified Kubernetes Application Developer (CKAD)<br />
** Certified Kubernetes Administrator (CKA)<br />
** Certified Kubernetes Security Specialist (CKS) [note: Candidates for CKS must hold a current Certified Kubernetes Administrator (CKA) certification to demonstrate they possess sufficient Kubernetes expertise before sitting for the CKS.]<br />
* [https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals Kubernetes Fundamentals] (LFS258)<br />
** ''[https://www.cncf.io/certification/expert/ Certified Kubernetes Administrator]'' (CKA) certification.<br />
* [https://killer.sh/ CKS / CKA / CKAD Simulator]<br />
* [https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hacked/ 11 Ways (Not) to Get Hacked]<br />
<br />
===Blog posts===<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727 Understanding kubernetes networking: pods] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-services-f0cb48e4cc82 Understanding kubernetes networking: services] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-ingress-1bc341c84078 Understanding kubernetes networking: ingress] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-68d061f7ab5b Kubernetes ConfigMaps and Secrets - Part 1] &mdash; by Sandeep Dinesh, 2017-07-13<br />
* [https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-part-2-3dc37111f0dc Kubernetes ConfigMaps and Secrets - Part 2] &mdash; by Sandeep Dinesh, 2017-08-08<br />
* [https://abhishek-tiwari.com/10-open-source-tools-for-highly-effective-kubernetes-sre-and-ops-teams/ 10 open-source Kubernetes tools for highly effective SRE and Ops Teams]<br />
* [https://www.ianlewis.org/en/tag/kubernetes Series of blog posts about k8s] &mdash; by Ian Lewis<br />
* [https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0 Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?] &mdash; by Sandeep Dinesh, 2018-03-11<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:DevOps]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Kubernetes&diff=8258Kubernetes2023-01-23T19:45:37Z<p>Christoph: /* Release history */</p>
<hr />
<div>'''Kubernetes''' (also known by its numeronym '''k8s''') is an open source container cluster manager. Kubernetes' primary goal is to provide a platform for automating deployment, scaling, and operations of application containers across a cluster of hosts. Kubernetes was released by Google in July 2015.<br />
<br />
* Get the latest stable release of k8s with:<br />
$ curl -sSL <nowiki>https://dl.k8s.io/release/stable.txt</nowiki><br />
<br />
==Release history==<br />
<br />
NOTE: There is no such thing as Kubernetes Long-Term-Support (LTS). There is a new "minor" release ''roughly'' every 3 months (note: changed to ''roughly'' every 4 months in 2020).<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="3" bgcolor="#EFEFEF" | '''Kubernetes release history'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Release<br />
!Date<br />
!Cadence (days)<br />
|- align="left"<br />
|1.0 || 2015-07-10 ||align="right"|<br />
|--bgcolor="#eeeeee"<br />
|1.1 || 2015-11-09 ||align="right"| 122<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.2.md 1.2] || 2016-03-16 ||align="right"| 128<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.3.md 1.3] || 2016-07-01 ||align="right"| 107<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.4.md 1.4] || 2016-09-26 ||align="right"| 87<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.5.md 1.5] || 2016-12-12 ||align="right"| 77<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.6.md 1.6] || 2017-03-28 ||align="right"| 106<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.7.md 1.7] || 2017-06-30 ||align="right"| 94<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.8.md 1.8] || 2017-09-28 ||align="right"| 90<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.9.md 1.9] || 2017-12-15 ||align="right"| 78<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.10.md 1.10] || 2018-03-26 ||align="right"| 101<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.11.md 1.11] || 2018-06-27 ||align="right"| 93<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.12.md 1.12] || 2018-09-27 ||align="right"| 92<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.13.md 1.13] || 2018-12-03 ||align="right"| 67<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.14.md 1.14] || 2019-03-25 ||align="right"| 112<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md 1.15] || 2019-06-17 ||align="right"| 84<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md 1.16] || 2019-09-18 ||align="right"| 93<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md 1.17] || 2019-12-09 ||align="right"| 82<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md 1.18] || 2020-03-25 ||align="right"| 107<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md 1.19] || 2020-08-26 ||align="right"| 154<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md 1.20] || 2020-12-08 ||align="right"| 104<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md 1.21] || 2021-04-08 ||align="right"| 121<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md 1.22] || 2021-08-04 ||align="right"| 118<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md 1.23] || 2021-12-07 ||align="right"| 125<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md 1.24] || 2022-05-03 ||align="right"| 147<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md 1.25] || 2022-08-23 ||align="right"| 112<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md 1.26] || 2023-01-18 ||align="right"| 148<br />
|}<br />
</div><br />
<br clear="all"/><br />
See: [https://gravitational.com/blog/kubernetes-release-cycle The full-time job of keeping up with Kubernetes]<br />
<br />
==Providers and installers==<br />
<br />
* Vanilla Kubernetes<br />
* AWS:<br />
** Managed: EKS<br />
** Kops<br />
** Kube-AWS<br />
** Kismatic<br />
** Kubicorn<br />
** Stack Point Cloud<br />
* Google:<br />
** Managed: GKE<br />
** [[Kubernetes/the-hard-way|Kubernetes the Hard Way]]<br />
** Stack Point Cloud<br />
** Typhoon<br />
* Azure AKS<br />
* Ubuntu UKS<br />
* VMware PKS<br />
* [[Rancher|Rancher RKE]]<br />
* CoreOS Tectonic<br />
<br />
==Design overview==<br />
Kubernetes is built through the definition of a set of components (building blocks or "primitives") which, when used collectively, provide a method for the deployment, maintenance, and scalability of container-based application clusters.<br />
<br />
These "primitives" are designed to be ''loosely coupled'' (i.e., where little to no knowledge of the other component definitions is needed to use) as well as easily extensible through an API. Both the internal components of Kubernetes as well as the extensions and containers make use of this API.<br />
<br />
==Components==<br />
The building blocks of Kubernetes are the following (note that these are also referred to as Kubernetes "Objects" or "API Primitives"):<br />
<br />
;Cluster : A cluster is a set of machines (physical or virtual) on which your applications are managed and run. All machines are managed as a cluster (or set of clusters, depending on the topology used).<br />
;Nodes (minions) : You can think of these as "container clients". These are the individual hosts (physical or virtual) that Docker is installed on and that host the various containers within your managed cluster.<br />
: Each node will run etcd (a key-value store and communication service, used by Kubernetes for exchanging messages and reporting on cluster status) as well as the Kubernetes Proxy.<br />
;Pods : A pod consists of one or more containers. Those containers are guaranteed (by the cluster controller) to be located on the same host machine (aka "co-located") in order to facilitate sharing of resources. For example, it makes sense to have database processes and data containers as close as possible; in fact, they really should be in the same pod.<br />
: Pods "work together", as in a multi-tiered application configuration. Each set of pods that define and implement a service (e.g., MySQL or Apache) are defined by the label selector (see below).<br />
: Pods are assigned unique IPs within each cluster. These allow an application to use ports without having to worry about conflicting port utilization.<br />
: Pods can contain definitions of disk volumes or shares, and then provide access from those to all the members (containers) within the pod.<br />
: Finally, pod management is done through the API or delegated to a controller.<br />
;Labels : Clients can attach key-value pairs to any object in the system (e.g., Pods or Nodes). These become the labels used to identify objects during configuration and management. The key-value pairs can be used to filter, organize, and perform mass operations on a set of resources.<br />
;Selectors : Label Selectors represent queries that are made against those labels. They resolve to the corresponding matching objects. A Selector expression matches labels to filter certain resources. For example, you may want to search for all pods that belong to a certain service, or find all containers whose "tier" Label is set to "database". Labels and Selectors are inherently two sides of the same coin. You can use Labels to classify resources and use Selectors to find them and use them for certain actions.<br />
: These two items are the primary way grouping is done in Kubernetes; they determine which components a given operation applies to.<br />
;Controllers : These are used in the management of your cluster. Controllers are the mechanism by which your desired configuration state is enforced.<br />
: Controllers manage a set of pods and, depending on the desired configuration state, may engage other controllers to handle replication and scaling (Replication Controller) of X number of containers and pods across the cluster. It is also responsible for replacing any container in a pod that fails (based on the desired state of the cluster).<br />
: Replication Controllers (RC) are a subset of Controllers and are an abstraction used to manage pod lifecycles. One of the key uses of RCs is to maintain a certain number of running Pods (e.g., for scaling or ensuring that at least one Pod is running at all times, etc.). It is considered a "best practice" to use RCs to define Pod lifecycles, rather than creating Pods directly.<br />
: Other controllers that can be engaged include a ''DaemonSet Controller'' (enforces a 1-to-1 ratio of pods to Worker Nodes) and a ''Job Controller'' (that runs pods to "completion", such as in batch jobs).<br />
: Each set of pods any controller manages is determined by the label selectors that are part of its definition.<br />
;Replica Sets: These define how many replicas of each Pod will be running. They also monitor and ensure the required number of Pods are running, replacing Pods that die. Replica Sets can act as replacements for Replication Controllers.<br />
;Services : A Service is an abstraction on top of Pods, which provides a single IP address and DNS name by which the Pods can be accessed. This load balancing configuration is much easier to manage and helps scale Pods seamlessly.<br />
: Kubernetes can then provide service discovery and handle routing with the static IP for each pod as well as load balancing (round-robin based) connections to that service among the pods that match the label selector indicated.<br />
: By default, a service is only exposed inside a cluster, but it can also be exposed outside the cluster, as needed.<br />
;Volumes : A Volume is a directory with data, which is accessible to a container. The volume co-terminates with the Pod that encloses it.<br />
;Name : A name by which a resource is identified.<br />
;Namespace : A Namespace provides additional qualification to a resource name. This is especially helpful when multiple teams/projects are using the same cluster and there is a potential for name collision. You can think of a Namespace as a virtual wall between multiple clusters.<br />
;Annotations : An Annotation is a Label, but with much larger data capacity. Typically, this data is not readable by humans and is not easy to filter through. Annotation is useful only for storing data that may not be searched, but is required by the resource (e.g., storing strong keys, etc.).<br />
;Control Plane<br />
;API<br />
<br />
===Pods===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/ Pod]'' is the smallest and simplest Kubernetes object. It is the unit of deployment in Kubernetes, which represents a single instance of the application. A Pod is a logical collection of one or more containers, which:<br />
<br />
* are scheduled together on the same host;<br />
* share the same network namespace; and<br />
* mount the same external storage (Volumes).<br />
<br />
Pods are ephemeral in nature, and they do not have the capability to heal themselves. That is why we use them with controllers, which can handle a Pod's replication, fault tolerance, self-healing, etc. Examples of controllers are ''Deployments'', ''ReplicaSets'', ''ReplicationControllers'', etc. We attach the Pod's specification to other objects using Pod Templates (see below).<br />
<br />
===Labels===<br />
Labels are key-value pairs that can be attached to any Kubernetes object (e.g. ''Pods''). Labels are used to organize and select a subset of objects, based on the requirements in place. Many objects can have the same label(s). Labels do not provide uniqueness to objects. <br />
<br />
===Label Selectors===<br />
With Label Selectors, we can select a subset of objects. Kubernetes supports two types of Selectors:<br />
<br />
;Equality-Based Selectors : Equality-Based Selectors allow filtering of objects based on label keys and values. With this type of Selector, we can use the <code>=</code>, <code>==</code>, or <code>!=</code> operators. For example, with <code>env==dev</code>, we are selecting the objects where the "<code>env</code>" label is set to "<code>dev</code>".<br />
;Set-Based Selectors : Set-Based Selectors allow filtering of objects based on a set of values. With this type of Selector, we can use the <code>in</code>, <code>notin</code>, and <code>exists</code> operators. For example, with <code>env in (dev,qa)</code>, we are selecting objects where the "<code>env</code>" label is set to "<code>dev</code>" or "<code>qa</code>".<br />
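<br />
For example (a minimal illustration; the <code>env</code> label and its values are assumed), both types of Selectors can be used directly with <code>kubectl</code>:<br />
 $ kubectl get pods -l env=dev             # equality-based<br />
 $ kubectl get pods -l 'env in (dev,qa)'   # set-based<br />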
<br />
===Replication Controllers===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/ ReplicationController]'' (rc) is a controller that is part of the Master Node's Controller Manager. It makes sure the specified number of replicas for a Pod is running at any given point in time. If there are more Pods than the desired count, the ReplicationController kills the extra Pods, and, if there are fewer Pods, it creates more Pods to match the desired count. Generally, we do not deploy a Pod independently, as it would not be able to restart itself if something goes wrong. We always use controllers like ReplicationController to create and manage Pods.<br />
<br />
===Replica Sets===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/ ReplicaSet]'' (rs) is the next-generation ReplicationController. ReplicaSets support both equality- and set-based Selectors, whereas ReplicationControllers only support equality-based Selectors. As of January 2018, this is the only difference.<br />
<br />
As an example, say you create a ReplicaSet with "desired replicas = 3" (and "<code>current==desired</code>"). Any time "<code>current!=desired</code>" (e.g., one of the Pods dies), the ReplicaSet detects that the current state no longer matches the desired state and creates one more Pod, thus ensuring that the current state once again matches the desired state.<br />
<br />
ReplicaSets can be used independently, but they are mostly used by Deployments to orchestrate the Pod creation, deletion, and updates. A Deployment automatically creates the ReplicaSets, and we do not have to worry about managing them.<br />
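<br />
As a minimal sketch (assuming a cluster recent enough to serve the <code>apps/v1</code> API; the name and labels are made up for illustration), a ReplicaSet using a set-based Selector could look like:<br />
<pre><br />
---<br />
apiVersion: apps/v1<br />
kind: ReplicaSet<br />
metadata:<br />
  name: frontend-rs<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    matchExpressions:<br />
      - {key: env, operator: In, values: [dev, qa]}<br />
  template:<br />
    metadata:<br />
      labels:<br />
        env: dev<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />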
<br />
===Deployments===<br />
''[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Deployment]'' objects provide declarative updates to Pods and ReplicaSets. The DeploymentController is part of the Master Node's Controller Manager, and it makes sure that the current state always matches the desired state.<br />
<br />
As an example, let's say we have a Deployment which creates a "ReplicaSet A". ReplicaSet A then creates 3 Pods. In each Pod, one of the containers uses the <code>nginx:1.7.9</code> image.<br />
<br />
Now, in the Deployment, we change the Pod's template and we update the image for the Nginx container from <code>nginx:1.7.9</code> to <code>nginx:1.9.1</code>. As we have modified the Pod's template, a new "ReplicaSet B" gets created. This process is referred to as a "Deployment rollout". (A rollout is only triggered when we update the Pod's template for a deployment. Operations like scaling the deployment do not trigger a rollout.) Once ReplicaSet B is ready, the Deployment starts pointing to it.<br />
<br />
On top of ReplicaSets, Deployments provide features like Deployment recording, with which, if something goes wrong, we can roll back to a previously known state.<br />
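<br />
For example, the rollout and rollback features can be driven with the <code>kubectl rollout</code> subcommands (the deployment name here is hypothetical):<br />
 $ kubectl rollout status deployment/nginx-deployment<br />
 $ kubectl rollout history deployment/nginx-deployment<br />
 $ kubectl rollout undo deployment/nginx-deployment<br />
 $ kubectl rollout undo deployment/nginx-deployment --to-revision=1<br />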
<br />
===Namespaces===<br />
If we have numerous users whom we would like to organize into teams/projects, we can partition the Kubernetes cluster into sub-clusters using ''[https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ Namespaces]''. The names of the resources/objects created inside a Namespace are unique, but not across Namespaces.<br />
<br />
To list all the Namespaces, we can run the following command:<br />
$ kubectl get namespaces<br />
NAME STATUS AGE<br />
default Active 2h<br />
kube-public Active 2h<br />
kube-system Active 2h<br />
<br />
Generally, Kubernetes creates two default namespaces: <code>kube-system</code> and <code>default</code>. The <code>kube-system</code> namespace contains the objects created by the Kubernetes system. The <code>default</code> namespace contains the objects which do not belong to any other Namespace. By default, we connect to the <code>default</code> Namespace. <code>kube-public</code> is a special namespace, which is readable by all users and used for special purposes, like bootstrapping a cluster. <br />
<br />
Using ''[https://kubernetes.io/docs/concepts/policy/resource-quotas/ Resource Quotas]'', we can divide the cluster resources within Namespaces.<br />
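<br />
As a minimal sketch (the <code>dev</code> Namespace and the limits are assumptions for illustration), a Resource Quota could look like:<br />
<pre><br />
---<br />
apiVersion: v1<br />
kind: ResourceQuota<br />
metadata:<br />
  name: dev-quota<br />
  namespace: dev<br />
spec:<br />
  hard:<br />
    pods: "10"<br />
    requests.cpu: "4"<br />
    requests.memory: 8Gi<br />
</pre><br />
Create it with <code>kubectl create -f dev-quota.yml</code> (after creating the Namespace with <code>kubectl create namespace dev</code>).<br />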
<br />
===Component services===<br />
The component services running on a standard master/worker node(s) Kubernetes setup are as follows:<br />
* Kubernetes Master node(s)<br />
*; kube-apiserver : Exposes Kubernetes APIs<br />
*; kube-controller-manager : Runs controllers to handle nodes, endpoints, etc.<br />
*; kube-scheduler : Watches for new pods and assigns them nodes<br />
*; etcd : Distributed key-value store<br />
*; DNS : [optional] DNS for Kubernetes services<br />
* Worker node(s)<br />
*; kubelet : Manages pods on a node, volumes, secrets, creating new containers, health checks, etc.<br />
*; kube-proxy : Maintains network rules, port forwarding, etc.<br />
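<br />
On a running cluster, a quick way to check the health of the master-side components (on kubectl versions where <code>componentstatuses</code> is available) is:<br />
 $ kubectl get componentstatuses   # short form: kubectl get cs<br />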
<br />
==Setup a Kubernetes cluster==<br />
<br />
<div style="margin: 10px; padding: 5px; border: 2px solid red;">'''IMPORTANT''': The following is how to setup Kubernetes 1.2 that is, as of January 2018, a very old version. I will update this article with how to setup k8s using a much newer version (v1.9) when I have time.<br />
</div><br />
<br />
In this section, I will show you how to set up a Kubernetes cluster with etcd and Docker. The cluster will consist of 1 master node and 3 worker nodes.<br />
<br />
===Setup VMs===<br />
<br />
For this demo, I will be creating 4 VMs via [[Vagrant]] (with VirtualBox).<br />
<br />
* Create Vagrant demo environment:<br />
$ mkdir $HOME/dev/kubernetes && cd $_<br />
<br />
* Create Vagrantfile with the following contents:<br />
<pre><br />
# -*- mode: ruby -*-<br />
# vi: set ft=ruby :<br />
<br />
require 'yaml'<br />
VAGRANTFILE_API_VERSION = "2"<br />
<br />
$common_script = <<COMMON_SCRIPT<br />
# Set verbose<br />
set -v<br />
# Set exit on error<br />
set -e<br />
echo -e "$(date) [INFO] Starting modified Vagrant..."<br />
sudo yum update -y<br />
# Timestamp provision<br />
date > /etc/vagrant_provisioned_at<br />
COMMON_SCRIPT<br />
<br />
unless defined? CONFIG<br />
configuration_file = File.join(File.dirname(__FILE__), 'vagrant_config.yml')<br />
CONFIG = YAML.load(File.open(configuration_file, File::RDONLY).read)<br />
end<br />
<br />
CONFIG['box'] = {} unless CONFIG.key?('box')<br />
<br />
def modifyvm_network(node)<br />
node.vm.provider "virtualbox" do |vbox|<br />
vbox.customize ["modifyvm", :id, "--nicpromisc1", "allow-all"]<br />
#vbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]<br />
vbox.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]<br />
end<br />
end<br />
<br />
def modifyvm_resources(node, memory, cpus)<br />
node.vm.provider "virtualbox" do |vbox|<br />
vbox.customize ["modifyvm", :id, "--memory", memory]<br />
vbox.customize ["modifyvm", :id, "--cpus", cpus]<br />
end<br />
end<br />
<br />
## START: Actual Vagrant process<br />
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|<br />
<br />
config.vm.box = CONFIG['box']['name']<br />
<br />
# Uncomment the following line if you wish to be able to pass files from<br />
# your local filesystem directly into the vagrant VM:<br />
#config.vm.synced_folder "data", "/vagrant"<br />
<br />
## VM: k8s master #############################################################<br />
config.vm.define "master" do |node|<br />
node.vm.hostname = "k8s.master.dev"<br />
node.vm.provision "shell", inline: $common_script<br />
#node.vm.network "forwarded_port", guest: 80, host: 8080<br />
node.vm.network "private_network", ip: CONFIG['host_groups']['master']<br />
<br />
# Uncomment the following if you wish to define CPU/memory:<br />
#node.vm.provider "virtualbox" do |vbox|<br />
# vbox.customize ["modifyvm", :id, "--memory", "4096"]<br />
# vbox.customize ["modifyvm", :id, "--cpus", "2"]<br />
#end<br />
#modifyvm_resources(node, "4096", "2")<br />
end<br />
## VM: k8s minion1 ############################################################<br />
config.vm.define "minion1" do |node|<br />
node.vm.hostname = "k8s.minion1.dev"<br />
node.vm.provision "shell", inline: $common_script<br />
node.vm.network "private_network", ip: CONFIG['host_groups']['minion1']<br />
end<br />
## VM: k8s minion2 ############################################################<br />
config.vm.define "minion2" do |node|<br />
node.vm.hostname = "k8s.minion2.dev"<br />
node.vm.provision "shell", inline: $common_script<br />
node.vm.network "private_network", ip: CONFIG['host_groups']['minion2']<br />
end<br />
## VM: k8s minion3 ############################################################<br />
config.vm.define "minion3" do |node|<br />
node.vm.hostname = "k8s.minion3.dev"<br />
node.vm.provision "shell", inline: $common_script<br />
node.vm.network "private_network", ip: CONFIG['host_groups']['minion3']<br />
end<br />
###############################################################################<br />
<br />
end<br />
</pre><br />
<br />
The above Vagrantfile uses the following configuration file:<br />
$ cat vagrant_config.yml<br />
<pre><br />
---<br />
box:<br />
name: centos/7<br />
storage_controller: 'SATA Controller'<br />
debug: false<br />
development: false<br />
network:<br />
dns1: 8.8.8.8<br />
dns2: 8.8.4.4<br />
internal:<br />
network: 192.168.200.0/24<br />
external:<br />
start: 192.168.100.100<br />
end: 192.168.100.200<br />
network: 192.168.100.0/24<br />
bridge: wlan0<br />
netmask: 255.255.255.0<br />
broadcast: 192.168.100.255<br />
host_groups:<br />
master: 192.168.200.100<br />
minion1: 192.168.200.101<br />
minion2: 192.168.200.102<br />
minion3: 192.168.200.103<br />
</pre><br />
<br />
* In the Vagrant Kubernetes directory (i.e., <code>$HOME/dev/kubernetes</code>), run the following command:<br />
$ vagrant up<br />
<br />
===Setup hosts===<br />
''Note: Run the following commands/steps on all hosts (master and minions).''<br />
<br />
* Log into the k8s master host:<br />
$ vagrant ssh master<br />
<br />
* Add the Kubernetes cluster hosts to <code>/etc/hosts</code>:<br />
$ cat << EOF >> /etc/hosts<br />
192.168.200.100 k8s.master.dev<br />
192.168.200.101 k8s.minion1.dev<br />
192.168.200.102 k8s.minion2.dev<br />
192.168.200.103 k8s.minion3.dev<br />
EOF<br />
<br />
* Install, enable, and start NTP:<br />
$ yum install -y ntp<br />
$ systemctl enable ntpd && systemctl start ntpd<br />
$ timedatectl<br />
<br />
* Disable any [[iptables|firewall rules]] (for now; we will add the rules back later):<br />
$ systemctl stop firewalld && systemctl disable firewalld<br />
$ systemctl stop iptables<br />
<br />
* Disable [[SELinux]] (for now; we will turn it on again later):<br />
$ setenforce 0<br />
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/sysconfig/selinux<br />
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config<br />
$ sestatus<br />
<br />
* Add the Docker repo and update yum:<br />
$ cat << EOF > /etc/yum.repos.d/virt7-docker-common-release.repo<br />
[virt7-docker-common-release]<br />
name=virt7-docker-common-release<br />
baseurl=<nowiki>http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/</nowiki><br />
gpgcheck=0<br />
EOF<br />
$ yum update<br />
<br />
* Install Docker, Kubernetes, and etcd:<br />
$ yum install -y --enablerepo=virt7-docker-common-release kubernetes docker etcd<br />
<br />
===Install and configure master controller===<br />
''Note: Run the following commands on only the master host.''<br />
<br />
* Edit <code>/etc/kubernetes/config</code> and add (or make changes to) the following lines:<br />
KUBE_MASTER="--master=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://k8s.master.dev:2379</nowiki>"<br />
<br />
* Edit <code>/etc/etcd/etcd.conf</code> and add (or make changes to) the following lines:<br />
[member]<br />
ETCD_LISTEN_CLIENT_URLS="<nowiki>http://0.0.0.0:2379</nowiki>"<br />
[cluster]<br />
ETCD_ADVERTISE_CLIENT_URLS="<nowiki>http://0.0.0.0:2379</nowiki>"<br />
<br />
* Edit <code>/etc/kubernetes/apiserver</code> and add (or make changes to) the following lines:<br />
<pre><br />
# The address on the local server to listen to.<br />
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"<br />
KUBE_API_ADDRESS="--address=0.0.0.0"<br />
<br />
# The port on the local server to listen on.<br />
KUBE_API_PORT="--port=8080"<br />
<br />
# Port minions listen on<br />
KUBELET_PORT="--kubelet-port=10250"<br />
<br />
# Comma separated list of nodes in the etcd cluster<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://127.0.0.1:2379</nowiki>"<br />
<br />
# Address range to use for services<br />
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"<br />
<br />
# default admission control policies<br />
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"<br />
<br />
# Add your own!<br />
KUBE_API_ARGS=""<br />
</pre><br />
<br />
* Enable and start the following etcd and Kubernetes services:<br />
<br />
$ for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do<br />
systemctl restart $SERVICE<br />
systemctl enable $SERVICE<br />
systemctl status $SERVICE <br />
done<br />
<br />
* Check on the status of the above services (the following command should report 4 running services):<br />
$ systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler | grep "(running)" | wc -l # => 4<br />
<br />
* Check on the status of the Kubernetes API server:<br />
$ kubectl cluster-info<br />
Kubernetes master is running at <nowiki>http://localhost:8080</nowiki><br />
$ curl <nowiki>http://localhost:8080/version</nowiki><br />
#~OR~<br />
$ curl <nowiki>http://k8s.master.dev:8080/version</nowiki><br />
<pre><br />
{<br />
"major": "1",<br />
"minor": "2",<br />
"gitVersion": "v1.2.0",<br />
"gitCommit": "ec7364b6e3b155e78086018aa644057edbe196e5",<br />
"gitTreeState": "clean"<br />
}<br />
</pre><br />
<br />
* Get a list of Kubernetes API paths:<br />
$ curl <nowiki>http://k8s.master.dev:8080/paths</nowiki><br />
<pre><br />
{<br />
"paths": [<br />
"/api",<br />
"/api/v1",<br />
"/apis",<br />
"/apis/autoscaling",<br />
"/apis/autoscaling/v1",<br />
"/apis/batch",<br />
"/apis/batch/v1",<br />
"/apis/extensions",<br />
"/apis/extensions/v1beta1",<br />
"/healthz",<br />
"/healthz/ping",<br />
"/logs/",<br />
"/metrics",<br />
"/resetMetrics",<br />
"/swagger-ui/",<br />
"/swaggerapi/",<br />
"/ui/",<br />
"/version"<br />
]<br />
}<br />
</pre><br />
<br />
* List all available paths (key-value stores) known to etcd:<br />
$ etcdctl ls / --recursive<br />
<br />
The master controller in a Kubernetes cluster must have the following services running to function as the master host in the cluster:<br />
* ntpd<br />
* etcd<br />
* kube-controller-manager<br />
* kube-apiserver<br />
* kube-scheduler<br />
<br />
Note: The Docker daemon should not be running on the master host.<br />
<br />
===Install and configure the minions===<br />
''Note: Run the following commands/steps on all minion hosts.''<br />
<br />
* Log into the k8s minion hosts:<br />
$ vagrant ssh minion1 # do the same for minion2 and minion3<br />
<br />
* Edit <code>/etc/kubernetes/config</code> and add (or make changes to) the following lines:<br />
KUBE_MASTER="--master=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
 KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://k8s.master.dev:2379</nowiki>"<br />
<br />
* Edit <code>/etc/kubernetes/kubelet</code> and add (or make changes to) the following lines:<br />
<pre><br />
###<br />
# kubernetes kubelet (minion) config<br />
<br />
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)<br />
KUBELET_ADDRESS="--address=0.0.0.0"<br />
<br />
# The port for the info server to serve on<br />
KUBELET_PORT="--port=10250"<br />
<br />
# You may leave this blank to use the actual hostname<br />
KUBELET_HOSTNAME="--hostname-override=k8s.minion1.dev" # ***CHANGE TO CORRECT MINION HOSTNAME***<br />
<br />
# location of the api-server<br />
KUBELET_API_SERVER="--api-servers=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
<br />
# pod infrastructure container<br />
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"<br />
<br />
# Add your own!<br />
KUBELET_ARGS=""<br />
</pre><br />
<br />
* Enable and start the following services:<br />
$ for SERVICE in kube-proxy kubelet docker; do<br />
systemctl restart $SERVICE<br />
systemctl enable $SERVICE<br />
systemctl status $SERVICE<br />
done<br />
<br />
* Test that Docker is running and can start containers:<br />
$ docker info<br />
$ docker pull hello-world<br />
$ docker run hello-world<br />
<br />
Each minion in a Kubernetes cluster must have the following services running to function as a member of the cluster (i.e., a "Ready" node):<br />
* ntpd<br />
* kubelet<br />
* kube-proxy<br />
* docker<br />
<br />
===Kubectl: Exploring our environment===<br />
''Note: Run all of the following commands on the master host.''<br />
<br />
* Get a list of nodes with <code>kubectl</code>:<br />
$ kubectl get nodes<br />
<pre><br />
NAME STATUS AGE<br />
k8s.minion1.dev Ready 20m<br />
k8s.minion2.dev Ready 12m<br />
k8s.minion3.dev Ready 12m<br />
</pre><br />
<br />
* Describe nodes with <code>kubectl</code>:<br />
<br />
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'<br />
$ kubectl get nodes -o jsonpath='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' | tr ';' "\n"<br />
<pre><br />
k8s.minion1.dev:OutOfDisk=False<br />
Ready=True<br />
k8s.minion2.dev:OutOfDisk=False<br />
Ready=True<br />
k8s.minion3.dev:OutOfDisk=False<br />
Ready=True<br />
</pre><br />
<br />
* Get the man page for <code>kubectl</code>:<br />
$ man kubectl-get<br />
<br />
==Working with our Kubernetes cluster==<br />
<br />
''Note: The following section will be working from within the Kubernetes cluster we created above.''<br />
<br />
===Create and deploy pod definitions===<br />
<br />
* Turn off nodes 2 and 3 (matching the "NotReady" output below):<br />
 minion{2,3}$ systemctl stop kubelet kube-proxy<br />
<br />
master$ kubectl get nodes<br />
<pre><br />
NAME STATUS AGE<br />
k8s.minion1.dev Ready 1h<br />
k8s.minion2.dev NotReady 37m<br />
k8s.minion3.dev NotReady 39m<br />
</pre><br />
<br />
* Check for any k8s Pods (there should be none):<br />
master$ kubectl get pods<br />
<br />
* Create a builds directory for our Pods:<br />
master$ mkdir builds && cd $_<br />
<br />
* Create a Pod running Nginx inside a Docker container:<br />
<pre><br />
master$ kubectl create -f - <<EOF<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
<br />
* Check on Pod creation status:<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx 0/1 ContainerCreating 0 2s<br />
</pre><br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx 1/1 Running 0 3m<br />
</pre><br />
<br />
minion1$ docker ps<br />
<pre><br />
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES<br />
a718c6c0355d nginx:1.7.9 "nginx -g 'daemon off" 3 minutes ago Up 3 minutes k8s_nginx.4580025_nginx_default_699e...<br />
</pre><br />
<br />
master$ kubectl describe pod nginx<br />
<br />
master$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1<br />
busybox$ wget -qO- 172.17.0.2<br />
master$ kubectl delete pod busybox<br />
master$ kubectl delete pod nginx<br />
<br />
* Port forwarding:<br />
master$ kubectl create -f nginx.yml # see above for YAML<br />
master$ kubectl port-forward nginx :80 &<br />
I1020 23:12:29.478742 23394 portforward.go:213] Forwarding from [::1]:40065 -> 80<br />
master$ curl -I localhost:40065<br />
<br />
===Tags, labels, and selectors===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-pod-label.yml<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: nginx<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create -f nginx-pod-label.yml<br />
master$ kubectl get pods -l app=nginx<br />
master$ kubectl describe pods -l app=nginx<br />
<br />
* Add labels or overwrite existing ones:<br />
master$ kubectl label pods nginx new-label=mynginx<br />
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'<br />
 new-label=mynginx<br />
 master$ kubectl label pods nginx new-label=foo --overwrite<br />
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'<br />
new-label=foo<br />
<br />
===Deployments===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-dev.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
name: nginx-deployment-dev<br />
spec:<br />
replicas: 1<br />
template:<br />
metadata:<br />
labels:<br />
app: nginx-deployment-dev<br />
spec:<br />
containers:<br />
- name: nginx-deployment-dev<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-prod.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
name: nginx-deployment-prod<br />
spec:<br />
replicas: 1<br />
template:<br />
metadata:<br />
labels:<br />
app: nginx-deployment-prod<br />
spec:<br />
containers:<br />
- name: nginx-deployment-prod<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create --validate -f nginx-deployment-dev.yml<br />
master$ kubectl create --validate -f nginx-deployment-prod.yml<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deployment-dev-104434401-jiiic 1/1 Running 0 5m<br />
nginx-deployment-prod-3051195443-hj9b1 1/1 Running 0 12m<br />
</pre><br />
<br />
master$ kubectl describe deployments -l app=nginx-deployment-dev<br />
<pre><br />
Name: nginx-deployment-dev<br />
Namespace: default<br />
CreationTimestamp: Thu, 20 Oct 2016 23:48:46 +0000<br />
Labels: app=nginx-deployment-dev<br />
Selector: app=nginx-deployment-dev<br />
Replicas: 1 updated | 1 total | 1 available | 0 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 1 max unavailable, 1 max surge<br />
OldReplicaSets: <none><br />
NewReplicaSet: nginx-deployment-dev-2568522567 (1/1 replicas created)<br />
...<br />
</pre><br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment-prod 1 1 1 1 44s<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-dev-update.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
name: nginx-deployment-dev<br />
spec:<br />
replicas: 1<br />
template:<br />
metadata:<br />
labels:<br />
app: nginx-deployment-dev<br />
spec:<br />
containers:<br />
- name: nginx-deployment-dev<br />
image: nginx:1.8 # ***CHANGED***<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
master$ kubectl apply -f nginx-deployment-dev-update.yml<br />
master$ kubectl get pods -l app=nginx-deployment-dev<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deployment-dev-104434401-jiiic 0/1 ContainerCreating 0 27s<br />
</pre><br />
master$ kubectl get pods -l app=nginx-deployment-dev<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deployment-dev-104434401-jiiic 1/1 Running 0 6m<br />
</pre><br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment nginx-deployment-dev<br />
master$ kubectl delete deployment nginx-deployment-prod<br />
<br />
===Multi-Pod (container) replication controller===<br />
<br />
* Start the other two nodes (the ones we previously stopped):<br />
minion2$ systemctl start kubelet kube-proxy<br />
minion3$ systemctl start kubelet kube-proxy<br />
master$ kubectl get nodes<br />
<pre><br />
NAME STATUS AGE<br />
k8s.minion1.dev Ready 2h<br />
k8s.minion2.dev Ready 2h<br />
k8s.minion3.dev Ready 2h<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-multi-node.yml<br />
---<br />
apiVersion: v1<br />
kind: ReplicationController<br />
metadata:<br />
name: nginx-www<br />
spec:<br />
replicas: 3<br />
selector:<br />
app: nginx<br />
template:<br />
metadata:<br />
name: nginx<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create -f nginx-multi-node.yml<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-2evxu 0/1 ContainerCreating 0 10s<br />
nginx-www-416ct 0/1 ContainerCreating 0 10s<br />
nginx-www-ax41w 0/1 ContainerCreating 0 10s<br />
</pre><br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-2evxu 1/1 Running 0 1m<br />
nginx-www-416ct 1/1 Running 0 1m<br />
nginx-www-ax41w 1/1 Running 0 1m<br />
</pre><br />
<br />
master$ kubectl describe pods | awk '/^Node/{print $2}'<br />
<pre><br />
k8s.minion2.dev/192.168.200.102<br />
k8s.minion1.dev/192.168.200.101<br />
k8s.minion3.dev/192.168.200.103<br />
</pre><br />
<br />
minion1$ docker ps # 1 nginx container running<br />
minion2$ docker ps # 1 nginx container running<br />
minion3$ docker ps # 1 nginx container running<br />
minion3$ docker ps --format "<nowiki>{{.Image}}</nowiki>"<br />
<pre><br />
nginx<br />
gcr.io/google_containers/pause:2.0<br />
</pre><br />
<br />
master$ kubectl describe replicationcontroller<br />
<pre><br />
Name: nginx-www<br />
Namespace: default<br />
Image(s): nginx<br />
Selector: app=nginx<br />
Labels: app=nginx<br />
Replicas: 3 current / 3 desired<br />
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed<br />
...<br />
</pre><br />
<br />
* Attempt to delete one of the three pods:<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-2evxu 1/1 Running 0 11m<br />
nginx-www-416ct 1/1 Running 0 11m<br />
nginx-www-ax41w 1/1 Running 0 11m<br />
</pre><br />
master$ kubectl delete pod nginx-www-2evxu<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-3cck4 1/1 Running 0 12s<br />
nginx-www-416ct 1/1 Running 0 11m<br />
nginx-www-ax41w 1/1 Running 0 11m<br />
</pre><br />
<br />
A new pod (<code>nginx-www-3cck4</code>) automatically started up. This is because the expected state, as defined in our YAML file, is for there to be 3 pods running at all times. Thus, if one or more of the pods goes down, a new pod (or pods) will automatically start up to bring the state back to the expected state.<br />
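<br />
The same reconciliation works in the other direction: changing the desired state (e.g., by scaling the ReplicationController) causes Kubernetes to converge on the new state:<br />
 master$ kubectl scale replicationcontroller nginx-www --replicas=5<br />
 master$ kubectl get pods   # two additional nginx-www pods appear<br />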
<br />
* To force-delete all pods:<br />
master$ kubectl delete replicationcontroller nginx-www<br />
master$ kubectl get pods # nothing<br />
<br />
===Create and deploy service definitions===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-service.yml<br />
---<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
name: nginx-service<br />
spec:<br />
ports:<br />
- port: 8000<br />
targetPort: 80<br />
protocol: TCP<br />
selector:<br />
app: nginx<br />
EOF<br />
</pre><br />
<br />
master$ kubectl get services<br />
<pre><br />
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes 10.254.0.1 <none> 443/TCP 3h<br />
</pre><br />
master$ kubectl create -f nginx-service.yml<br />
<br />
master$ kubectl get services<br />
<pre><br />
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes 10.254.0.1 <none> 443/TCP 3h<br />
nginx-service 10.254.110.127 <none> 8000/TCP 10s<br />
</pre><br />
<br />
master$ kubectl run busybox --generator=run-pod/v1 --image=busybox --restart=Never --tty -i<br />
busybox$ wget -qO- 10.254.110.127:8000 # works<br />
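<br />
Note that the Service above only has a ClusterIP and is therefore only reachable from inside the cluster. As a sketch (not applied in this walkthrough; the <code>nodePort</code> value is an arbitrary choice from the default 30000&ndash;32767 range), the same Service could be exposed on every node's IP by setting <code>type: NodePort</code>:<br />
<pre><br />
---<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: nginx-service<br />
spec:<br />
  type: NodePort<br />
  ports:<br />
  - port: 8000<br />
    targetPort: 80<br />
    nodePort: 30080<br />
    protocol: TCP<br />
  selector:<br />
    app: nginx<br />
</pre><br />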
<br />
* Cleanup<br />
master$ kubectl delete pod busybox<br />
master$ kubectl delete service nginx-service<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-jh2e9 1/1 Running 0 13m<br />
nginx-www-jir2g 1/1 Running 0 13m<br />
nginx-www-w91uw 1/1 Running 0 13m<br />
</pre><br />
master$ kubectl delete replicationcontroller nginx-www<br />
master$ kubectl get pods # nothing<br />
<br />
===Creating temporary Pods at the CLI===<br />
<br />
* Make sure we have no Pods running:<br />
master$ kubectl get pods<br />
<br />
* Create temporary deployment pod:<br />
master$ kubectl run mysample --image=foobar/apache<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
mysample-1424711890-fhtxb 0/1 ContainerCreating 0 1s<br />
</pre><br />
master$ kubectl get deployment <br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
mysample 1 1 1 0 7s<br />
</pre><br />
<br />
* Create a temporary deployment pod (where we know it will fail):<br />
master$ kubectl run myexample --image=christophchamp/ubuntu_sysadmin<br />
 master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myexample-3534121234-mpr35 0/1 CrashLoopBackOff 12 39m k8s.minion3.dev<br />
mysample-2812764540-74c5h 1/1 Running 0 41m k8s.minion2.dev<br />
</pre><br />
<br />
* Check on why the "myexample" pod is in status "CrashLoopBackOff":<br />
master$ kubectl describe pods/myexample-3534121234-mpr35<br />
master$ kubectl describe deployments/mysample<br />
master$ kubectl describe pods/mysample-2812764540-74c5h | awk '/^Node/{print $2}'<br />
k8s.minion2.dev/192.168.200.102<br />
<br />
master$ kubectl delete deployment mysample<br />
<br />
* Run multiple replicas of the same pod:<br />
master$ kubectl run myreplicas --image=latest123/apache --replicas=2 --labels=app=myapache,version=1.0.0<br />
master$ kubectl describe deployment myreplicas <br />
<pre><br />
Name: myreplicas<br />
Namespace: default<br />
CreationTimestamp: Fri, 21 Oct 2016 19:10:30 +0000<br />
Labels: app=myapache,version=1.0.0<br />
Selector: app=myapache,version=1.0.0<br />
Replicas: 2 updated | 2 total | 1 available | 1 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 1 max unavailable, 1 max surge<br />
OldReplicaSets: <none><br />
NewReplicaSet: myreplicas-2209834598 (2/2 replicas created)<br />
...<br />
</pre><br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myreplicas-2209834598-5iyer 1/1 Running 0 1m k8s.minion1.dev<br />
myreplicas-2209834598-cslst 1/1 Running 0 1m k8s.minion2.dev<br />
</pre><br />
<br />
master$ kubectl describe pods -l version=1.0.0<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment myreplicas<br />
<br />
===Interacting with Pod containers===<br />
<br />
* Create example Apache pod definition file:<br />
<pre><br />
master$ cat << EOF > apache.yml<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: apache<br />
spec:<br />
containers:<br />
- name: apache<br />
image: latest123/apache<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
master$ kubectl create -f apache.yml<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
apache 1/1 Running 0 12m k8s.minion3.dev<br />
</pre><br />
<br />
* Test pod and make some basic configuration changes:<br />
master$ kubectl exec apache date<br />
 master$ kubectl exec apache -i -t -- cat /var/www/html/index.html # default apache HTML<br />
master$ kubectl exec apache -i -t -- /bin/bash<br />
container$ export TERM=xterm<br />
container$ echo "xtof test" > /var/www/html/index.html<br />
minion3$ curl 172.17.0.2<br />
xtof test<br />
container$ exit<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
apache 1/1 Running 0 12m k8s.minion3.dev<br />
</pre><br />
Pod/container is still running even after we exited (as expected).<br />
<br />
* Cleanup:<br />
master$ kubectl delete pod apache<br />
<br />
===Logs===<br />
<br />
* Start our example Apache pod to use for checking Kubernetes logging features:<br />
master$ kubectl create -f apache.yml <br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
apache 1/1 Running 0 9s<br />
</pre><br />
master$ kubectl logs apache<br />
<pre><br />
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message<br />
</pre><br />
master$ kubectl logs --tail=10 apache<br />
master$ kubectl logs --since=24h apache # or 10s, 2m, etc.<br />
master$ kubectl logs -f apache # follow the logs<br />
master$ kubectl logs -f -c apache apache # where -c is the container ID<br />
<br />
* Cleanup:<br />
master$ kubectl delete pod apache<br />
<br />
===Autoscaling and scaling Pods===<br />
<br />
master$ kubectl run myautoscale --image=latest123/apache --port=80 --labels=app=myautoscale<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 47s k8s.minion3.dev<br />
</pre><br />
<br />
* Create an autoscale definition:<br />
master$ kubectl autoscale deployment myautoscale --min=2 --max=6 --cpu-percent=80<br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myautoscale 2 2 2 2 4m<br />
</pre><br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 3m k8s.minion3.dev<br />
myautoscale-3243017378-r2f3d 1/1 Running 0 4s k8s.minion2.dev<br />
</pre><br />
<br />
* Scale up an already autoscaled deployment:<br />
master$ kubectl scale --current-replicas=2 --replicas=4 deployment/myautoscale<br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myautoscale 4 4 4 4 8m<br />
</pre><br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-2rxhp 1/1 Running 0 8s k8s.minion1.dev<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 7m k8s.minion3.dev<br />
myautoscale-3243017378-ozxs8 1/1 Running 0 8s k8s.minion3.dev<br />
myautoscale-3243017378-r2f3d 1/1 Running 0 4m k8s.minion2.dev<br />
</pre><br />
<br />
* Scale down:<br />
master$ kubectl scale --current-replicas=4 --replicas=2 deployment/myautoscale<br />
<br />
Note: You cannot scale down below the original minimum number of pods/containers specified in the original autoscale deployment (i.e., <code>--min=2</code> in our example).<br />
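<br />
You can inspect the underlying HorizontalPodAutoscaler object to see the configured min/max bounds:<br />
 master$ kubectl get hpa<br />
 master$ kubectl describe hpa myautoscale<br />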
<br />
* Cleanup:<br />
master$ kubectl delete deployment myautoscale<br />
<br />
===Failure and recovery===<br />
<br />
master$ kubectl run myrecovery --image=latest123/apache --port=80 --replicas=2 --labels=app=myrecovery<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myrecovery 2 2 2 2 6s<br />
</pre><br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-5xu8f 1/1 Running 0 12s k8s.minion1.dev<br />
myrecovery-563119102-zw6wp 1/1 Running 0 12s k8s.minion2.dev<br />
</pre><br />
<br />
* Now stop Kubernetes- and Docker-related services on one of the minions/nodes (so we have a total of 2 nodes online):<br />
minion1$ systemctl stop docker kubelet kube-proxy<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-qyi04 1/1 Running 0 7m k8s.minion3.dev<br />
myrecovery-563119102-zw6wp 1/1 Running 0 14m k8s.minion2.dev<br />
</pre><br />
The Pod switched from minion1 to minion3.<br />
<br />
* Now stop Kubernetes- and Docker-related services on one of the remaining online minions/nodes (so we have a total of 1 node online):<br />
minion2$ systemctl stop docker kubelet kube-proxy<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-b5tim 1/1 Running 0 2m k8s.minion3.dev<br />
myrecovery-563119102-qyi04 1/1 Running 0 17m k8s.minion3.dev<br />
</pre><br />
Both Pods are now running on minion3, the only available node.<br />
<br />
* Start up Kubernetes- and Docker-related services again on minion1 and delete one of the Pods:<br />
minion1$ systemctl start docker kubelet kube-proxy<br />
master$ kubectl delete pod myrecovery-563119102-b5tim<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-8unzg 1/1 Running 0 1m k8s.minion1.dev<br />
myrecovery-563119102-qyi04 1/1 Running 0 20m k8s.minion3.dev<br />
</pre><br />
Pods are now running on separate nodes.<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployments/myrecovery<br />
<br />
==Minikube==<br />
[https://github.com/kubernetes/minikube Minikube] is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.<br />
<br />
* Install Minikube:<br />
$ curl -Lo minikube <nowiki>https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64</nowiki> \<br />
&& chmod +x minikube && sudo mv minikube /usr/local/bin/<br />
<br />
* Install kubectl<br />
$ curl -Lo kubectl <nowiki>https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl</nowiki> \<br />
&& chmod +x kubectl && sudo mv kubectl /usr/local/bin/<br />
<br />
* Test install<br />
$ minikube start<br />
#~OR~<br />
$ minikube start --memory 4096 # give it 4GB of RAM<br />
$ minikube status<br />
$ minikube dashboard<br />
$ kubectl config view<br />
$ kubectl cluster-info<br />
<br />
NOTE: If you have an old version of minikube installed, you should probably do the following before upgrading to a much newer version:<br />
$ minikube delete --all --purge<br />
<br />
Get the details on the CLI options for kubectl [https://kubernetes.io/docs/reference/kubectl/overview/ here].<br />
<br />
Using the <code>kubectl proxy</code> command, kubectl authenticates with the API Server on the Master Node and makes the dashboard available on <nowiki>http://localhost:8001/ui</nowiki>:<br />
<br />
$ kubectl proxy<br />
Starting to serve on 127.0.0.1:8001<br />
<br />
After running the above command, we can access the dashboard at <code><nowiki>http://127.0.0.1:8001/ui</nowiki></code>.<br />
<br />
Once the kubectl proxy is configured, we can send requests to localhost on the proxy port:<br />
<br />
$ curl <nowiki>http://localhost:8001/</nowiki><br />
$ curl <nowiki>http://localhost:8001/version</nowiki><br />
<pre><br />
{<br />
"major": "1",<br />
"minor": "8",<br />
"gitVersion": "v1.8.0",<br />
"gitCommit": "0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4",<br />
"gitTreeState": "clean",<br />
"buildDate": "2017-11-29T22:43:34Z",<br />
"goVersion": "go1.9.1",<br />
"compiler": "gc",<br />
"platform": "linux/amd64"<br />
}<br />
</pre><br />
<br />
Without kubectl proxy configured, we can get the Bearer Token using kubectl and then send it with the API request. A Bearer Token is an access token which is generated by the authentication server (the API server on the Master Node) and given back to the client. Using that token, the client can connect back to the Kubernetes API server and access resources without providing further authentication details.<br />
<br />
* Get the k8s token:<br />
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | awk '/^default/{print $1}') | awk '/^token/{print $2}')<br />
<br />
* Get the k8s API server endpoint:<br />
$ APISERVER=$(kubectl config view | awk '/https/{print $2}')<br />
<br />
* Access the API Server:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}<br />
<br />
===Using Minikube as a local Docker registry===<br />
<br />
Sometimes it is useful to have a local Docker registry for Kubernetes to pull images from. As the Minikube [https://github.com/kubernetes/minikube/blob/0c616a6b42b28a1aab8397f5a9061f8ebbd9f3d9/README.md#reusing-the-docker-daemon README] describes, you can reuse the Docker daemon running within Minikube with <code>eval $(minikube docker-env)</code> to build and pull images from.<br />
<br />
To use an image without uploading it to some external registry (e.g., Docker Hub), you can follow these steps:<br />
* Set the environment variables with <code>eval $(minikube docker-env)</code><br />
* Build the image with the Docker daemon of Minikube (e.g., <code>docker build -t my-image .</code>)<br />
* Set the image in the pod spec like the build tag (e.g., <code>my-image</code>)<br />
* Set the <code>imagePullPolicy</code> to <code>Never</code>, otherwise Kubernetes will try to download the image.<br />
<br />
Important note: You have to run <code>eval $(minikube docker-env)</code> on each terminal you want to use since it only sets the environment variables for the current shell session.<br />
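<br />
Putting those steps together (a minimal sketch; <code>my-image</code> and the Pod name are placeholders):<br />
 $ eval $(minikube docker-env)<br />
 $ docker build -t my-image .<br />
and then reference the image in the Pod spec:<br />
<pre><br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: my-pod<br />
spec:<br />
  containers:<br />
  - name: my-app<br />
    image: my-image<br />
    imagePullPolicy: Never<br />
</pre><br />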
<br />
===Working with our Minikube-based Kubernetes cluster===<br />
<br />
;Kubernetes Object Model<br />
<br />
Kubernetes has a very rich object model, with which it represents different persistent entities in the Kubernetes cluster. Those entities describe:<br />
<br />
* What containerized applications we are running and on which node<br />
* Application resource consumption<br />
* Different policies attached to applications, like restart/upgrade policies, fault tolerance, etc.<br />
<br />
With each object, we declare our intent or desired state using the '''spec''' field. The Kubernetes system manages the '''status''' field for objects, in which it records the actual state of the object. At any given point in time, the Kubernetes Control Plane tries to match the object's actual state to the object's desired state.<br />
<br />
Examples of Kubernetes objects are Pods, Deployments, ReplicaSets, etc.<br />
<br />
To create an object, we need to provide the '''spec''' field to the Kubernetes API Server. The '''spec''' field describes the desired state, along with some basic information, like the name. The API request to create the object must have the '''spec''' field, as well as other details, in a JSON format. Most often, we provide an object's definition in a YAML file, which kubectl converts into a JSON payload and sends to the API Server.<br />
<br />
Below is an example of a ''Deployment'' object:<br />
<pre><br />
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
name: nginx-deployment<br />
labels:<br />
app: nginx<br />
spec:<br />
replicas: 3<br />
selector:<br />
matchLabels:<br />
app: nginx<br />
template:<br />
metadata:<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
</pre><br />
<br />
With the '''apiVersion''' field in the example above, we specify the API endpoint on the API Server to which we want to connect. Note that you can see which API version to use with the following call to the API server:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}/apis/apps<br />
Use the '''preferredVersion''' for most cases.<br />
<br />
With the '''kind''' field, we mention the object type &mdash; in our case, we have '''Deployment'''. With the '''metadata''' field, we attach the basic information to objects, like the name. Notice that in the above we have two '''spec''' fields ('''spec''' and '''spec.template.spec'''). With '''spec''', we define the desired state of the deployment. In our example, we want to make sure that, at any point in time, at least 3 ''Pods'' are running, which are created using the Pod template defined in '''spec.template'''. In '''spec.template.spec''', we define the desired state of the Pod (here, our Pod would be created using nginx:1.7.9).<br />
<br />
Once the object is created, the Kubernetes system attaches the '''status''' field to the object.<br />
<br />
;Connecting users to Pods<br />
<br />
To access the application, a user/client needs to connect to the Pods. As Pods are ephemeral in nature, resources like the IP addresses allocated to them cannot be static. Pods could die abruptly or be rescheduled based on existing requirements.<br />
<br />
As an example, consider a scenario in which a user/client is connecting to a Pod using its IP address. Unexpectedly, the Pod to which the user/client is connected dies and a new Pod is created by the controller. The new Pod will have a new IP address, which will not be known automatically to the user/client of the earlier Pod. To overcome this situation, Kubernetes provides a higher-level abstraction called ''[https://kubernetes.io/docs/concepts/services-networking/service/ Service]'', which logically groups Pods and a policy to access them. This grouping is achieved via Labels and Selectors (see above).<br />
<br />
So, for our example, we would use Selectors (e.g., "<code>app==frontend</code>" and "<code>app==db</code>") to group our Pods into two logical groups. We can assign a name to the logical grouping, referred to as a "service name". In our example, we have created two Services, <code>frontend-svc</code> and <code>db-svc</code>, and they have the "<code>app==frontend</code>" and the "<code>app==db</code>" Selectors, respectively.<br />
<br />
The following is an example of a Service object:<br />
<pre><br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: frontend-svc<br />
spec:<br />
  selector:<br />
    app: frontend<br />
  ports:<br />
  - protocol: TCP<br />
    port: 80<br />
    targetPort: 5000<br />
</pre><br />
<br />
in which we are creating a <code>frontend-svc</code> Service by selecting all the Pods that have the Label "<code>app</code>" equal to "<code>frontend</code>". By default, each Service also gets an IP address, which is routable only inside the cluster. In our case, we have 172.17.0.4 and 172.17.0.5 IP addresses for our <code>frontend-svc</code> and <code>db-svc</code> Services, respectively. The IP address attached to each Service is also known as the ClusterIP for that Service.<br />
<br />
<pre><br />
+------------------------------------+<br />
| select: app==frontend              |          container (app:frontend; 10.0.1.3)<br />
| service=frontend-svc (172.17.0.4)  |------>   container (app:frontend; 10.0.1.4)<br />
+------------------------------------+          container (app:frontend; 10.0.1.5)<br />
                 ^<br />
                /<br />
               /<br />
      user/client<br />
               \<br />
                \<br />
                 v<br />
+------------------------------------+<br />
| select: app==db                    |------>   container (app:db; 10.0.1.10)<br />
| service=db-svc (172.17.0.5)        |<br />
+------------------------------------+<br />
</pre><br />
<br />
The user/client now connects to a Service via ''its'' IP address, which forwards the traffic to one of the Pods attached to it. A Service does the load balancing while selecting the Pods for forwarding the data/traffic.<br />
<br />
While forwarding the traffic from the Service, we can select the target port on the Pod. In our example, for <code>frontend-svc</code>, we will receive requests from the user/client on port 80. We will then forward these requests to one of the attached Pods on port 5000. If the target port is not defined explicitly, then traffic will be forwarded to Pods on the port on which the Service receives traffic.<br />
<br />
A tuple of a Pod's IP address and the <code>targetPort</code> is referred to as a ''Service Endpoint''. In our case, <code>frontend-svc</code> has 3 Endpoints: <code>10.0.1.3:5000</code>, <code>10.0.1.4:5000</code>, and <code>10.0.1.5:5000</code>.<br />
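<br />
We can verify this by listing the Endpoints that Kubernetes tracks for the Service (the output below is what we would expect for the example above):<br />
<pre><br />
$ kubectl get endpoints frontend-svc<br />
NAME           ENDPOINTS                                   AGE<br />
frontend-svc   10.0.1.3:5000,10.0.1.4:5000,10.0.1.5:5000   1m<br />
</pre><br />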
<br />
===kube-proxy===<br />
All of the Worker Nodes run a daemon called kube-proxy, which watches the API Server on the Master Node for the addition and removal of Services and Endpoints. For each new Service, on each node, kube-proxy configures iptables rules to capture the traffic for its ClusterIP and forward it to one of the Endpoints. When the Service is removed, kube-proxy removes those iptables rules from all nodes as well.<br />
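<br />
For example, assuming kube-proxy is running in its default iptables mode, you can inspect the NAT rules it created for a given Service from a Worker Node (or from within <code>minikube ssh</code>):<br />
$ sudo iptables -t nat -L KUBE-SERVICES -n | grep frontend-svc<br />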
<br />
===Service discovery===<br />
As Services are the primary mode of communication in Kubernetes, we need a way to discover them at runtime. Kubernetes supports two methods of discovering a Service:<br />
<br />
;Environment Variables : As soon as the Pod starts on any Worker Node, the kubelet daemon running on that node adds a set of environment variables in the Pod for all active Services. For example, if we have an active Service called <code>redis-master</code>, which exposes port 6379, and its ClusterIP is 172.17.0.6, then, on a newly created Pod, we can see the following environment variables:<br />
<br />
REDIS_MASTER_SERVICE_HOST=172.17.0.6<br />
REDIS_MASTER_SERVICE_PORT=6379<br />
REDIS_MASTER_PORT=tcp://172.17.0.6:6379<br />
REDIS_MASTER_PORT_6379_TCP=tcp://172.17.0.6:6379<br />
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp<br />
REDIS_MASTER_PORT_6379_TCP_PORT=6379<br />
REDIS_MASTER_PORT_6379_TCP_ADDR=172.17.0.6<br />
<br />
With this solution, we need to be careful while ordering our Services, as the Pods will not have the environment variables set for Services which are created after the Pods are created.<br />
<br />
;DNS : Kubernetes has a DNS add-on, which creates a DNS record for each Service in the format <code>my-svc.my-namespace.svc.cluster.local</code>. Services within the same Namespace can reach each other with just their name. For example, if we add a Service <code>redis-master</code> in the <code>my-ns</code> Namespace, then all the Pods in the same Namespace can reach the Service just by using its name, <code>redis-master</code>. Pods from other Namespaces can reach the Service by adding the respective Namespace as a suffix, like <code>redis-master.my-ns</code>.<br />
: This is the most common and highly recommended solution. For example, in the diagram above, an internal DNS would map our services <code>frontend-svc</code> and <code>db-svc</code> to 172.17.0.4 and 172.17.0.5, respectively.<br />
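: A quick way to test DNS-based discovery is to resolve a Service name from a short-lived Pod inside the cluster (a minimal sketch; <code>dns-test</code> is an arbitrary Pod name):<br />
$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- \<br />
    nslookup redis-master.my-ns.svc.cluster.local<br />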
<br />
===Service Type===<br />
While defining a Service, we can also choose its access scope. We can decide whether the Service:<br />
<br />
* is only accessible within the cluster;<br />
* is accessible from within the cluster and the external world; or<br />
* maps to an external entity which resides outside the cluster.<br />
<br />
Access scope is decided by ''ServiceType'', which can be mentioned when creating the Service.<br />
<br />
;ClusterIP : (the default ''ServiceType''.) The Service receives a virtual IP address, known as its ClusterIP. That IP address is used for communicating with the Service and is accessible only within the cluster.<br />
<br />
;NodePort : With this ''ServiceType'', in addition to creating a ClusterIP, a port from the range '''30000-32767''' is mapped to the respective service from all the Worker Nodes. For example, if the mapped NodePort is 32233 for the service <code>frontend-svc</code>, then, if we connect to any Worker Node on port 32233, the node would redirect all the traffic to the assigned ClusterIP (172.17.0.4).<br />
: By default, while exposing a NodePort, a random port is automatically selected by the Kubernetes Master from the port range '''30000-32767'''. If we do not want a dynamically assigned NodePort, we can instead specify a port number from that range when creating the Service.<br />
: The NodePort ServiceType is useful when we want to make our services accessible from the external world. The end-user connects to the Worker Nodes on the specified port, which forwards the traffic to the applications running inside the cluster. To access the application from the external world, administrators can configure a reverse proxy outside the Kubernetes cluster and map the specific endpoint to the respective port on the Worker Nodes.<br />
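: A minimal sketch of such a Service definition, pinning the NodePort to 32233 to match the example above:<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: frontend-svc<br />
spec:<br />
  type: NodePort<br />
  selector:<br />
    app: frontend<br />
  ports:<br />
  - port: 80<br />
    targetPort: 5000<br />
    nodePort: 32233<br />
</pre><br />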
<br />
;LoadBalancer: With this ''ServiceType'', we have the following:<br />
:* NodePort and ClusterIP Services are automatically created, and the external load balancer will route to them;<br />
:* The Services are exposed at a static port on each Worker Node; and<br />
:* The Service is exposed externally using the underlying Cloud provider's load balancer feature.<br />
: The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of load balancers and has the respective support in Kubernetes, as is the case with the Google Cloud Platform and AWS.<br />
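: A minimal sketch; only the <code>type</code> changes, while provisioning of the load balancer itself is left to the cloud provider:<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: frontend-svc<br />
spec:<br />
  type: LoadBalancer<br />
  selector:<br />
    app: frontend<br />
  ports:<br />
  - port: 80<br />
    targetPort: 5000<br />
</pre><br />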
<br />
;ExternalIP : A Service can be mapped to an ExternalIP address if it can route to one or more of the Worker Nodes. Traffic that is ingressed into the cluster with the ExternalIP (as destination IP) on the Service port gets routed to one of the Service Endpoints. (Note that ExternalIPs are not managed by Kubernetes. The cluster administrator(s) must have configured the routing to map the ExternalIP address to one of the nodes.)<br />
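: A minimal sketch, assuming the administrator has already routed the (hypothetical) address 198.51.100.32 to one of the nodes:<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: frontend-svc<br />
spec:<br />
  selector:<br />
    app: frontend<br />
  ports:<br />
  - port: 80<br />
    targetPort: 5000<br />
  externalIPs:<br />
  - 198.51.100.32<br />
</pre><br />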
<br />
;ExternalName : a special ''ServiceType'', which has no Selectors and does not define any endpoints. When accessed within the cluster, it returns a CNAME record of an externally configured service.<br />
: The primary use case of this ServiceType is to make externally configured services like <code>my-database.example.com</code> available inside the cluster, using just the name, like <code>my-database</code>, to other services inside the same Namespace.<br />
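: A minimal sketch of such a Service:<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: my-database<br />
spec:<br />
  type: ExternalName<br />
  externalName: my-database.example.com<br />
</pre><br />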
<br />
===Deploying an application===<br />
<br />
<pre><br />
$ kubectl create -f - <<EOF<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: webserver<br />
spec:<br />
  replicas: 3<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: webserver<br />
    spec:<br />
      containers:<br />
      - name: webserver<br />
        image: nginx:alpine<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
<pre><br />
$ kubectl create -f - <<EOF<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: web-service<br />
  labels:<br />
    run: web-service<br />
spec:<br />
  type: NodePort<br />
  ports:<br />
  - port: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: webserver<br />
EOF<br />
</pre><br />
<br />
$ kubectl get service<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h<br />
web-service NodePort 10.104.107.132 <none> 80:32610/TCP 7m<br />
<br />
Note that "<code>32610</code>" port.<br />
<br />
* Get the IP address of your Minikube k8s cluster<br />
$ minikube ip<br />
192.168.99.100<br />
#~OR~<br />
$ minikube service web-service --url<br />
<nowiki>http://192.168.99.100:32610</nowiki><br />
<br />
* Now, check that your web service is serving up a default Nginx website:<br />
$ curl -I <nowiki>http://192.168.99.100:32610</nowiki><br />
HTTP/1.1 200 OK<br />
Server: nginx/1.13.8<br />
Date: Thu, 11 Jan 2018 00:27:51 GMT<br />
Content-Type: text/html<br />
Content-Length: 612<br />
Last-Modified: Wed, 10 Jan 2018 04:10:03 GMT<br />
Connection: keep-alive<br />
ETag: "5a55921b-264"<br />
Accept-Ranges: bytes<br />
<br />
Looks good!<br />
<br />
Finally, destroy the webserver deployment:<br />
$ kubectl delete deployments webserver<br />
<br />
===Using Ingress with Minikube===<br />
<br />
* First check that the Ingress add-on is enabled:<br />
$ minikube addons list | grep ingress<br />
- ingress: disabled<br />
<br />
If it is not, enable it with:<br />
$ minikube addons enable ingress<br />
$ minikube addons list | grep ingress<br />
- ingress: enabled<br />
<br />
* Create an Echo Server Deployment:<br />
<pre><br />
$ cat << EOF >deploy-echoserver.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  labels:<br />
    run: echoserver<br />
  name: echoserver<br />
  namespace: default<br />
spec:<br />
  replicas: 1<br />
  selector:<br />
    matchLabels:<br />
      run: echoserver<br />
  template:<br />
    metadata:<br />
      labels:<br />
        run: echoserver<br />
    spec:<br />
      containers:<br />
      - image: gcr.io/google_containers/echoserver:1.4<br />
        imagePullPolicy: IfNotPresent<br />
        name: echoserver<br />
        ports:<br />
        - containerPort: 8080<br />
          protocol: TCP<br />
      dnsPolicy: ClusterFirst<br />
      restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-echoserver.yml<br />
<br />
* Create the Cheddar cheese Deployment:<br />
<pre><br />
$ cat << EOF >deploy-cheddar-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  labels:<br />
    run: cheddar-cheese<br />
  name: cheddar-cheese<br />
  namespace: default<br />
spec:<br />
  replicas: 1<br />
  selector:<br />
    matchLabels:<br />
      run: cheddar-cheese<br />
  template:<br />
    metadata:<br />
      labels:<br />
        run: cheddar-cheese<br />
    spec:<br />
      containers:<br />
      - image: errm/cheese:cheddar<br />
        imagePullPolicy: IfNotPresent<br />
        name: cheddar-cheese<br />
        ports:<br />
        - containerPort: 80<br />
          protocol: TCP<br />
      dnsPolicy: ClusterFirst<br />
      restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-cheddar-cheese.yml<br />
<br />
* Create the Stilton cheese Deployment:<br />
<pre><br />
$ cat << EOF >deploy-stilton-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  labels:<br />
    run: stilton-cheese<br />
  name: stilton-cheese<br />
  namespace: default<br />
spec:<br />
  replicas: 1<br />
  selector:<br />
    matchLabels:<br />
      run: stilton-cheese<br />
  template:<br />
    metadata:<br />
      labels:<br />
        run: stilton-cheese<br />
    spec:<br />
      containers:<br />
      - image: errm/cheese:stilton<br />
        imagePullPolicy: IfNotPresent<br />
        name: stilton-cheese<br />
        ports:<br />
        - containerPort: 80<br />
          protocol: TCP<br />
      dnsPolicy: ClusterFirst<br />
      restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-stilton-cheese.yml<br />
<br />
* Create the Echo Server Service:<br />
<pre><br />
$ cat << EOF >svc-echoserver.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  labels:<br />
    run: echoserver<br />
  name: echoserver<br />
  namespace: default<br />
spec:<br />
  externalTrafficPolicy: Cluster<br />
  ports:<br />
  - nodePort: 31116<br />
    port: 8080<br />
    protocol: TCP<br />
    targetPort: 8080<br />
  selector:<br />
    run: echoserver<br />
  sessionAffinity: None<br />
  type: NodePort<br />
status:<br />
  loadBalancer: {}<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-echoserver.yml<br />
<br />
* Create the Cheddar cheese Service:<br />
<pre><br />
$ cat << EOF >svc-cheddar-cheese.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  labels:<br />
    run: cheddar-cheese<br />
  name: cheddar-cheese<br />
  namespace: default<br />
spec:<br />
  externalTrafficPolicy: Cluster<br />
  ports:<br />
  - nodePort: 32467<br />
    port: 80<br />
    protocol: TCP<br />
    targetPort: 80<br />
  selector:<br />
    run: cheddar-cheese<br />
  sessionAffinity: None<br />
  type: NodePort<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-cheddar-cheese.yml<br />
<br />
* Create the Stilton cheese Service:<br />
<pre><br />
$ cat << EOF >svc-stilton-cheese.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  labels:<br />
    run: stilton-cheese<br />
  name: stilton-cheese<br />
  namespace: default<br />
spec:<br />
  externalTrafficPolicy: Cluster<br />
  ports:<br />
  - nodePort: 30197<br />
    port: 80<br />
    protocol: TCP<br />
    targetPort: 80<br />
  selector:<br />
    run: stilton-cheese<br />
  sessionAffinity: None<br />
  type: NodePort<br />
status:<br />
  loadBalancer: {}<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-stilton-cheese.yml<br />
<br />
* Create the Ingress for the above Services:<br />
<pre><br />
$ cat << EOF >ingress-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
  name: ingress-cheese<br />
  annotations:<br />
    nginx.ingress.kubernetes.io/rewrite-target: /<br />
spec:<br />
  backend:<br />
    serviceName: default-http-backend<br />
    servicePort: 80<br />
  rules:<br />
  - host: myminikube.info<br />
    http:<br />
      paths:<br />
      - path: /<br />
        backend:<br />
          serviceName: echoserver<br />
          servicePort: 8080<br />
  - host: cheeses.all<br />
    http:<br />
      paths:<br />
      - path: /stilton<br />
        backend:<br />
          serviceName: stilton-cheese<br />
          servicePort: 80<br />
      - path: /cheddar<br />
        backend:<br />
          serviceName: cheddar-cheese<br />
          servicePort: 80<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f ingress-cheese.yml<br />
<br />
* Check that everything is up:<br />
<pre><br />
$ kubectl get all<br />
NAME READY STATUS RESTARTS AGE<br />
pod/cheddar-cheese-d6d6587c7-4bgcz 1/1 Running 0 12m<br />
pod/echoserver-55f97d5bff-pdv65 1/1 Running 0 12m<br />
pod/stilton-cheese-6d64cbc79-g7h4w 1/1 Running 0 12m<br />
<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
service/cheddar-cheese NodePort 10.109.238.92 <none> 80:32467/TCP 12m<br />
service/echoserver NodePort 10.98.60.194 <none> 8080:31116/TCP 12m<br />
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h<br />
service/stilton-cheese NodePort 10.108.175.207 <none> 80:30197/TCP 12m<br />
<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
deployment.apps/cheddar-cheese 1 1 1 1 12m<br />
deployment.apps/echoserver 1 1 1 1 12m<br />
deployment.apps/stilton-cheese 1 1 1 1 12m<br />
<br />
NAME DESIRED CURRENT READY AGE<br />
replicaset.apps/cheddar-cheese-d6d6587c7 1 1 1 12m<br />
replicaset.apps/echoserver-55f97d5bff 1 1 1 12m<br />
replicaset.apps/stilton-cheese-6d64cbc79 1 1 1 12m<br />
<br />
$ kubectl get ing<br />
NAME HOSTS ADDRESS PORTS AGE<br />
ingress-cheese myminikube.info,cheeses.all 10.0.2.15 80 12m<br />
</pre><br />
<br />
* Add your host aliases:<br />
$ echo "$(minikube ip) myminikube.info cheeses.all" | sudo tee -a /etc/hosts<br />
<br />
* Now, either using your browser or [[curl]], check that you can reach all of the endpoints defined in the Ingress:<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null cheeses.all/cheddar/ # Should return '200'<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null cheeses.all/stilton/ # Should return '200'<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null myminikube.info # Should return '200'<br />
<br />
* You can also see the Nginx logs for the above requests with:<br />
$ kubectl --namespace kube-system logs \<br />
--selector app.kubernetes.io/name=nginx-ingress-controller<br />
<br />
* You can also view the Nginx configuration file (and the settings created by the above Ingress) with:<br />
$ NGINX_POD=$(kubectl --namespace kube-system get pods \<br />
--selector app.kubernetes.io/name=nginx-ingress-controller \<br />
--output jsonpath='{.items[0].metadata.name}')<br />
$ kubectl --namespace kube-system exec -it ${NGINX_POD} -- cat /etc/nginx/nginx.conf<br />
<br />
* Get the version of the Nginx Ingress controller installed:<br />
<pre><br />
$ kubectl --namespace kube-system exec -it ${NGINX_POD} -- /nginx-ingress-controller --version<br />
-------------------------------------------------------------------------------<br />
NGINX Ingress controller<br />
Release: 0.19.0<br />
Build: git-05025d6<br />
Repository: https://github.com/kubernetes/ingress-nginx.git<br />
-------------------------------------------------------------------------------<br />
</pre><br />
<br />
==Kubectl==<br />
<br />
<code>kubectl</code> controls the Kubernetes cluster manager.<br />
<br />
* View your current configuration:<br />
$ kubectl config view<br />
<br />
* Switch between clusters:<br />
$ kubectl config use-context <context_name><br />
<br />
* Remove a cluster:<br />
$ kubectl config unset contexts.<context_name><br />
$ kubectl config unset users.<user_name><br />
$ kubectl config unset clusters.<cluster_name><br />
<br />
* Sort Pods by age:<br />
$ kubectl get pods --sort-by=.metadata.creationTimestamp<br />
$ kubectl get pods --all-namespaces --sort-by=.metadata.creationTimestamp<br />
<br />
* Backup all primitives deployed in a given k8s cluster:<br />
<pre><br />
$ kubectl api-resources --verbs=list --namespaced -o name \<br />
| xargs -n1 -I{} bash -c "kubectl get {} --all-namespaces -oyaml && echo ---" \<br />
> k8s_backup.yaml<br />
</pre><br />
<br />
===kubectl explain===<br />
<br />
;List the fields for supported resources.<br />
<br />
* Get the documentation of a resource (aka "kind") and its fields:<br />
<pre><br />
$ kubectl explain deployment<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
DESCRIPTION:<br />
Deployment enables declarative updates for Pods and ReplicaSets.<br />
<br />
FIELDS:<br />
apiVersion <string><br />
APIVersion defines the versioned schema of this representation of an<br />
object. Servers should convert recognized schemas to the latest internal<br />
value, and may reject unrecognized values. More info:<br />
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources<br />
<br />
kind <string><br />
Kind is a string value representing the REST resource this object<br />
represents. Servers may infer this from the endpoint the client submits<br />
requests to. Cannot be updated. In CamelCase. More info:<br />
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds<br />
<br />
metadata <Object><br />
Standard object metadata.<br />
<br />
spec <Object><br />
Specification of the desired behavior of the Deployment.<br />
<br />
status <Object><br />
Most recently observed status of the Deployment<br />
</pre><br />
<br />
* Get a list of all the resource types and their latest supported version:<br />
<pre><br />
$ for kind in $(kubectl api-resources | tail -n +2 | awk '{print $1}'); do<br />
kubectl explain ${kind};<br />
done | grep -E "^KIND:|^VERSION:"<br />
<br />
KIND: Binding<br />
VERSION: v1<br />
KIND: ComponentStatus<br />
VERSION: v1<br />
KIND: ConfigMap<br />
VERSION: v1<br />
...<br />
</pre><br />
<br />
* Get a list of ''all'' allowable fields for a given primitive:<br />
<pre><br />
$ kubectl explain deployment --recursive | head<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
DESCRIPTION:<br />
Deployment enables declarative updates for Pods and ReplicaSets.<br />
<br />
FIELDS:<br />
apiVersion <string><br />
kind <string><br />
metadata <Object><br />
</pre><br />
<br />
* Get documentation ("man page"-style) for a given field in a given primitive:<br />
<pre><br />
$ kubectl explain deployment.status.availableReplicas<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
FIELD: availableReplicas <integer><br />
<br />
DESCRIPTION:<br />
Total number of available pods (ready for at least minReadySeconds)<br />
targeted by this deployment.<br />
</pre><br />
<br />
===Merge kubeconfig files===<br />
<br />
* Reference which kubeconfig files you wish to merge:<br />
$ export KUBECONFIG=$HOME/.kube/dev.yaml:$HOME/.kube/prod.yaml<br />
<br />
* Flatten them:<br />
$ kubectl config view --flatten >> $HOME/.kube/config<br />
<br />
* Unset:<br />
$ unset KUBECONFIG<br />
<br />
Merge complete.<br />
<br />
==Namespaces==<br />
<br />
See: [https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ Namespaces] in the official documentation.<br />
<br />
; Create a Namespace<br />
<br />
<pre><br />
apiVersion: v1<br />
kind: Namespace<br />
metadata:<br />
  name: dev<br />
</pre><br />
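<br />
Equivalently, the Namespace can be created and then inspected directly from the command line:<br />
$ kubectl create namespace dev<br />
$ kubectl get namespaces<br />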
<br />
==Pods==<br />
<br />
; Create a Pod that has an Init Container<br />
<br />
In this example, I will create a Pod that has one application Container and one Init Container. The init container runs to completion before the application container starts.<br />
<br />
<pre><br />
$ cat << EOF >init-demo.yml<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: init-demo<br />
  labels:<br />
    app: demo<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx<br />
    ports:<br />
    - containerPort: 80<br />
    volumeMounts:<br />
    - name: workdir<br />
      mountPath: /usr/share/nginx/html<br />
  # These containers are run during pod initialization<br />
  initContainers:<br />
  - name: install<br />
    image: busybox<br />
    command:<br />
    - wget<br />
    - "-O"<br />
    - "/work-dir/index.html"<br />
    - https://example.com<br />
    volumeMounts:<br />
    - name: workdir<br />
      mountPath: "/work-dir"<br />
  dnsPolicy: Default<br />
  volumes:<br />
  - name: workdir<br />
    emptyDir: {}<br />
EOF<br />
</pre><br />
<br />
The above Pod YAML will first run the init container, using the busybox image, which downloads the HTML of the example.com website and saves it to a file (<code>index.html</code>) on the Pod volume called "workdir". After the init container completes, the Nginx container starts and serves that <code>index.html</code> on port 80 (the file appears at <code>/usr/share/nginx/html/index.html</code> inside the Nginx container via the shared volume mount).<br />
<br />
* Now, create this Pod:<br />
$ kubectl create --validate -f init-demo.yml<br />
<br />
* Create a Service:<br />
<pre><br />
$ cat << EOF >example.yml<br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: example<br />
spec:<br />
  ports:<br />
  - port: 8000<br />
    targetPort: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: demo<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f example.yml<br />
<br />
* Check that we can get the header of <nowiki>https://example.com</nowiki>:<br />
$ curl -sI $(kubectl get svc/example -o jsonpath='{.spec.clusterIP}'):8000 | grep ^HTTP<br />
HTTP/1.1 200 OK<br />
<br />
==Deployments==<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Deployment]'' controller provides declarative updates for Pods and ReplicaSets.<br />
<br />
You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.<br />
<br />
; Creating a Deployment<br />
<br />
The following is an example of a Deployment. It creates a ReplicaSet to bring up three [https://hub.docker.com/_/nginx/ Nginx] Pods:<br />
<pre><br />
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment<br />
  labels:<br />
    app: nginx<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />
<br />
* Check the syntax of the Deployment (YAML):<br />
$ kubectl create -f nginx-deployment.yml --dry-run<br />
deployment.apps/nginx-deployment created (dry run)<br />
<br />
* Create the Deployment:<br />
$ kubectl create --record -f nginx-deployment.yml <br />
deployment "nginx-deployment" created<br />
Note: By appending <code>--record</code> to the above command, we are telling the API to record the current command in the annotations of the created or updated resource. This is useful for future review, such as investigating which commands were executed in each Deployment revision.<br />
<br />
* Get information about our Deployment:<br />
$ kubectl get deployments<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment 3 3 3 3 24s<br />
<br />
$ kubectl describe deployment/nginx-deployment<br />
<pre><br />
Name: nginx-deployment<br />
Namespace: default<br />
CreationTimestamp: Tue, 30 Jan 2018 23:28:43 +0000<br />
Labels: app=nginx<br />
Annotations: deployment.kubernetes.io/revision=1<br />
kubernetes.io/change-cause=kubectl create --record=true --filename=nginx-deployment.yml<br />
Selector: app=nginx<br />
Replicas: 3 desired | 3 updated | 3 total | 0 available | 3 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 25% max unavailable, 25% max surge<br />
Pod Template:<br />
Labels: app=nginx<br />
Containers:<br />
nginx:<br />
Image: nginx:1.7.9<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Conditions:<br />
Type Status Reason<br />
---- ------ ------<br />
Available False MinimumReplicasUnavailable<br />
Progressing True ReplicaSetUpdated<br />
OldReplicaSets: <none><br />
NewReplicaSet: nginx-deployment-6c54bd5869 (3/3 replicas created)<br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal ScalingReplicaSet 28s deployment-controller Scaled up replica set nginx-deployment-6c54bd5869 to 3<br />
</pre><br />
<br />
* Get information about the ReplicaSet created by the above Deployment:<br />
$ kubectl get rs<br />
NAME DESIRED CURRENT READY AGE<br />
nginx-deployment-6c54bd5869 3 3 3 3m<br />
<br />
$ kubectl describe rs/nginx-deployment-6c54bd5869<br />
<pre><br />
Name: nginx-deployment-6c54bd5869<br />
Namespace: default<br />
Selector: app=nginx,pod-template-hash=2710681425<br />
Labels: app=nginx<br />
pod-template-hash=2710681425<br />
Annotations: deployment.kubernetes.io/desired-replicas=3<br />
deployment.kubernetes.io/max-replicas=4<br />
deployment.kubernetes.io/revision=1<br />
kubernetes.io/change-cause=kubectl create --record=true --filename=nginx-deployment.yml<br />
Controlled By: Deployment/nginx-deployment<br />
Replicas: 3 current / 3 desired<br />
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed<br />
Pod Template:<br />
Labels: app=nginx<br />
pod-template-hash=2710681425<br />
Containers:<br />
nginx:<br />
Image: nginx:1.7.9<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-k9mh4<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-pphjt<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-n4fj5<br />
</pre><br />
<br />
* Get information about the Pods created by this Deployment:<br />
$ kubectl get pods --show-labels -l app=nginx -o wide<br />
NAME READY STATUS RESTARTS AGE IP NODE LABELS<br />
nginx-deployment-6c54bd5869-k9mh4 1/1 Running 0 5m 10.244.1.5 k8s.worker1.local app=nginx,pod-template-hash=2710681425<br />
nginx-deployment-6c54bd5869-n4fj5 1/1 Running 0 5m 10.244.1.6 k8s.worker2.local app=nginx,pod-template-hash=2710681425<br />
nginx-deployment-6c54bd5869-pphjt 1/1 Running 0 5m 10.244.1.7 k8s.worker3.local app=nginx,pod-template-hash=2710681425<br />
<br />
;Updating a Deployment<br />
<br />
Note: A Deployment's rollout is triggered if, and only if, the Deployment's pod template (that is, <code>.spec.template</code>) is changed (for example, if the labels or container images of the template are updated). Other updates, such as scaling the Deployment, do not trigger a rollout.<br />
<br />
Suppose that we want to update the Nginx Pods in the above Deployment to use the <code>nginx:1.9.1</code> image instead of the <code>nginx:1.7.9</code> image.<br />
<br />
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
deployment "nginx-deployment" image updated<br />
<br />
Alternatively, we can edit the Deployment and change <code>.spec.template.spec.containers[0].image</code> from <code>nginx:1.7.9</code> to <code>nginx:1.9.1</code>:<br />
<br />
$ kubectl edit deployment/nginx-deployment<br />
deployment "nginx-deployment" edited<br />
<br />
* Check on the rollout status:<br />
<pre><br />
$ kubectl rollout status deployment/nginx-deployment<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 old replicas are pending termination...<br />
Waiting for rollout to finish: 1 old replicas are pending termination...<br />
deployment "nginx-deployment" successfully rolled out<br />
</pre><br />
<br />
* Get information about the updated Deployment:<br />
$ kubectl get deploy<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment 3 3 3 3 18m<br />
<br />
$ kubectl get rs<br />
NAME DESIRED CURRENT READY AGE<br />
nginx-deployment-5964dfd755 3 3 3 1m # <- new ReplicaSet using nginx:1.9.1<br />
nginx-deployment-6c54bd5869 0 0 0 17m # <- old ReplicaSet using nginx:1.7.9<br />
<br />
$ kubectl rollout history deployment/nginx-deployment<br />
deployments "nginx-deployment"<br />
REVISION CHANGE-CAUSE<br />
1 kubectl create --record=true --filename=nginx-deployment.yml<br />
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
<br />
$ kubectl rollout history deployment/nginx-deployment --revision=2<br />
<br />
deployments "nginx-deployment" with revision #2<br />
Pod Template:<br />
Labels: app=nginx<br />
pod-template-hash=1520898311<br />
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
Containers:<br />
nginx:<br />
Image: nginx:1.9.1<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
<br />
; Rolling back to a previous revision<br />
<br />
Undo the current rollout and rollback to the previous revision:<br />
$ kubectl rollout undo deployment/nginx-deployment<br />
deployment "nginx-deployment" rolled back<br />
<br />
Alternatively, you can roll back to a specific revision by specifying it with <code>--to-revision</code>:<br />
$ kubectl rollout undo deployment/nginx-deployment --to-revision=1<br />
deployment "nginx-deployment" rolled back<br />
<br />
==Volume management==<br />
On-disk files in a container are ephemeral, which presents some problems for non-trivial applications when running in containers. First, when a container crashes, kubelet will restart it, but the files will be lost (i.e., the container starts with a clean state). Second, when running containers together in a Pod it is often necessary to share files between those containers. The Kubernetes ''[https://kubernetes.io/docs/concepts/storage/volumes/ Volumes]'' abstraction solves both of these problems. A Volume is essentially a directory backed by a storage medium. The storage medium and its content are determined by the Volume Type.<br />
<br />
In Kubernetes, a Volume is attached to a Pod and shared among the containers of that Pod. The Volume has the same life span as the Pod, and it outlives the containers of the Pod &mdash; this allows data to be preserved across container restarts.<br />
<br />
Kubernetes resolves the problem of persistent storage with the Persistent Volume subsystem, which provides APIs for users and administrators to manage and consume storage. To manage the Volume, it uses the PersistentVolume (PV) API resource type, and to consume it, it uses the PersistentVolumeClaim (PVC) API resource type.<br />
<br />
; [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes PersistentVolume] (PV) : a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.<br />
<br />
; [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims PersistentVolumeClaim] (PVC) : a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Persistent Volume Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).<br />
<br />
A Persistent Volume is a network-attached storage in the cluster, which is provisioned by the administrator.<br />
<br />
Persistent Volumes can be provisioned statically by the administrator, or dynamically, based on the StorageClass resource. A StorageClass contains pre-defined provisioners and parameters to create a Persistent Volume.<br />
<br />
A PersistentVolumeClaim (PVC) is a request for storage by a user. Users request Persistent Volume resources based on size, access modes, etc. Once a suitable Persistent Volume is found, it is bound to a Persistent Volume Claim. After a successful bind, the Persistent Volume Claim resource can be used in a Pod. Once a user finishes their work, the attached Persistent Volumes can be released. The underlying Persistent Volumes can then be reclaimed and recycled for future usage. See [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims Persistent Volumes] for details.<br />
<br />
;Access Modes<br />
* Each of the following access modes ''must'' be supported by the storage resource provider (e.g., NFS, AWS EBS, etc.) in order to be used.<br />
* ReadWriteOnce (RWO) &mdash; volume can be mounted as read/write by one node only.<br />
* ReadOnlyMany (ROX) &mdash; volume can be mounted read-only by many nodes.<br />
* ReadWriteMany (RWX) &mdash; volume can be mounted read/write by many nodes.<br />
A volume can only be mounted using one access mode at a time, regardless of the modes that are supported.<br />
<br />
; Example #1 - Using Host Volumes<br />
As an example of how to use volumes, we can modify our previous "webserver" Deployment (see above) to look like the following:<br />
<br />
$ cat webserver.yml<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: webserver<br />
spec:<br />
  replicas: 3<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: webserver<br />
    spec:<br />
      containers:<br />
      - name: webserver<br />
        image: nginx:alpine<br />
        ports:<br />
        - containerPort: 80<br />
        volumeMounts:<br />
        - name: hostvol<br />
          mountPath: /usr/share/nginx/html<br />
      volumes:<br />
      - name: hostvol<br />
        hostPath:<br />
          path: /home/docker/vol<br />
</pre><br />
<br />
And use the same Service:<br />
$ cat webserver-svc.yml<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: web-service<br />
  labels:<br />
    run: web-service<br />
spec:<br />
  type: NodePort<br />
  ports:<br />
  - port: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: webserver<br />
</pre><br />
<br />
Then create the deployment and service:<br />
$ kubectl create -f webserver.yml<br />
$ kubectl create -f webserver-svc.yml<br />
<br />
Then, SSH into the Minikube VM and create the host directory and test file:<br />
$ minikube ssh<br />
minikube> mkdir -p /home/docker/vol<br />
minikube> echo "Christoph testing" > /home/docker/vol/index.html<br />
minikube> exit<br />
<br />
Get the webserver IP and port:<br />
$ minikube ip<br />
192.168.99.100<br />
$ kubectl get svc/web-service -o json | jq '.spec.ports[].nodePort'<br />
32610<br />
# OR<br />
$ minikube service web-service --url<br />
<nowiki>http://192.168.99.100:32610</nowiki><br />
<br />
$ curl <nowiki>http://192.168.99.100:32610</nowiki><br />
Christoph testing<br />
<br />
; Example #2 - Using NFS<br />
<br />
* First, set up a host to act as your NFS server (e.g., by installing <code>nfs-kernel-server</code> with <code>sudo apt-get install -y nfs-kernel-server</code>).<br />
* On your NFS server, do the following:<br />
$ mkdir -p /var/nfs/general<br />
$ cat << EOF >>/etc/exports<br />
/var/nfs/general 10.100.1.2(rw,sync,no_subtree_check) 10.100.1.3(rw,sync,no_subtree_check) 10.100.1.4(rw,sync,no_subtree_check)<br />
EOF<br />
where the <code>10.x</code> IPs are the private IPs of your k8s nodes (both Master and Worker nodes).<br />
* Make sure to install <code>nfs-common</code> on each of the k8s nodes that will be connecting to the NFS server.<br />
<br />
Now, on the k8s Master node, create a Persistent Volume (PV) and Persistent Volume Claim (PVC):<br />
<br />
* Create a Persistent Volume (PV):<br />
$ cat << EOF >pv.yml<br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
  name: mypv<br />
spec:<br />
  capacity:<br />
    storage: 1Gi<br />
  volumeMode: Filesystem<br />
  accessModes:<br />
  - ReadWriteMany<br />
  persistentVolumeReclaimPolicy: Recycle<br />
  nfs:<br />
    path: /var/nfs/general<br />
    server: 10.100.1.10 # NFS Server's private IP<br />
    readOnly: false<br />
EOF<br />
$ kubectl create --validate -f pv.yml<br />
$ kubectl get pv<br />
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE<br />
mypv 1Gi RWX Recycle Available<br />
* Create a Persistent Volume Claim (PVC):<br />
$ cat << EOF >pvc.yml<br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
  name: nfs-pvc<br />
spec:<br />
  accessModes:<br />
  - ReadWriteMany<br />
  resources:<br />
    requests:<br />
      storage: 1Gi<br />
EOF<br />
$ kubectl create --validate -f pvc.yml<br />
$ kubectl get pvc<br />
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE<br />
nfs-pvc Bound mypv 1Gi RWX<br />
$ kubectl get pv<br />
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE<br />
mypv 1Gi RWX Recycle Bound default/nfs-pvc 11m<br />
<br />
* Create a Pod:<br />
$ cat << EOF >nfs-pod.yml <br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nfs-pod<br />
  labels:<br />
    name: nfs-pod<br />
spec:<br />
  containers:<br />
  - name: nfs-ctn<br />
    image: busybox<br />
    command:<br />
    - sleep<br />
    - "3600"<br />
    volumeMounts:<br />
    - name: nfsvol<br />
      mountPath: /tmp<br />
  restartPolicy: Always<br />
  securityContext:<br />
    fsGroup: 65534<br />
    runAsUser: 65534<br />
  volumes:<br />
  - name: nfsvol<br />
    persistentVolumeClaim:<br />
      claimName: nfs-pvc<br />
EOF<br />
$ kubectl create --validate -f nfs-pod.yml<br />
$ kubectl get pods -o wide<br />
NAME READY STATUS RESTARTS AGE IP NODE<br />
busybox 1/1 Running 9 2d 10.244.2.22 k8s.worker01.local<br />
<br />
* Get a shell from the <code>nfs-pod</code> Pod:<br />
$ kubectl exec -it nfs-pod -- sh<br />
/ $ df -h<br />
Filesystem Size Used Available Use% Mounted on<br />
172.31.119.58:/var/nfs/general<br />
19.3G 1.8G 17.5G 9% /tmp<br />
...<br />
/ $ touch /tmp/this-is-from-the-pod<br />
<br />
* On the NFS server:<br />
$ ls -l /var/nfs/general/<br />
total 0<br />
-rw-r--r-- 1 nobody nogroup 0 Jan 18 23:32 this-is-from-the-pod<br />
<br />
It works!<br />
<br />
==ConfigMaps and Secrets==<br />
While deploying an application, we may need to pass runtime parameters such as configuration details, passwords, etc. For example, let's assume we need to deploy ten different applications for our customers, and, for each customer, we just need to change the name of the company in the UI. Instead of creating ten different Docker images, one per customer, we can just use the template image and pass the customer's name as a runtime parameter. In such cases, we can use the ConfigMap API resource. Similarly, when we want to pass sensitive information, we can use the Secret API resource. Think ''Secrets'' (for confidential data) and ''ConfigMaps'' (for non-confidential data).<br />
<br />
[https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/ ConfigMaps] allow you to decouple configuration artifacts from image content to keep containerized applications portable. Using ConfigMaps, we can pass configuration details as key-value pairs, which can be later consumed by Pods or any other system components, such as controllers. We can create ConfigMaps in two ways:<br />
<br />
* From literal values; and<br />
* From files.<br />
<br />
;ConfigMaps<br />
<br />
* Create a ConfigMap:<br />
$ kubectl create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2<br />
configmap "my-config" created<br />
$ kubectl get configmaps my-config -o yaml<br />
<pre><br />
apiVersion: v1<br />
data:<br />
  key1: value1<br />
  key2: value2<br />
kind: ConfigMap<br />
metadata:<br />
  creationTimestamp: 2018-01-11T23:57:44Z<br />
  name: my-config<br />
  namespace: default<br />
  resourceVersion: "117110"<br />
  selfLink: /api/v1/namespaces/default/configmaps/my-config<br />
  uid: 37a43e39-f72b-11e7-8370-08002721601f<br />
</pre><br />
$ kubectl describe configmap/my-config<br />
<pre><br />
Name: my-config<br />
Namespace: default<br />
Labels: <none><br />
Annotations: <none><br />
<br />
Data<br />
====<br />
key2:<br />
----<br />
value2<br />
key1:<br />
----<br />
value1<br />
Events: <none><br />
</pre><br />
<br />
; Create a ConfigMap from a configuration file<br />
<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: ConfigMap<br />
metadata:<br />
  name: customer1<br />
data:<br />
  TEXT1: Customer1_Company<br />
  TEXT2: Welcomes You<br />
  COMPANY: Customer1 Company Technology, LLC.<br />
EOF<br />
</pre><br />
<br />
We can get the values of the given key as environment variables inside a Pod. In the following example, while creating the Deployment, we are assigning values for environment variables from the customer1 ConfigMap:<br />
<pre><br />
....<br />
containers:<br />
- name: my-app<br />
  image: foobar<br />
  env:<br />
  - name: MONGODB_HOST<br />
    value: mongodb<br />
  - name: TEXT1<br />
    valueFrom:<br />
      configMapKeyRef:<br />
        name: customer1<br />
        key: TEXT1<br />
  - name: TEXT2<br />
    valueFrom:<br />
      configMapKeyRef:<br />
        name: customer1<br />
        key: TEXT2<br />
  - name: COMPANY<br />
    valueFrom:<br />
      configMapKeyRef:<br />
        name: customer1<br />
        key: COMPANY<br />
....<br />
</pre><br />
With the above, we will get the <code>TEXT1</code> environment variable set to <code>Customer1_Company</code>, <code>TEXT2</code> environment variable set to <code>Welcomes You</code>, and so on.<br />
<br />
We can also mount a ConfigMap as a Volume inside a Pod. For each key, we will see a file in the mount path and the content of that file become the respective key's value. For details, see [https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#adding-configmap-data-to-a-volume here].<br />
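<br />
As a minimal sketch, mounting the <code>customer1</code> ConfigMap from above as a Volume would look something like the following (the mount path <code>/etc/config</code> is arbitrary); each key (<code>TEXT1</code>, <code>TEXT2</code>, <code>COMPANY</code>) then appears as a file whose content is the key's value:<br />
<pre><br />
....<br />
containers:<br />
- name: my-app<br />
  image: foobar<br />
  volumeMounts:<br />
  - name: config-vol<br />
    mountPath: /etc/config<br />
volumes:<br />
- name: config-vol<br />
  configMap:<br />
    name: customer1<br />
....<br />
</pre><br />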
<br />
You can also use ConfigMaps to configure your cluster to use, as an example, 8.8.8.8 and 8.8.4.4 as its upstream DNS server:<br />
<pre><br />
kind: ConfigMap<br />
apiVersion: v1<br />
metadata:<br />
  name: kube-dns<br />
  namespace: kube-system<br />
data:<br />
  upstreamNameservers: |<br />
    ["8.8.8.8", "8.8.4.4"]<br />
</pre><br />
<br />
; Secrets<br />
<br />
Objects of type [https://kubernetes.io/docs/concepts/configuration/secret/ Secret] are intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a Secret is safer and more flexible than putting it verbatim in a pod definition or in a docker image.<br />
<br />
As an example, assume that we have a Wordpress blog application, in which our <code>wordpress</code> frontend connects to the [[MySQL]] database backend using a password. While creating the Deployment for <code>wordpress</code>, we can put the MySQL password in the Deployment's YAML file, but the password would not be protected. The password would be available to anyone who has access to the configuration file.<br />
<br />
In situations such as the one we just mentioned, the Secret object can help. With Secrets, we can share sensitive information like passwords, tokens, or keys in the form of key-value pairs, similar to ConfigMaps; thus, we can control how the information in a Secret is used, reducing the risk for accidental exposures. In Deployments or other system components, the Secret object is ''referenced'', without exposing its content.<br />
<br />
It is important to keep in mind that the Secret data is stored as plain text inside etcd. Administrators must limit the access to the API Server and etcd.<br />
<br />
To create a Secret using the <code>kubectl create secret</code> command, we need to first create a file with a password, and then pass it as an argument.<br />
<br />
* Create a file with your MySQL password:<br />
$ echo mysqlpasswd | tr -d '\n' > password.txt<br />
<br />
* Create the ''Secret'':<br />
$ kubectl create secret generic mysql-passwd --from-file=password.txt<br />
$ kubectl describe secret/mysql-passwd<br />
<pre><br />
Name: mysql-passwd<br />
Namespace: default<br />
Labels: <none><br />
Annotations: <none><br />
<br />
Type: Opaque<br />
<br />
Data<br />
====<br />
password.txt: 11 bytes<br />
</pre><br />
<br />
We can also create a Secret manually, using the YAML configuration file. With Secrets, each object data must be encoded using base64. If we want to have a configuration file for our Secret, we must first get the base64 encoding for our password:<br />
<br />
$ cat password.txt | base64<br />
bXlzcWxwYXNzd2Q=<br />
<br />
and then use it in the configuration file:<br />
<pre><br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
  name: mysql-passwd<br />
type: Opaque<br />
data:<br />
  password: bXlzcWxwYXNzd2Q=<br />
</pre><br />
Note that base64 encoding does not do any encryption and anyone can easily decode it:<br />
<br />
$ echo "bXlzcWxwYXNzd2Q=" | base64 -d # => mysqlpasswd<br />
<br />
Therefore, make sure you do not commit a Secret's configuration file in the source code.<br />
<br />
We can get Secrets to be used by containers in a Pod by mounting them as data volumes, or by exposing them as environment variables.<br />
<br />
We can reference a Secret and assign the value of its key as an environment variable (<code>WORDPRESS_DB_PASSWORD</code>):<br />
<pre><br />
.....<br />
spec:<br />
  containers:<br />
  - image: wordpress:4.7.3-apache<br />
    name: wordpress<br />
    env:<br />
    - name: WORDPRESS_DB_HOST<br />
      value: wordpress-mysql<br />
    - name: WORDPRESS_DB_PASSWORD<br />
      valueFrom:<br />
        secretKeyRef:<br />
          name: mysql-passwd<br />
          key: password.txt<br />
.....<br />
</pre><br />
<br />
Or, we can also mount a Secret as a Volume inside a Pod. A file would be created for each key mentioned in the Secret, whose content would be the respective value. See [https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod here] for details.<br />
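<br />
As a minimal sketch, mounting the <code>mysql-passwd</code> Secret from above as a Volume (the mount path is arbitrary); the container can then read the password from <code>/etc/secret-volume/password.txt</code>:<br />
<pre><br />
.....<br />
spec:<br />
  containers:<br />
  - image: wordpress:4.7.3-apache<br />
    name: wordpress<br />
    volumeMounts:<br />
    - name: secret-volume<br />
      mountPath: /etc/secret-volume<br />
      readOnly: true<br />
  volumes:<br />
  - name: secret-volume<br />
    secret:<br />
      secretName: mysql-passwd<br />
.....<br />
</pre><br />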
<br />
==Ingress==<br />
Among the ServiceTypes mentioned earlier, NodePort and LoadBalancer are the most often used. For the LoadBalancer ServiceType, we need support from the underlying infrastructure. Even with that support, we may not want to use it for every Service, as LoadBalancer resources are limited and can increase costs significantly. Managing the NodePort ServiceType can also be tricky at times, as we need to keep updating our proxy settings and keep track of the assigned ports. In this section, we will explore the Ingress API object, which is another method we can use to access our applications from the external world.<br />
<br />
An ''[https://kubernetes.io/docs/concepts/services-networking/ingress/ Ingress]'' is a collection of rules that allow inbound connections to reach the cluster Services. With Services, routing rules are attached to a given Service. They exist for as long as the Service exists. If we can somehow decouple the routing rules from the application, we can then update our application without worrying about its external access. This can be done using the Ingress resource. Ingress can provide load balancing, SSL/TLS termination, and name-based virtual hosting and/or routing.<br />
<br />
To allow the inbound connection to reach the cluster Services, Ingress configures a Layer 7 HTTP load balancer for Services and provides the following:<br />
<br />
* TLS (Transport Layer Security)<br />
* Name-based virtual hosting <br />
* Path-based routing<br />
* Custom rules.<br />
<br />
With Ingress, users do not connect directly to a Service. Users reach the Ingress endpoint, and, from there, the request is forwarded to the respective Service. You can see an example Ingress definition below:<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
  name: web-ingress<br />
spec:<br />
  rules:<br />
  - host: blue.example.com<br />
    http:<br />
      paths:<br />
      - backend:<br />
          serviceName: blue-service<br />
          servicePort: 80<br />
  - host: green.example.com<br />
    http:<br />
      paths:<br />
      - backend:<br />
          serviceName: green-service<br />
          servicePort: 80<br />
</pre><br />
<br />
According to the example just provided, user requests to both <code>blue.example.com</code> and <code>green.example.com</code> would go to the same Ingress endpoint, and, from there, they would be forwarded to <code>blue-service</code> and <code>green-service</code>, respectively. This is an example of a Name-Based Virtual Hosting Ingress rule.<br />
<br />
We can also have Fan Out Ingress rules, in which we send requests like <code>example.com/blue</code> and <code>example.com/green</code>, which would be forwarded to <code>blue-service</code> and <code>green-service</code>, respectively.<br />
<br />
To secure an Ingress, you must create a ''Secret''. The TLS secret must contain keys named <code>tls.crt</code> and <code>tls.key</code>, which contain the certificate and private key to use for TLS.<br />
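<br />
A minimal sketch, assuming the certificate and key already exist as files <code>tls.crt</code> and <code>tls.key</code>; the Secret is then referenced from the Ingress under <code>spec.tls</code>:<br />
$ kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key<br />
<pre><br />
spec:<br />
  tls:<br />
  - hosts:<br />
    - blue.example.com<br />
    secretName: tls-secret<br />
</pre><br />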
<br />
The Ingress resource does not do any request forwarding by itself. All of the magic is done using the ''Ingress Controller''.<br />
<br />
; Ingress Controller<br />
<br />
An Ingress Controller is an application which watches the Master Node's API Server for changes in the Ingress resources and updates the Layer 7 load balancer accordingly. Kubernetes has different Ingress Controllers, and, if needed, we can also build our own. GCE L7 Load Balancer and Nginx Ingress Controller are examples of Ingress Controllers.<br />
<br />
Minikube v0.14.0 and above ships the Nginx Ingress Controller setup as an add-on. It can be easily enabled by running the following command:<br />
<br />
$ minikube addons enable ingress<br />
<br />
Once the Ingress Controller is deployed, we can create an Ingress resource using the <code>kubectl create</code> command. For example, if we create an <code>example-ingress.yml</code> file with the content above, then, we can use the following command to create an Ingress resource:<br />
<br />
$ kubectl create -f example-ingress.yml<br />
<br />
With the Ingress resource we just created, we should now be able to access the <code>blue-service</code> and <code>green-service</code> services using the blue.example.com and green.example.com URLs. As our current setup is on Minikube, we will need to add entries to the hosts file on our workstation, mapping those URLs to Minikube's IP:<br />
<br />
$ cat /etc/hosts<br />
127.0.0.1 localhost<br />
::1 localhost<br />
192.168.99.100 blue.example.com green.example.com <br />
<br />
Once this is done, we can now open blue.example.com and green.example.com in a browser and access the application.<br />
<br />
==Labels and Selectors==<br />
''[https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ Labels]'' are key-value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key-value labels defined. Each key must be unique for a given object.<br />
<pre><br />
"labels": {<br />
"key1" : "value1",<br />
"key2" : "value2"<br />
}<br />
</pre><br />
<br />
;Syntax and character set<br />
<br />
Labels are key-value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (<code>/</code>). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (<code>[a-z0-9A-Z]</code>) with dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (<code>.</code>), not longer than 253 characters in total, followed by a slash (<code>/</code>). If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (e.g. kube-scheduler, kube-controller-manager, kube-apiserver, kubectl, or other third-party automation) which add labels to end-user objects must specify a prefix. The <code>kubernetes.io/</code> prefix is reserved for Kubernetes core components.<br />
<br />
Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (<code>[a-z0-9A-Z]</code>) with dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between.<br />
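<br />
For example (the <code>example.com/team</code> key is a hypothetical prefixed label; <code>environment</code> is an unprefixed, user-private one):<br />
<pre><br />
metadata:<br />
  labels:<br />
    environment: production<br />
    example.com/team: payments<br />
</pre><br />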
<br />
;Label selectors<br />
<br />
Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).<br />
<br />
Via a label selector, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.<br />
<br />
The API currently supports two types of selectors: equality-based and set-based. A label selector can be made of multiple requirements which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical AND (<code>&&</code>) operator.<br />
<br />
An empty label selector (that is, one with zero requirements) selects every object in the collection.<br />
<br />
A null label selector (which is only possible for optional selector fields) selects no objects.<br />
<br />
Note: the label selectors of two controllers must not overlap within a namespace, otherwise they will fight over the same Pods.<br />
Note that labels are not restricted to Pods. You can apply them to all sorts of objects, such as Nodes or Services.<br />
<br />
;Examples<br />
<br />
* Label a given node:<br />
$ kubectl label node k8s.worker1.local network=gigabit<br />
<br />
* With ''Equality-based'', one may write:<br />
$ kubectl get pods -l environment=production,tier=frontend<br />
<br />
* Using ''set-based'' requirements:<br />
$ kubectl get pods -l 'environment in (production),tier in (frontend)'<br />
<br />
* Implement the OR operator on values:<br />
$ kubectl get pods -l 'environment in (production, qa)'<br />
<br />
* Restricting negative matching via exists operator:<br />
$ kubectl get pods -l 'environment,environment notin (frontend)'<br />
<br />
* Show the current labels on your pods:<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d <none><br />
nfs-pod 1/1 Running 16 6d name=nfs-pod<br />
<br />
* Add a label to an already running/existing pod:<br />
$ kubectl label pods busybox owner=christoph<br />
pod "busybox" labeled<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d owner=christoph<br />
nfs-pod 1/1 Running 16 6d name=nfs-pod<br />
<br />
* Select a pod by its label:<br />
$ kubectl get pods --selector owner=christoph<br />
#~OR~<br />
$ kubectl get pods -l owner=christoph<br />
NAME READY STATUS RESTARTS AGE<br />
busybox 1/1 Running 25 9d<br />
<br />
* Delete/remove a given label from a given pod:<br />
$ kubectl label pod busybox owner-<br />
pod "busybox" labeled<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d <none><br />
<br />
* Get all pods that belong to either the <code>production</code> ''or'' the <code>development</code> environment (set-based <code>in</code> matches any of the listed values):<br />
$ kubectl get pods -l 'env in (production, development)'<br />
<br />
; Using Labels to select a Node on which to schedule a Pod:<br />
<br />
* Label a Node that uses an SSD as its primary disk:<br />
$ kubectl label node k8s.worker1.local hdd=ssd<br />
<br />
<pre><br />
$ cat << EOF >busybox.yml<br />
kind: Pod<br />
apiVersion: v1<br />
metadata:<br />
  name: busybox<br />
  namespace: default<br />
spec:<br />
  containers:<br />
  - name: busybox<br />
    image: busybox<br />
    command:<br />
    - sleep<br />
    - "300"<br />
    imagePullPolicy: IfNotPresent<br />
  restartPolicy: Always<br />
  nodeSelector:<br />
    hdd: ssd<br />
EOF<br />
</pre><br />
<br />
==Annotations==<br />
With ''[https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ Annotations]'', we can attach arbitrary, non-identifying metadata to objects, in a key-value format:<br />
<br />
<pre><br />
"annotations": {<br />
  "key1" : "value1",<br />
  "key2" : "value2"<br />
}<br />
</pre><br />
The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels.<br />
<br />
In contrast to Labels, annotations are not used to identify and select objects. Annotations can be used to:<br />
<br />
* Store build/release IDs, which git branch a build came from, etc.<br />
* Record phone numbers of persons responsible, or directory entries specifying where such information can be found<br />
* Point to logging, monitoring, analytics, audit repositories, debugging tools, etc.<br />
<br />
For example, while creating a Deployment, we can add a description like the one below:<br />
<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: webserver<br />
  annotations:<br />
    description: Deployment based PoC dates 12 January 2018<br />
....<br />
....<br />
</pre><br />
<br />
We can look at annotations while describing an object:<br />
<br />
<pre><br />
$ kubectl describe deployment webserver<br />
Name: webserver<br />
Namespace: default<br />
CreationTimestamp: Fri, 12 Jan 2018 13:18:23 -0800<br />
Labels: app=webserver<br />
Annotations: deployment.kubernetes.io/revision=1<br />
description=Deployment based PoC dates 12 January 2018<br />
...<br />
...<br />
</pre><br />
<br />
==Jobs and CronJobs==<br />
<br />
===Jobs===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#what-is-a-job Job]'' creates one or more pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the Job itself is complete. Deleting a Job will cleanup the pods it created.<br />
<br />
A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).<br />
<br />
A Job can also be used to run multiple Pods in parallel.<br />
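<br />
Parallel runs are controlled with the <code>completions</code> and <code>parallelism</code> fields (a sketch; the name, image, and counts are illustrative: 6 completions, with at most 2 Pods running at a time):<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
  name: parallel-demo<br />
spec:<br />
  completions: 6<br />
  parallelism: 2<br />
  template:<br />
    spec:<br />
      containers:<br />
      - name: worker<br />
        image: busybox<br />
        command: ["sh", "-c", "echo processing an item; sleep 5"]<br />
      restartPolicy: Never<br />
</pre><br />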
<br />
; Example<br />
<br />
* Below is an example ''Job'' config. It computes π to 2000 places and prints it out. It takes around 10 seconds to complete.<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
  name: pi<br />
spec:<br />
  template:<br />
    spec:<br />
      containers:<br />
      - name: pi<br />
        image: perl<br />
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]<br />
      restartPolicy: Never<br />
  backoffLimit: 4<br />
</pre><br />
$ kubectl create -f ./job-pi.yml<br />
job "pi" created<br />
$ kubectl describe jobs/pi<br />
<pre><br />
Name: pi<br />
Namespace: default<br />
Selector: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
Labels: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
job-name=pi<br />
Annotations: <none><br />
Parallelism: 1<br />
Completions: 1<br />
Start Time: Fri, 12 Jan 2018 13:25:23 -0800<br />
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed<br />
Pod Template:<br />
Labels: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
job-name=pi<br />
Containers:<br />
pi:<br />
Image: perl<br />
Port: <none><br />
Command:<br />
perl<br />
-Mbignum=bpi<br />
-wle<br />
print bpi(2000)<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal SuccessfulCreate 8s job-controller Created pod: pi-rfvvw<br />
</pre><br />
<br />
* Get the result of the Job run (i.e., the value of π):<br />
$ pods=$(kubectl get pods --show-all --selector=job-name=pi --output=jsonpath={.items..metadata.name})<br />
$ echo $pods<br />
pi-rfvvw<br />
$ kubectl logs ${pods}<br />
3.1415926535897932384626433832795028841971693...<br />
<br />
===CronJobs===<br />
<br />
Support for creating ''Jobs'' at specified times/dates (i.e. cron) is available in Kubernetes 1.4. See [https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/ here] for details.<br />
<br />
Below is an example ''CronJob''. Every minute, it runs a simple Job that prints the current time and then echoes a "hello" string:<br />
$ cat << EOF >cronjob.yml<br />
apiVersion: batch/v1beta1<br />
kind: CronJob<br />
metadata:<br />
  name: hello<br />
spec:<br />
  schedule: "*/1 * * * *"<br />
  jobTemplate:<br />
    spec:<br />
      template:<br />
        spec:<br />
          containers:<br />
          - name: hello<br />
            image: busybox<br />
            args:<br />
            - /bin/sh<br />
            - -c<br />
            - date; echo Hello from the Kubernetes cluster<br />
          restartPolicy: OnFailure<br />
EOF<br />
<br />
$ kubectl create -f cronjob.yml<br />
cronjob "hello" created<br />
<br />
$ kubectl get cronjob hello<br />
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE<br />
hello */1 * * * * False 0 <none> 11s<br />
<br />
$ kubectl get jobs --watch<br />
NAME DESIRED SUCCESSFUL AGE<br />
hello-1515793140 1 1 7s<br />
<br />
$ kubectl get cronjob hello<br />
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE<br />
hello */1 * * * * False 0 22s 48s<br />
<br />
$ pods=$(kubectl get pods -a --selector=job-name=hello-1515793140 --output=jsonpath={.items..metadata.name})<br />
$ echo $pods<br />
hello-1515793140-plp8g<br />
<br />
$ kubectl logs $pods<br />
Fri Jan 12 21:39:07 UTC 2018<br />
Hello from the Kubernetes cluster<br />
<br />
* Cleanup<br />
$ kubectl delete cronjob hello<br />
<br />
==Quota Management==<br />
When there are many users sharing a given Kubernetes cluster, there is always a concern for fair usage. To address this concern, administrators can use the ''[https://kubernetes.io/docs/concepts/policy/resource-quotas/ ResourceQuota]'' object, which provides constraints that limit aggregate resource consumption per Namespace.<br />
<br />
We can have the following types of quotas per Namespace (an example follows the list):<br />
<br />
* Compute Resource Quota: We can limit the total sum of compute resources (CPU, memory, etc.) that can be requested in a given Namespace.<br />
* Storage Resource Quota: We can limit the total sum of storage resources (PersistentVolumeClaims, requests.storage, etc.) that can be requested.<br />
* Object Count Quota: We can restrict the number of objects of a given type (pods, ConfigMaps, PersistentVolumeClaims, ReplicationControllers, Services, Secrets, etc.).<br />
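<br />
For example, a Compute Resource Quota might look like the following (a sketch; the name, namespace, and limits are illustrative):<br />
<pre><br />
apiVersion: v1<br />
kind: ResourceQuota<br />
metadata:<br />
  name: compute-quota<br />
  namespace: dev<br />
spec:<br />
  hard:<br />
    pods: "10"<br />
    requests.cpu: "4"<br />
    requests.memory: 8Gi<br />
    limits.cpu: "8"<br />
    limits.memory: 16Gi<br />
</pre><br />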
<br />
==Daemon Sets==<br />
In some cases, like collecting monitoring data from all nodes, or running a storage daemon on all nodes, etc., we need a specific type of Pod running on all nodes at all times. A ''[https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ DaemonSet]'' is the object that allows us to do just that. <br />
<br />
Whenever a node is added to the cluster, a Pod from a given DaemonSet is created on it. When the node dies, the respective Pods are garbage collected. If a DaemonSet is deleted, all Pods it created are deleted as well.<br />
<br />
Example DaemonSet:<br />
<pre><br />
kind: DaemonSet<br />
apiVersion: apps/v1<br />
metadata:<br />
  name: pause-ds<br />
spec:<br />
  selector:<br />
    matchLabels:<br />
      quiet: "pod"<br />
  template:<br />
    metadata:<br />
      labels:<br />
        quiet: pod<br />
    spec:<br />
      tolerations:<br />
      - key: node-role.kubernetes.io/master<br />
        effect: NoSchedule<br />
      containers:<br />
      - name: pause-container<br />
        image: k8s.gcr.io/pause:2.0<br />
</pre><br />
<br />
==Stateful Sets==<br />
The ''[https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/ StatefulSet]'' controller is used for applications which require a unique, stable identity (e.g., a stable name, a stable network identity, strict ordering, etc.), such as a MySQL cluster or an etcd cluster.<br />
<br />
The StatefulSet controller provides identity and guaranteed ordering of deployment and scaling to Pods.<br />
<br />
Note: Before Kubernetes 1.5, the StatefulSet controller was referred to as ''PetSet''.<br />
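<br />
A minimal StatefulSet sketch (the names are illustrative; a StatefulSet requires a governing, usually headless, Service, assumed here to be <code>web-svc</code>, for the stable network identities):<br />
<pre><br />
apiVersion: apps/v1<br />
kind: StatefulSet<br />
metadata:<br />
  name: web<br />
spec:<br />
  serviceName: web-svc<br />
  replicas: 3<br />
  selector:<br />
    matchLabels:<br />
      app: web<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: web<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.9.1<br />
</pre><br />
The Pods are created in order and get stable, predictable names (<code>web-0</code>, <code>web-1</code>, <code>web-2</code>).<br />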
<br />
==Role Based Access Control (RBAC)==<br />
''[https://kubernetes.io/docs/admin/authorization/rbac/ Role-based access control]'' (RBAC) is an authorization mechanism for managing permissions around Kubernetes resources.<br />
<br />
Using the RBAC API, we define a role which contains a set of additive permissions. Within a Namespace, a role is defined using the Role object. For a cluster-wide role, we need to use the ClusterRole object.<br />
<br />
Once the roles are defined, we can bind them to a user or a set of users using ''RoleBinding'' and ''ClusterRoleBinding''.<br />
<br />
===Using RBAC with minikube===<br />
<br />
* Start up minikube with RBAC support:<br />
$ minikube start --kubernetes-version=v1.9.0 --extra-config=apiserver.Authorization.Mode=RBAC<br />
<br />
* Setup RBAC:<br />
<pre><br />
$ cat rbac-cluster-role-binding.yml<br />
# kubectl create clusterrolebinding add-on-cluster-admin \<br />
#   --clusterrole=cluster-admin --serviceaccount=kube-system:default<br />
#<br />
kind: ClusterRoleBinding<br />
apiVersion: rbac.authorization.k8s.io/v1alpha1<br />
metadata:<br />
  name: kube-system-sa<br />
subjects:<br />
- kind: Group<br />
  name: system:serviceaccounts:kube-system<br />
roleRef:<br />
  kind: ClusterRole<br />
  name: cluster-admin<br />
  apiGroup: rbac.authorization.k8s.io<br />
</pre><br />
<br />
<pre><br />
$ cat rbac-setup.yml<br />
apiVersion: v1<br />
kind: Namespace<br />
metadata:<br />
  name: rbac<br />
<br />
---<br />
apiVersion: v1<br />
kind: ServiceAccount<br />
metadata:<br />
  name: viewer<br />
  namespace: rbac<br />
<br />
---<br />
apiVersion: v1<br />
kind: ServiceAccount<br />
metadata:<br />
  name: admin<br />
  namespace: rbac<br />
</pre><br />
<br />
* Create a Role Binding:<br />
<pre><br />
# kubectl create rolebinding reader-binding \<br />
#   --role=reader \<br />
#   --serviceaccount=rbac:reader \<br />
#   --namespace=rbac<br />
#<br />
kind: RoleBinding<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  namespace: rbac<br />
  name: reader-binding<br />
roleRef:<br />
  apiGroup: rbac.authorization.k8s.io<br />
  kind: Role<br />
  name: reader<br />
subjects:<br />
- kind: ServiceAccount<br />
  name: reader<br />
  namespace: rbac<br />
</pre><br />
<br />
* Create a Role:<br />
<pre><br />
$ cat rbac-role.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  namespace: rbac<br />
  name: reader<br />
rules:<br />
- apiGroups: [""]<br />
  resources: ["*"]<br />
  verbs: ["get", "watch", "list"]<br />
</pre><br />
<br />
* Create an RBAC "core reader" Role that grants specific "verbs" (e.g., "get", "watch", "list") on specific resources (e.g., Pods, ConfigMaps, Secrets, Jobs, and Deployments):<br />
<pre><br />
$ cat rbac-role-core-reader.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  name: core-reader<br />
rules:<br />
- apiGroups:<br />
  - ""<br />
  resources:<br />
  - pods<br />
  - configmaps<br />
  - secrets<br />
  verbs:<br />
  - get<br />
  - watch<br />
  - list<br />
- apiGroups:<br />
  - batch<br />
  - extensions<br />
  resources:<br />
  - jobs<br />
  - deployments<br />
  verbs:<br />
  - get<br />
  - watch<br />
  - list<br />
</pre><br />
<br />
* "Gotchas":<br />
<pre><br />
$ cat rbac-gotcha-1.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: gotcha-1<br />
rules:<br />
- nonResourceURLs:<br />
- /healthz<br />
verbs:<br />
- get<br />
- post<br />
- apiGroups:<br />
- batch<br />
- extensions<br />
resources:<br />
- deployments<br />
verbs:<br />
- "*"<br />
</pre><br />
<pre><br />
$ cat rbac-gotcha-2.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  name: gotcha-2<br />
rules:<br />
- apiGroups:<br />
  - ""<br />
  resources:<br />
  - secrets<br />
  verbs:<br />
  - "*"<br />
  resourceNames:<br />
  - "my_secret"<br />
- apiGroups:<br />
  - ""<br />
  resources:<br />
  - pods/logs<br />
  verbs:<br />
  - "get"<br />
</pre><br />
: The gotchas here: <code>resourceNames</code> cannot restrict <code>create</code> requests (the object name is unknown at authorization time), <code>my_secret</code> is not even a valid Secret name (underscores are not allowed), and the logs subresource is <code>pods/log</code> (singular), so the second rule matches nothing.<br />
<br />
; Privilege escalation<br />
* You cannot create a Role or ClusterRole that grants permissions you do not have.<br />
* You cannot create a RoleBinding or ClusterRoleBinding that binds to a Role with permissions you do not have (unless you have been explicitly given "bind" permission on the role).<br />
<br />
* Grant explicit bind access:<br />
<pre><br />
kind: ClusterRole<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
  name: role-grantor<br />
rules:<br />
- apiGroups: ["rbac.authorization.k8s.io"]<br />
  resources: ["rolebindings"]<br />
  verbs: ["create"]<br />
- apiGroups: ["rbac.authorization.k8s.io"]<br />
  resources: ["clusterroles"]<br />
  verbs: ["bind"]<br />
  resourceNames: ["admin", "edit", "view"]<br />
</pre><br />
<br />
===Testing RBAC permissions===<br />
<br />
* Example of RBAC not allowing a verb-noun:<br />
<pre><br />
$ kubectl auth can-i create pods<br />
no - Required "container.pods.create" permission.<br />
</pre><br />
<br />
* Example of RBAC allowing a verb-noun:<br />
<pre><br />
$ kubectl auth can-i create pods<br />
yes<br />
</pre><br />
<br />
* A more complex example:<br />
<pre><br />
$ kubectl auth can-i update deployments.apps \<br />
--subresource="scale" --as-group="$group" --as="$user" -n $ns<br />
</pre><br />
<br />
==Federation==<br />
With the ''[https://kubernetes.io/docs/concepts/cluster-administration/federation/ Kubernetes Cluster Federation]'', we can manage multiple Kubernetes clusters from a single control plane, sync resources across the clusters, and have cross-cluster discovery. This allows us to make Deployments across regions and access them using a global DNS record.<br />
<br />
Federation is very useful when we want to build a hybrid solution, in which one cluster runs inside our private datacenter and another one on a public cloud. We can also assign weights to each cluster in the Federation, to distribute the load as we choose.<br />
<br />
==Helm==<br />
To deploy an application, we use different Kubernetes manifests, such as Deployments, Services, Volume Claims, Ingress, etc. Sometimes, it can be tiresome to deploy them one by one. We can bundle all of those manifests, after templatizing them into a well-defined format, along with other metadata. Such a bundle is referred to as a ''Chart''. These Charts can then be served via repositories, such as those that we have for rpm and deb packages. <br />
<br />
''[https://github.com/kubernetes/helm Helm]'' is a package manager (analogous to yum and apt) for Kubernetes, which can install/update/delete those Charts in the Kubernetes cluster.<br />
<br />
Helm has two components:<br />
<br />
* A client called helm, which runs on your workstation; and<br />
* A server called tiller, which runs inside your Kubernetes cluster.<br />
<br />
The client helm connects to the server tiller to manage Charts. Charts submitted for Kubernetes are available [https://github.com/kubernetes/charts here].<br />
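<br />
Typical Helm v2 usage looks like the following (a sketch; the <code>stable/mysql</code> chart and the <code>my-db</code> release name are illustrative):<br />
<pre><br />
$ helm init                      # installs tiller into the current cluster<br />
$ helm repo update<br />
$ helm search mysql              # find charts matching "mysql"<br />
$ helm install stable/mysql --name my-db<br />
$ helm list<br />
$ helm delete --purge my-db<br />
</pre><br />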
<br />
==Monitoring and logging==<br />
In Kubernetes, we have to collect resource usage data by Pods, Services, nodes, etc., to understand the overall resource consumption and to make decisions about scaling a given application. Two popular Kubernetes monitoring solutions are Heapster and Prometheus.<br />
<br />
[https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/ Heapster] is a cluster-wide aggregator of monitoring and event data, which is natively supported on Kubernetes. <br />
<br />
[https://prometheus.io/ Prometheus], now part of [https://www.cncf.io/ CNCF] (Cloud Native Computing Foundation), can also be used to scrape the resource usage from different Kubernetes components and objects. Using its client libraries, we can also instrument the code of our application.<br />
<br />
Another important aspect of troubleshooting and debugging is logging, in which we collect the logs from the different components of a given system. In Kubernetes, we can collect logs from different cluster components, objects, nodes, etc. A common way to collect logs is to ship them to [https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/ Elasticsearch], using [https://www.fluentd.org/ fluentd] (with custom configuration) as an agent on the nodes. fluentd is an open source data collector, which is also part of CNCF.<br />
<br />
[https://github.com/google/cadvisor cAdvisor] is an open source container resource usage and performance analysis agent. It auto-discovers all containers on a node and collects CPU, memory, file system, and network usage statistics. It provides overall machine usage by analyzing the "root" container on the machine. It exposes a simple UI for local containers on port 4194.<br />
<br />
==Security==<br />
===Configure network policies===<br />
A ''[https://kubernetes.io/docs/concepts/services-networking/network-policies/ Network Policy]'' is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.<br />
<br />
''NetworkPolicy'' resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.<br />
<br />
* Specification of how groups of pods may communicate<br />
* Use labels to select pods and define rules<br />
* Implemented by the network plugin<br />
* Pods are non-isolated by default<br />
* Pods are isolated when a Network Policy selects them<br />
<br />
;Example NetworkPolicy<br />
Create a "default" isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods:<br />
<pre><br />
apiVersion: networking.k8s.io/v1<br />
kind: NetworkPolicy<br />
metadata:<br />
  name: default-deny<br />
spec:<br />
  podSelector: {}<br />
  policyTypes:<br />
  - Ingress<br />
</pre><br />
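<br />
Traffic can then be selectively re-allowed. The following sketch (the <code>app=frontend</code> and <code>app=backend</code> labels are illustrative) permits ingress to backend Pods only from frontend Pods on TCP port 80:<br />
<pre><br />
apiVersion: networking.k8s.io/v1<br />
kind: NetworkPolicy<br />
metadata:<br />
  name: allow-frontend-to-backend<br />
spec:<br />
  podSelector:<br />
    matchLabels:<br />
      app: backend<br />
  ingress:<br />
  - from:<br />
    - podSelector:<br />
        matchLabels:<br />
          app: frontend<br />
    ports:<br />
    - protocol: TCP<br />
      port: 80<br />
</pre><br />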
<br />
===TLS certificates for cluster components===<br />
Get [https://github.com/OpenVPN/easy-rsa easy-rsa].<br />
<br />
$ ./easyrsa init-pki<br />
$ MASTER_IP=10.100.1.2<br />
$ ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass<br />
<br />
$ cat rsa-request.sh<br />
<pre><br />
#!/bin/bash<br />
./easyrsa --subject-alt-name="IP:${MASTER_IP},"\<br />
"DNS:kubernetes,"\<br />
"DNS:kubernetes.default,"\<br />
"DNS:kubernetes.default.svc,"\<br />
"DNS:kubernetes.default.svc.cluster,"\<br />
"DNS:kubernetes.default.svc.cluster.local" \<br />
--days=10000 \<br />
build-server-full server nopass<br />
</pre><br />
<br />
<pre><br />
pki/<br />
├── ca.crt<br />
├── certs_by_serial<br />
│ └── F3A6F7D34BC84330E7375FA20C8441DF.pem<br />
├── index.txt<br />
├── index.txt.attr<br />
├── index.txt.old<br />
├── issued<br />
│ └── server.crt<br />
├── private<br />
│ ├── ca.key<br />
│ └── server.key<br />
├── reqs<br />
│ └── server.req<br />
├── serial<br />
└── serial.old<br />
</pre><br />
<br />
* Figure out the paths of the current TLS certs/keys used by the API server with the following command:<br />
<pre><br />
$ ps aux | grep [a]piserver | sed -n -e 's/^.*\(kube-apiserver \)/\1/p' | tr ' ' '\n'<br />
kube-apiserver<br />
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota<br />
--requestheader-extra-headers-prefix=X-Remote-Extra-<br />
--advertise-address=172.31.118.138<br />
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt<br />
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt<br />
--requestheader-username-headers=X-Remote-User<br />
--service-cluster-ip-range=10.96.0.0/12<br />
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key<br />
--secure-port=6443<br />
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key<br />
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname<br />
--requestheader-group-headers=X-Remote-Group<br />
--requestheader-allowed-names=front-proxy-client<br />
--service-account-key-file=/etc/kubernetes/pki/sa.pub<br />
--insecure-port=0<br />
--enable-bootstrap-token-auth=true<br />
--allow-privileged=true<br />
--client-ca-file=/etc/kubernetes/pki/ca.crt<br />
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt<br />
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key<br />
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt<br />
--authorization-mode=Node,RBAC<br />
--etcd-servers=http://127.0.0.1:2379<br />
</pre><br />
<br />
===Security Contexts===<br />
A ''[https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Security Context]'' defines privilege and access control settings for a Pod or Container. Security context settings include:<br />
<br />
* Discretionary Access Control: Permission to access an object, like a file, is based on user ID (UID) and group ID (GID).<br />
* Security Enhanced Linux (SELinux): Objects are assigned security labels.<br />
* Running as privileged or unprivileged.<br />
* Linux Capabilities: Give a process some privileges, but not all the privileges of the root user.<br />
* AppArmor: Use program profiles to restrict the capabilities of individual programs.<br />
* Seccomp: Filter a process's system calls.<br />
* AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This boolean directly controls whether the <code>no_new_privs</code> flag gets set on the container process. <code>AllowPrivilegeEscalation</code> is always true when the container is: 1) run as privileged; or 2) has <code>CAP_SYS_ADMIN</code>.<br />
<br />
; Example #1<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: security-context-demo<br />
spec:<br />
  securityContext:<br />
    runAsUser: 1000<br />
    fsGroup: 2000<br />
  volumes:<br />
  - name: sec-ctx-vol<br />
    emptyDir: {}<br />
  containers:<br />
  - name: sec-ctx-demo<br />
    image: gcr.io/google-samples/node-hello:1.0<br />
    volumeMounts:<br />
    - name: sec-ctx-vol<br />
      mountPath: /data/demo<br />
    securityContext:<br />
      allowPrivilegeEscalation: false<br />
</pre><br />
<br />
==Taints and tolerations==<br />
[https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature Node affinity] is a property of pods that ''attracts'' them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite – they allow a node to ''repel'' a set of pods.<br />
<br />
[https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ Taints and tolerations] work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks the node such that the node should not accept any pods that do not tolerate the taints. Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.<br />
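<br />
For example, a taint is applied with <code>kubectl taint</code> and tolerated in a Pod spec (a sketch; the <code>dedicated=infra</code> key/value is illustrative):<br />
$ kubectl taint nodes k8s.worker1.local dedicated=infra:NoSchedule<br />
<pre><br />
spec:<br />
  tolerations:<br />
  - key: "dedicated"<br />
    operator: "Equal"<br />
    value: "infra"<br />
    effect: "NoSchedule"<br />
</pre><br />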
<br />
==Remove a node from a cluster==<br />
<br />
* On the k8s Master Node:<br />
k8s-master> $ kubectl drain k8s-worker-02 --ignore-daemonsets<br />
<br />
* On the k8s Worker Node (the one you wish to remove from the cluster):<br />
k8s-worker-02> $ kubeadm reset<br />
[preflight] Running pre-flight checks.<br />
[reset] Stopping the kubelet service.<br />
[reset] Unmounting mounted directories in "/var/lib/kubelet"<br />
[reset] Removing kubernetes-managed containers.<br />
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.<br />
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]<br />
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]<br />
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]<br />
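<br />
* Finally, back on the k8s Master Node, remove the (now drained and reset) node object from the cluster:<br />
k8s-master> $ kubectl delete node k8s-worker-02<br />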
<br />
==Networking==<br />
<br />
; Useful network ranges<br />
* Choose ranges for the Pods and Service CIDR blocks<br />
* Generally, any of the RFC-1918 ranges work well<br />
** 10.0.0.0/8<br />
** 172.16.0.0/12<br />
** 192.168.0.0/16<br />
<br />
Every Pod can communicate directly with every other Pod<br />
<br />
;K8s Node<br />
* A general-purpose compute instance that has at least one network interface<br />
** The host OS will have a real-world IP for accessing the machine<br />
** K8s Pods are given ''virtual'' interfaces connected to an internal network<br />
** Each node has a running network stack<br />
* Kube-proxy runs in the OS to control IPtables for:<br />
** Services<br />
** NodePorts<br />
<br />
;Networking substrate<br />
* Most k8s network stacks allocate subnets for each node<br />
** The network stack is responsible for arbitration of subnets and IPs<br />
** The network stack is also responsible for moving packets around the network<br />
* Pods have a unique, routable IP on the Pod CIDR block<br />
** The CIDR block is ''not'' accessed from outside the k8s cluster<br />
** The magic of IPtables allows the Pods to make outgoing connections<br />
* Ensure that k8s has the correct Pods and Service CIDR blocks<br />
<br />
The Pod network is not seen on the physical network (i.e., it is encapsulated; you will not be able to use <code>tcpdump</code> on it from the physical network)<br />
<br />
;Making the setup easier &mdash; CNI<br />
* Use the Container Network Interface (CNI)<br />
* Relieves k8s from having to have a specific network configuration<br />
* It is activated by supplying <code>--network-plugin=cni, --cni-conf-dir, --cni-bin-dir</code> to kubelet<br />
** Typical configuration directory: <code>/etc/cni/net.d</code> (an example config follows this list)<br />
** Typical bin directory: <code>/opt/cni/bin</code><br />
* Allows for multiple backends to be used: linux-bridge, macvlan, ipvlan, Open vSwitch, network stacks<br />
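<br />
For example, a minimal bridge-backend config in <code>/etc/cni/net.d/10-mynet.conf</code> might look like this (a sketch; the network name, bridge name, and subnet are illustrative):<br />
<pre><br />
{<br />
  "cniVersion": "0.3.1",<br />
  "name": "mynet",<br />
  "type": "bridge",<br />
  "bridge": "cni0",<br />
  "isGateway": true,<br />
  "ipMasq": true,<br />
  "ipam": {<br />
    "type": "host-local",<br />
    "subnet": "10.244.0.0/16"<br />
  }<br />
}<br />
</pre><br />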
<br />
;Kubernetes services<br />
<br />
* Services are crucial for service discovery and distributing traffic to Pods<br />
* Services act as simple internal load balancers with VIPs<br />
** No access controls<br />
** No traffic controls<br />
* IPtables magically route to virtual IPs<br />
* Internally, Services are used as inter-Pod service discovery<br />
** Kube-DNS publishes DNS record (i.e., <code>nginx.default.svc.cluster.local</code>)<br />
* Services can be exposed in three different ways:<br />
*# ClusterIP<br />
*# LoadBalancer<br />
*# NodePort<br />
<br />
; kube-proxy<br />
* Each k8s node in the cluster runs a kube-proxy<br />
* Two modes: userspace and iptables<br />
** iptables is much more performant (userspace should no longer be used)<br />
* kube-proxy has the task of configuring iptables to expose each k8s service<br />
** iptables rules distribute traffic randomly across the endpoints<br />
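<br />
You can inspect the rules kube-proxy maintains on any node (the exact chains and output vary by cluster):<br />
$ sudo iptables-save | grep KUBE-SVC<br />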
<br />
===Network providers===<br />
<br />
In order for a CNI plugin to be considered a "[https://kubernetes.io/docs/concepts/cluster-administration/networking/ Network Provider]", it must provide (at the very least) the following:<br />
# All containers can communicate with all other containers without NAT<br />
# All nodes can communicate with all containers (and ''vice versa'') without NAT<br />
# The IP that a container sees itself as is the same IP that others see it as<br />
<br />
==Linux namespaces==<br />
<br />
Containers are built on Linux namespaces in combination with other kernel features:<br />
* Control groups (cgroups)<br />
* Union file systems<br />
<br />
==Kubernetes inbound node port requirements==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-align="center" bgcolor="#1188ee"<br />
!Protocol<br />
!Direction<br />
!Port range<br />
!Purpose<br />
!Used by<br />
!Notes<br />
|-<br />
|colspan="6" align="center" bgcolor="#eee" | '''Master node(s)'''<br />
|-<br />
| TCP || Inbound || 4149 || Default cAdvisor port used to query container metrics || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 6443<sup>*</sup> || Kubernetes API server || All<br />
|-<br />
| TCP || Inbound || 2379-2380 || etcd server client API || kube-apiserver, etcd<br />
|-<br />
| TCP || Inbound || 10250 || Kubelet API || Self, Control plane<br />
|-<br />
| TCP || Inbound || 10251 || kube-scheduler || Self<br />
|-<br />
| TCP || Inbound || 10252 || kube-controller-manager || Self<br />
|-<br />
| TCP || Inbound || 10255 || Read-only Kubelet API || ''(optional)'' || Security risk<br />
|-<br />
|colspan="6" align="center" bgcolor="#eee" | '''Worker node(s)'''<br />
|-<br />
| TCP || Inbound || 4149 || Default cAdvisor port used to query container metrics || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 10250 || Kubelet API || Self, Control plane<br />
|-<br />
| TCP || Inbound || 10255 || Read-only Kubelet API || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 30000-32767 || NodePort Services<sup>**</sup> || All<br />
|}<br />
</div><br />
<br clear="all"/><br />
<sup>**</sup> Default port range for NodePort Services.<br />
<br />
Any port numbers marked with <sup>*</sup> are overridable, so you will need to ensure any custom ports you provide are also open.<br />
<br />
Although etcd ports are included in master nodes, you can also host your own etcd cluster externally or on custom ports.<br />
<br />
The pod network plugin you use (see below) may also require certain ports to be open. Since this differs with each pod network plugin, please see the documentation for the plugins about what port(s) those need.<br />
<br />
==API versions==<br />
<br />
Below is a table showing which value to use for the <code>apiVersion</code> key for a given k8s primitive (note: all values are for k8s 1.8.0, unless otherwise specified):<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-align="center" bgcolor="#1188ee"<br />
!Primitive<br />
!apiVersion<br />
|-<br />
| Pod || v1<br />
|-<br />
| Deployment || apps/v1beta2<br />
|-<br />
| Service || v1<br />
|-<br />
| Job || batch/v1<br />
|-<br />
| Ingress || extensions/v1beta1<br />
|-<br />
| CronJob || batch/v1beta1<br />
|-<br />
| ConfigMap || v1<br />
|-<br />
| DaemonSet || apps/v1<br />
|-<br />
| ReplicaSet || apps/v1beta2<br />
|-<br />
| NetworkPolicy || networking.k8s.io/v1<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
You can get a list of all of the API versions supported by your k8s install with:<br />
$ kubectl api-versions<br />
<br />
==Troubleshooting==<br />
<br />
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns<br />
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}<br />
<br />
* If your container has previously crashed, you can access the previous container’s crash log with:<br />
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}<br />
<br />
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}<br />
<br />
==Miscellaneous commands==<br />
<br />
* Simple workflow (not a best practice; use manifest files {YAML} instead):<br />
$ kubectl run nginx --image=nginx:1.10.0<br />
$ kubectl expose deployment nginx --port 80 --type LoadBalancer<br />
$ kubectl get services # <- wait until public IP is assigned<br />
$ kubectl scale deployment nginx --replicas 3<br />
<br />
* Create an Nginx deployment with three replicas without using YAML:<br />
$ kubectl run nginx --image=nginx --replicas=3<br />
<br />
* Take a node out of service for maintenance:<br />
$ kubectl cordon k8s.worker1.local<br />
$ kubectl drain k8s.worker1.local --ignore-daemonsets<br />
<br />
* Return a given node to a service after cordoning and "draining" it (e.g., after a maintenance):<br />
$ kubectl uncordon k8s.worker1.local<br />
<br />
* Get a list of nodes in a format useful for scripting:<br />
$ kubectl get nodes -o jsonpath='{.items[*].metadata.name}'<br />
#~OR~<br />
$ kubectl get nodes -o go-template --template '<nowiki>{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get nodes -o json | jq -crM '.items[].metadata.name'<br />
#~OR~ (if using an older version of `jq`)<br />
$ kubectl get nodes -o json | jq '.items[].metadata.name' | tr -d '"'<br />
<br />
* Label a list of nodes:<br />
<pre><br />
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do<br />
kubectl label nodes ${node} instancetype=ondemand;<br />
kubectl label nodes ${node} "example.io/node-lifecycle"=od;<br />
done<br />
</pre><br />
<br />
* Delete a bunch of Pods in "Evicted" state:<br />
$ kubectl get pod -n develop | awk '/Evicted/{print $1}' | xargs kubectl delete pod -n develop<br />
#~OR~<br />
$ kubectl get po -a --all-namespaces -o json | \<br />
jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | <br />
"kubectl delete po \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c<br />
<br />
* Get a random node:<br />
$ NODES=($(kubectl get nodes -o json | jq -crM '.items[].metadata.name'))<br />
$ NUMNODES=${#NODES[@]}<br />
$ echo ${NODES[$[ $RANDOM % $NUMNODES ]]}<br />
<br />
* Get all recent events sorted by their timestamps:<br />
$ kubectl get events --sort-by='.metadata.creationTimestamp'<br />
<br />
* Get a list of all Pods in the default namespace sorted by Node:<br />
$ kubectl get po -o wide --sort-by=.spec.nodeName<br />
<br />
* Get the cluster IP for a service named "foo":<br />
$ kubectl get svc/foo -o jsonpath='{.spec.clusterIP}'<br />
<br />
* List all Services in a cluster and their node ports:<br />
$ kubectl get --all-namespaces svc -o json |\<br />
jq -r '.items[] | [.metadata.name,([.spec.ports[].nodePort | tostring ] | join("|"))] | @csv'<br />
<br />
* Print just the Pod names of those Pods with the label <code>app=nginx</code>:<br />
$ kubectl get --no-headers=true pods -l app=nginx -o custom-columns=:metadata.name<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o go-template --template '<nowiki>{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get --no-headers=true pods -l app=nginx -o name | awk -F "/" '{print $2}'<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o jsonpath='{.items[*].metadata.name}'<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o json | jq -crM '.items [] | .metadata.name'<br />
<br />
* Get a list of all container images used by the Pods in your default namespace:<br />
$ kubectl get pods -o go-template --template='<nowiki>{{range .items}}{{range .spec.containers}}{{.image}}{{"\n"}}{{end}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get pods -o go-template="<nowiki>{{range .items}}{{range .spec.containers}}{{.image}}|{{end}}{{end}}</nowiki>" | tr '|' '\n'<br />
<br />
* Get a list of Pods sorted by Node name:<br />
$ kubectl get po -o json | jq -r '.items | sort_by(.spec.nodeName)[] | [.spec.nodeName,.metadata.name] | @tsv'<br />
<br />
* List all Services in a cluster with their endpoints:<br />
$ kubectl get --all-namespaces svc -o json | \<br />
jq -r '.items[] | [.metadata.name,([.spec.ports[].nodePort | tostring ] | join("|"))] | @csv'<br />
<br />
* Get status transitions of each Pod in the default namespace:<br />
$ export tpl='{range .items[*]}{"\n"}{@.metadata.name}{range @.status.conditions[*]}{"\t"}{@.type}={@.status}{end}{end}'<br />
$ kubectl get po -o jsonpath="${tpl}" && echo<br />
<br />
cheddar-cheese-d6d6587c7-4bgcz Initialized=True Ready=True PodScheduled=True<br />
echoserver-55f97d5bff-pdv65 Initialized=True Ready=True PodScheduled=True<br />
stilton-cheese-6d64cbc79-g7h4w Initialized=True Ready=True PodScheduled=True<br />
<br />
* Get a list of all Pods in status "Failed":<br />
$ kubectl get pods -o go-template='<nowiki>{{range .items}}{{if eq .status.phase "Failed"}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}</nowiki>'<br />
<br />
* Get all users in all namespaces:<br />
$ kubectl get rolebindings --all-namespaces -o go-template \<br />
--template='<nowiki>{{range .items}}{{println}}{{.metadata.namespace}}={{range .subjects}}{{if eq .kind "User"}}{{.name}} {{end}}{{end}}{{end}}</nowiki>'<br />
<br />
* Get the memory limit assigned to a container in a given Pod:<br />
<pre><br />
$ kubectl get pod example-pod-name -n default \<br />
-o jsonpath="{.spec.containers[*].resources.limits}" <br />
</pre><br />
<br />
* Get a Bash prompt of your current context and namespace:<br />
<pre><br />
NORMAL="\[\033[00m\]"<br />
BLUE="\[\033[01;34m\]"<br />
RED="\[\e[1;31m\]"<br />
YELLOW="\[\e[1;33m\]"<br />
GREEN="\[\e[1;32m\]"<br />
PS1_WORKDIR="\w"<br />
PS1_HOSTNAME="\h"<br />
PS1_USER="\u"<br />
<br />
__kube_ps1()<br />
{<br />
  CONTEXT=$(kubectl config current-context)<br />
  NAMESPACE=$(kubectl config view -o jsonpath="{.contexts[?(@.name==\"${CONTEXT}\")].context.namespace}")<br />
  if [ -z "$NAMESPACE" ]; then<br />
    NAMESPACE="default"<br />
  fi<br />
  if [ -n "$CONTEXT" ]; then<br />
    case "$CONTEXT" in<br />
      *prod*)<br />
        echo "${RED}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
        ;;<br />
      *test*)<br />
        echo "${YELLOW}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
        ;;<br />
      *)<br />
        echo "${GREEN}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
        ;;<br />
    esac<br />
  fi<br />
}<br />
<br />
export PROMPT_COMMAND='PS1="${GREEN}${PS1_USER}@${PS1_HOSTNAME}${NORMAL}:$(__kube_ps1)${BLUE}${PS1_WORKDIR}${NORMAL}\$ "'<br />
</pre><br />
<br />
===Client configuration===<br />
<br />
* Setup autocomplete in bash; bash-completion package should be installed first:<br />
$ source <(kubectl completion bash)<br />
<br />
* View Kubernetes config:<br />
$ kubectl config view<br />
<br />
* View specific config items by JSON path:<br />
$ kubectl config view -o jsonpath='{.users[?(@.name == "k8s")].user.password}'<br />
<br />
* Set credentials for foo.kubernetes.com:<br />
$ kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword<br />
<br />
===Viewing / finding resources===<br />
<br />
* List all services in the namespace:<br />
$ kubectl get services<br />
<br />
* List all pods in all namespaces in wide format:<br />
$ kubectl get pods -o wide --all-namespaces<br />
<br />
* List all pods in JSON (or YAML) format:<br />
$ kubectl get pods -o json<br />
<br />
* Describe resource details (node, pod, svc):<br />
$ kubectl describe nodes my-node<br />
<br />
* List services sorted by name:<br />
$ kubectl get services --sort-by=.metadata.name<br />
<br />
* List pods sorted by restart count:<br />
$ kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'<br />
<br />
* Rolling update pods for frontend-v1:<br />
$ kubectl rolling-update frontend-v1 -f frontend-v2.json<br />
<br />
* Scale a ReplicaSet named "foo" to 3:<br />
$ kubectl scale --replicas=3 rs/foo<br />
<br />
* Scale a resource specified in "foo.yaml" to 3:<br />
$ kubectl scale --replicas=3 -f foo.yaml<br />
<br />
* Execute a command in every pod / replica:<br />
$ for i in 0 1; do kubectl exec foo-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done<br />
<br />
* Get a list of ''all'' container IDs running in ''all'' Pods in ''all'' namespaces for a given Kubernetes cluster:<br />
<pre><br />
$ kubectl get pods --all-namespaces \<br />
-o jsonpath='{range .items[*]}{"pod: "}{.metadata.name}{"\n"}{range .status.containerStatuses[*]}{"\tname: "}{.containerID}{"\n\timage: "}{.image}{"\n"}{end}'<br />
<br />
# Example output:<br />
pod: cert-manager-848f547974-8m2k6<br />
name: containerd://358415173310a528a36ca2c19cdc3319f8fd96634c09957977767333b104d387<br />
image: quay.io/jetstack/cert-manager-controller:v1.5.3<br />
</pre><br />
<br />
===Manage resources===<br />
<br />
* Get documentation for pod or service:<br />
$ kubectl explain pods,svc<br />
<br />
* Create resource(s) like pods, services or DaemonSets:<br />
$ kubectl create -f ./my-manifest.yaml<br />
<br />
* Apply a configuration to a resource:<br />
$ kubectl apply -f ./my-manifest.yaml<br />
<br />
* Start a single instance of Nginx:<br />
$ kubectl run nginx --image=nginx<br />
<br />
* Create a secret with several keys:<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
  name: mysecret<br />
type: Opaque<br />
data:<br />
  password: $(echo -n "s33msi4" | base64)<br />
  username: $(echo -n "jane" | base64)<br />
EOF<br />
</pre><br />
<br />
* Delete a resource:<br />
$ kubectl delete -f ./my-manifest.yaml<br />
<br />
===Monitoring and logging===<br />
<br />
* Deploy Heapster from Github repository:<br />
$ kubectl create -f deploy/kube-config/standalone/<br />
<br />
* Show metrics for nodes:<br />
$ kubectl top node<br />
<br />
* Show metrics for pods:<br />
$ kubectl top pod<br />
<br />
* Show metrics for a given pod and its containers:<br />
$ kubectl top pod pod_name --containers<br />
<br />
* Dump pod logs (STDOUT):<br />
$ kubectl logs pod_name<br />
<br />
* Stream pod container logs (STDOUT, multi-container case):<br />
$ kubectl logs -f pod_name -c my-container<br />
<br />
<!-- TODO: https://gist.github.com/so0k/42313dbb3b547a0f51a547bb968696ba --><br />
<br />
===Run tcpdump on containers running in Pods===<br />
<br />
* Find which node/host/IP the Pod in question is running on and also get the container ID:<br />
<pre><br />
$ kubectl describe pod busybox | grep -E "^Node:|Container ID: "<br />
Node: worker2/10.39.32.122<br />
Container ID: docker://a42cd31e62a905739b52d36b30eca5521fd250ac54280b43423027426b031a03<br />
<br />
#~OR~<br />
<br />
$ containerID=$(kubectl get po busybox -o jsonpath='{.status.containerStatuses[*].containerID}' | sed -e 's|docker://||g')<br />
$ hostIP=$(kubectl get po busybox -o jsonpath='{.status.hostIP}')<br />
</pre><br />
<br />
Log into the node/host running the Pod in question and then perform the following steps.<br />
<br />
* Get the virtual interface ID (note it will depend on which Container Network Interface you are using {e.g., veth, cali, etc.}):<br />
<pre><br />
$ docker exec a42cd31e62a905739b52d36b30eca5521fd250ac54280b43423027426b031a03 /bin/sh -c 'cat /sys/class/net/eth0/iflink'<br />
12<br />
<br />
# List all non-virtual interfaces:<br />
$ for iface in $(find /sys/class/net/ -type l ! -lname '*/devices/virtual/net/*' -printf '%f '); do echo "$iface is not virtual"; done<br />
ens192 is not virtual<br />
<br />
# Check if we are using veth or cali or something else:<br />
$ ls -1 /sys/class/net/ | awk '!/docker|lo|ens/{print substr($0,0,4);exit}'<br />
cali<br />
<br />
$ for i in /sys/class/net/veth*/ifindex; do grep -l 12 $i; done<br />
#~OR~<br />
$ for i in /sys/class/net/cali*/ifindex; do grep -l 12 $i; done<br />
/sys/class/net/cali12d4a061371/ifindex<br />
#~OR~<br />
$ echo $(find /sys/class/net/ -type l -lname '*/devices/virtual/net/*' -exec grep -l 12 {}/ifindex \;) | awk -F'/' '{print $5}'<br />
cali12d4a061371<br />
#~OR~<br />
$ ip link | grep ^12<br />
12: cali12d4a061371@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default<br />
#~OR~<br />
$ ip link | awk '/^12/{print $2}' | awk -F'@' '{print $1}'<br />
cali12d4a061371<br />
</pre><br />
<br />
* Now run [[tcpdump]] on this virtual interface (note: make sure you are running tcpdump on the ''same'' host as the Pod is running on):<br />
$ sudo tcpdump -i cali12d4a061371<br />
<br />
; Self-signed certificates<br />
<br />
If you are using the latest version of <code>kubectl</code> and are running it against a k8s cluster built with a self-signed cert, you can get around any "x509" errors with:<br />
$ export GODEBUG=x509ignoreCN=0<br />
<br />
===API resources===<br />
<br />
* Get a list of all the resource types and their latest supported version:<br />
<pre><br />
$ time for kind in $(kubectl api-resources | tail +2 | awk '{print $1}'); do<br />
kubectl explain ${kind};<br />
done | grep -E "^KIND:|^VERSION:"<br />
<br />
KIND: Binding<br />
VERSION: v1<br />
KIND: ComponentStatus<br />
VERSION: v1<br />
KIND: ConfigMap<br />
VERSION: v1<br />
...<br />
<br />
real 1m20.014s<br />
user 0m52.732s<br />
sys 0m17.751s<br />
</pre><br />
<br />
* Note: if you just want a version for a single/given kind:<br />
<pre><br />
$ kubectl explain deploy | head -2<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
</pre><br />
<br />
===kubectl-neat===<br />
<br />
: See: https://github.com/itaysk/kubectl-neat<br />
: See: [[jq]]<br />
<br />
* To easily copy a certificate secret from one namespace to another namespace run:<br />
<pre><br />
$ SOURCE_NAMESPACE=<update-me><br />
$ DESTINATION_NAMESPACE=<update-me><br />
$ kubectl -n ${SOURCE_NAMESPACE} get secret kafka-client-credentials -o json |\<br />
kubectl neat |\<br />
jq 'del(.metadata["namespace"])' |\<br />
kubectl apply -n ${DESTINATION_NAMESPACE} -f -<br />
</pre><br />
<br />
===Get CPU/memory for each node===<br />
<br />
<pre><br />
for node in $(kubectl get nodes -o=jsonpath='{.items[*].metadata.name}'); do<br />
echo "NODE: ${node}"; kubectl describe node ${node} | grep -E '^ cpu |^ memory ';<br />
done<br />
</pre><br />
<br />
===Get vCPU capacity===<br />
<br />
<pre><br />
$ kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"} \<br />
{.status.capacity.cpu}{\"\n\"}{end}"<br />
</pre><br />
<br />
==Miscellaneous examples==<br />
<br />
* Create a Namespace:<br />
<pre><br />
kind: Namespace<br />
apiVersion: v1<br />
metadata:<br />
  name: my-namespace<br />
</pre><br />
<br />
; Testing the load balancing capabilities of a Service<br />
<br />
* Create a Deployment with two replicas of Nginx (i.e., 2 x Pods with identical containers, configuration, etc.):<br />
<pre><br />
$ cat << EOF >nginx-deploy.yml<br />
kind: Deployment<br />
apiVersion: apps/v1<br />
metadata:<br />
  name: nginx-deploy<br />
spec:<br />
  replicas: 2<br />
  strategy:<br />
    rollingUpdate:<br />
      maxSurge: 1<br />
      maxUnavailable: 0<br />
    type: RollingUpdate<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f nginx-deploy.yml<br />
$ kubectl get deploy<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deploy 2 2 2 2 1h<br />
$ kubectl get po<br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deploy-8d68fb6cc-bspt8 1/1 Running 1 1h<br />
nginx-deploy-8d68fb6cc-qdvhg 1/1 Running 1 1h<br />
<br />
* Create a Service:<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: nginx-svc<br />
spec:<br />
  ports:<br />
  - port: 8080<br />
    targetPort: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: nginx<br />
EOF<br />
<br />
$ kubectl get svc/nginx-svc<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
nginx-svc ClusterIP 10.101.133.100 <none> 8080/TCP 1h<br />
</pre><br />
<br />
* Overwrite the default index.html file (note: This is ''not'' persistent. The original default index.html file will be restored if the Pod fails and the Deployment brings up a new Pod and/or if you modify your Deployment {e.g., upgrade Nginx}. This is just for demonstration purposes):<br />
$ kubectl exec -it nginx-deploy-8d68fb6cc-bspt8 -- sh -c 'echo "pod-01" > /usr/share/nginx/html/index.html'<br />
$ kubectl exec -it nginx-deploy-8d68fb6cc-qdvhg -- sh -c 'echo "pod-02" > /usr/share/nginx/html/index.html'<br />
<br />
* Get the HTTP status code and server value from the header of a request to the Service endpoint:<br />
$ curl -Is 10.101.133.100:8080 | grep -E '^HTTP|Server'<br />
HTTP/1.1 200 OK<br />
Server: nginx/1.7.9 # <- This is the version of Nginx we defined in the Deployment above<br />
<br />
* Perform a GET request on the Service endpoint (ClusterIP+Port):<br />
<pre><br />
$ for i in $(seq 1 10); do curl -s 10.101.133.100:8080; done<br />
pod-02<br />
pod-01<br />
pod-02<br />
pod-02<br />
pod-02<br />
pod-01<br />
pod-02<br />
pod-02<br />
pod-02<br />
pod-02<br />
</pre><br />
Sometimes <code>pod-01</code> responded; sometimes <code>pod-02</code> responded.<br />
<br />
* Perform a GET on the Service endpoint 10,000 times and sum up which Pod responded for each request:<br />
<pre><br />
$ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c<br />
5018 pod-01 # <- number of times pod-01 responded to the request<br />
4982 pod-02 # <- number of times pod-02 responded to the request<br />
<br />
real 1m0.639s<br />
user 0m29.808s<br />
sys 0m11.692s<br />
</pre><br />
<br />
$ awk 'BEGIN{print 5018/(5018+4982);}'<br />
0.5018<br />
$ awk 'BEGIN{print 4982/(5018+4982);}'<br />
0.4982<br />
<br />
So, our Service is "load balancing" our two Nginx Pods in a roughly 50/50 fashion.<br />
<br />
In order to double-check that the Service is randomly selecting a Pod to serve the GET request, let's scale our Deployment from 2 to 3 replicas:<br />
$ kubectl scale deploy/nginx-deploy --replicas=3<br />
<br />
<pre><br />
$ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c<br />
3392 pod-01<br />
3335 pod-02<br />
3273 pod-03<br />
<br />
real 0m59.537s<br />
user 0m25.932s<br />
sys 0m9.656s<br />
</pre><br />
$ awk 'BEGIN{print 3392/(3392+3335+3273);}'<br />
0.3392<br />
$ awk 'BEGIN{print 3335/(3392+3335+3273);}'<br />
0.3335<br />
$ awk 'BEGIN{print 3273/(3392+3335+3273);}'<br />
0.3273<br />
<br />
Sure enough. Each of the 3 Pods is serving the GET request roughly 33% of the time.<br />
<br />
==Example YAML files==<br />
<br />
* Basic Pod using busybox:<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: busybox<br />
  namespace: default<br />
spec:<br />
  containers:<br />
  - name: busybox<br />
    image: busybox<br />
    command:<br />
    - sleep<br />
    - "3600"<br />
    imagePullPolicy: IfNotPresent<br />
  restartPolicy: Always<br />
</pre><br />
<br />
* Basic Pod using busybox, which also prints out environment variables (including the ones defined in the YAML):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: env-dump<br />
spec:<br />
  containers:<br />
  - name: busybox<br />
    image: busybox<br />
    command:<br />
    - env<br />
    env:<br />
    - name: USERNAME<br />
      value: "Christoph"<br />
    - name: PASSWORD<br />
      value: "mypassword"<br />
</pre><br />
$ kubectl logs env-dump<br />
...<br />
PASSWORD=mypassword<br />
USERNAME=Christoph<br />
...<br />
<br />
* Basic Pod using alpine:<br />
<pre><br />
kind: Pod<br />
apiVersion: v1<br />
metadata:<br />
  name: alpine<br />
  namespace: default<br />
spec:<br />
  containers:<br />
  - name: alpine<br />
    image: alpine<br />
    command:<br />
    - /bin/sh<br />
    - "-c"<br />
    - "sleep 60m"<br />
    imagePullPolicy: IfNotPresent<br />
  restartPolicy: Always<br />
</pre><br />
<br />
* Basic Pod running Nginx:<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx-pod<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx<br />
  restartPolicy: Always<br />
</pre><br />
<br />
* Create a Job that calculates pi up to 2000 decimal places:<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
  name: pi<br />
spec:<br />
  template:<br />
    spec:<br />
      containers:<br />
      - name: pi<br />
        image: perl<br />
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]<br />
      restartPolicy: Never<br />
  backoffLimit: 4<br />
</pre><br />
<br />
* Create a Deployment with two replicas of Nginx running:<br />
<pre><br />
apiVersion: apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment<br />
spec:<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  replicas: 2<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.9.1<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />
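<br />
This Deployment can then be put behind a ClusterIP Service, similar to the one load-tested at the top of this article; a minimal sketch:<br />
$ kubectl expose deployment nginx-deployment --port=8080 --target-port=80<br />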
<br />
* Create a basic Persistent Volume, which uses NFS:<br />
<pre><br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
  name: mypv<br />
spec:<br />
  capacity:<br />
    storage: 1Gi<br />
  volumeMode: Filesystem<br />
  accessModes:<br />
    - ReadWriteMany<br />
  persistentVolumeReclaimPolicy: Recycle<br />
  nfs:<br />
    path: /var/nfs/general<br />
    server: 172.31.119.58<br />
    readOnly: false<br />
</pre><br />
<br />
* Create a Persistent Volume Claim against the above PV:<br />
<pre><br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
  name: nfs-pvc<br />
spec:<br />
  accessModes:<br />
    - ReadWriteMany<br />
  resources:<br />
    requests:<br />
      storage: 1Gi<br />
</pre><br />
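<br />
A Pod consumes the claim by referencing it by name under <code>volumes</code>. A minimal sketch (the Pod and volume names here are illustrative):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nfs-client<br />
spec:<br />
  containers:<br />
  - name: app<br />
    image: busybox<br />
    command: ["sleep", "3600"]<br />
    volumeMounts:<br />
    - name: nfs-vol<br />
      mountPath: /mnt/nfs<br />
  volumes:<br />
  - name: nfs-vol<br />
    persistentVolumeClaim:<br />
      claimName: nfs-pvc<br />
</pre><br />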
<br />
* Create a Pod using a custom scheduler (i.e., not the default one):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: my-custom-scheduler<br />
  annotations:<br />
    scheduledBy: custom-scheduler<br />
spec:<br />
  schedulerName: custom-scheduler<br />
  containers:<br />
  - name: pod-container<br />
    image: k8s.gcr.io/pause:2.0<br />
</pre><br />
<br />
==Install k8s cluster manually in the Cloud==<br />
<br />
''Note: For this example, I will be using AWS and I will assume you already have 3 x EC2 instances running CentOS 7 in your AWS account. I will install Kubernetes 1.10.x.''<br />
<br />
* Disable SELinux enforcement and firewalld, which are not (yet) supported by Kubernetes:<br />
$ sudo setenforce 0 # NOTE: Not persistent!<br />
#~OR~ Make persistent:<br />
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config<br />
<br />
$ sudo systemctl stop firewalld<br />
$ sudo systemctl mask firewalld<br />
$ sudo yum install -y iptables-services<br />
<br />
* Disable swap:<br />
$ sudo swapoff -a # NOTE: Not persistent!<br />
#~OR~ Make persistent:<br />
$ sudo vi /etc/fstab # comment out swap line<br />
$ sudo mount -a<br />
<br />
* Make sure routed traffic does not bypass iptables:<br />
$ cat << EOF | sudo tee /etc/sysctl.d/k8s.conf<br />
net.bridge.bridge-nf-call-ip6tables = 1<br />
net.bridge.bridge-nf-call-iptables = 1<br />
EOF<br />
$ sudo sysctl --system<br />
<br />
* Install <code>kubelet</code>, <code>kubeadm</code>, and <code>kubectl</code> on '''''all''''' nodes in your cluster (both Master and Worker nodes):<br />
<pre><br />
$ cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo<br />
[kubernetes]<br />
name=Kubernetes<br />
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch<br />
enabled=1<br />
gpgcheck=1<br />
repo_gpgcheck=1<br />
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg<br />
EOF<br />
</pre><br />
<br />
$ sudo yum install -y kubelet kubeadm kubectl<br />
$ sudo systemctl enable kubelet && sudo systemctl start kubelet<br />
<br />
* Configure cgroup driver used by kubelet on '''''all''''' nodes (both Master and Worker nodes):<br />
<br />
Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. Verify that your Docker cgroup driver matches the kubelet config:<br />
<br />
$ docker info | grep -i cgroup<br />
$ grep -i cgroup /etc/systemd/system/kubelet.service.d/10-kubeadm.conf<br />
<br />
If the Docker cgroup driver and the kubelet config do not match, change the kubelet config to match the Docker cgroup driver. The flag you need to change is <code>--cgroup-driver</code>. If it is already set, you can update like so:<br />
<br />
$ sudo sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf<br />
<br />
Otherwise, you will need to open the systemd file and add the flag to an existing environment line.<br />
<br />
Then restart kubelet:<br />
<br />
$ sudo systemctl daemon-reload<br />
$ sudo systemctl restart kubelet<br />
<br />
* Run <code>kubeadm</code> on Master node:<br />
<br />
K8s requires a Pod network to function. We are going to use Flannel, so we need to pass Flannel's expected Pod network CIDR to <code>kubeadm init</code> so k8s knows how to configure itself:<br />
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16<br />
<br />
Note: This command might take a fair amount of time to complete.<br />
<br />
Once it has completed, make note of the "<code>join</code>" command output by <code>kubeadm init</code> that looks something like the following ('''DO NOT RUN THE FOLLOWING COMMAND YET!'''):<br />
# kubeadm join --token --discovery-token-ca-cert-hash sha256:<br />
<br />
You will run that command on the other non-master nodes (aka the "Worker Nodes") to allow them to join the cluster. However, '''do not''' run that command on the worker nodes until you have completed all of the following steps.<br />
<br />
* Create a directory:<br />
$ mkdir -p $HOME/.kube<br />
<br />
* Copy the configuration files to a location usable by the local user:<br />
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config <br />
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config<br />
<br />
* In order for your pods to communicate with one another, you will need to install pod networking. We are going to use Flannel for our Container Network Interface (CNI) because it is easy to install and reliable. <br />
$ kubectl apply -f <nowiki>https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</nowiki><br />
$ kubectl apply -f <nowiki>https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml</nowiki><br />
<br />
* Make sure everything is coming up properly:<br />
$ kubectl get pods --all-namespaces --watch<br />
Once the <code>kube-dns-xxxx</code> containers are up (i.e., in Status "Running"), your cluster is ready to accept worker nodes.<br />
<br />
* On each of the Worker nodes, run the <code>sudo kubeadm join ...</code> command that <code>kubeadm init</code> created for you (see above).<br />
<br />
* On the Master Node, run the following command:<br />
$ kubectl get nodes --watch<br />
Once the Status of the Worker Nodes returns "Ready", your k8s cluster is ready to use.<br />
<br />
* Example output of successful Kubernetes cluster:<br />
<pre><br />
$ kubectl get nodes<br />
NAME     STATUS    ROLES     AGE       VERSION<br />
k8s-01   Ready     master    13m       v1.10.1<br />
k8s-02   Ready     <none>    12m       v1.10.1<br />
k8s-03   Ready     <none>    12m       v1.10.1<br />
</pre><br />
<br />
That's it! You are now ready to start deploying Pods, Deployments, Services, etc. in your Kubernetes cluster!<br />
<br />
==Bash completion==<br />
''Note: The following only works on newer versions of kubectl. I have tested that it works with version 1.9.1.''<br />
<br />
Add the following line to your <code>~/.bashrc</code> file:<br />
source <(kubectl completion bash)<br />
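<br />
To make completion persistent across shells, and to also get completion for a short <code>k</code> alias, something like the following works (the alias name is a personal choice):<br />
$ echo 'source <(kubectl completion bash)' >> ~/.bashrc<br />
$ echo 'alias k=kubectl' >> ~/.bashrc<br />
$ echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc<br />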
<br />
==Kubectl plugins==<br />
<br />
SEE: [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ Extend kubectl with plugins] for details.<br />
<br />
: FEATURE STATE: alpha in Kubernetes v1.11; stable since Kubernetes v1.15<br />
<br />
This section shows you how to install and write extensions for <code>kubectl</code>. Usually called "plugins" or "binary extensions", this feature allows you to extend the default set of commands available in <code>kubectl</code> by adding new sub-commands to perform new tasks and extend the set of features available in the main distribution of <code>kubectl</code>.<br />
<br />
Get code [https://github.com/kubernetes/kubernetes/tree/master/pkg/kubectl/plugins/examples from here].<br />
<br />
<pre><br />
.kube/<br />
└── plugins<br />
    └── aging<br />
        ├── aging.rb<br />
        └── plugin.yaml<br />
</pre><br />
<br />
$ chmod 0700 .kube/plugins/aging/aging.rb<br />
<br />
* See options:<br />
<pre><br />
$ kubectl plugin aging --help<br />
Aging shows pods from the current namespace by age.<br />
<br />
Usage:<br />
  kubectl plugin aging [flags] [options]<br />
</pre><br />
<br />
* Usage:<br />
<pre><br />
$ kubectl plugin aging<br />
The Magnificent Aging Plugin.<br />
<br />
nginx-deployment-67594d6bf6-5t8m9: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
<br />
nginx-deployment-67594d6bf6-6kw9j: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
<br />
nginx-deployment-67594d6bf6-d8dwt: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
</pre><br />
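<br />
With the stable plugin mechanism (v1.12 and later), a plugin is simply an executable anywhere on your <code>PATH</code> whose name begins with <code>kubectl-</code>; no <code>plugin.yaml</code> is needed. A minimal sketch (the plugin name is illustrative):<br />
<pre><br />
$ cat << 'EOF' | sudo tee /usr/local/bin/kubectl-hello<br />
#!/bin/bash<br />
# Invoked as "kubectl hello"<br />
echo "hello from a kubectl plugin"<br />
EOF<br />
$ sudo chmod +x /usr/local/bin/kubectl-hello<br />
$ kubectl hello<br />
hello from a kubectl plugin<br />
</pre><br />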
<br />
==Local Kubernetes==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="6" bgcolor="#EFEFEF" | '''Local Kubernetes Comparisons'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Feature<br />
!kind<br />
!k3d<br />
!minikube<br />
!Docker Desktop<br />
!Rancher Desktop<br />
|- <br />
| Free || yes || yes || yes || personal &amp; small business only* || yes<br />
|--bgcolor="#eeeeee"<br />
| Install || easy || easy || easy || easy || medium (you may encounter odd scenarios)<br />
|-<br />
| Ease of Use || medium || medium || medium || easy || easy<br />
|--bgcolor="#eeeeee"<br />
| Stability || stable || stable || stable || stable || stable<br />
|-<br />
| Cross-platform || yes || yes || yes || yes || yes<br />
|--bgcolor="#eeeeee"<br />
| CI Usage || yes || yes || yes || no || no<br />
|-<br />
| Multiple clusters || yes || yes || yes || no || no<br />
|--bgcolor="#eeeeee"<br />
| Podman support || yes || yes || yes || no || no<br />
|-<br />
| Host volumes mount support || yes || yes || yes (with some performance limitations) || yes || yes (only pre-defined paths)<br />
|--bgcolor="#eeeeee"<br />
| Kubernetes service port-forwarding/mapping || yes || yes || yes || yes || yes<br />
|-<br />
| Pull-through Docker mirror/proxy || yes || yes || no || yes (can reference locally available images) || yes (can reference locally available images)<br />
|--bgcolor="#eeeeee"<br />
| Custom CNI || yes (ex: calico) || yes (ex: flannel) || yes (ex: calico) || no || no<br />
|-<br />
| Features Gates || yes || yes || yes || yes (but not natively; requires hacky setup) || yes (but not natively; requires hacky setup)<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
[https://bmiguel-teixeira.medium.com/local-kubernetes-the-one-above-all-3aedbeb5f3f6 Source]<br />
<br />
==See also==<br />
* [[Kubernetes/the-hard-way|Kubernetes the Hard Way]]<br />
* [[Kubernetes/GKE|Google Kubernetes Engine]] (GKE)<br />
* [[Kubernetes/AWS|Kubernetes on AWS]] (EKS)<br />
* [[Kubeless]]<br />
* [[Helm]]<br />
<br />
==External links==<br />
* [http://kubernetes.io/ Official website]<br />
* [https://github.com/kubernetes/kubernetes Kubernetes code] &mdash; via GitHub<br />
===Playgrounds===<br />
* [https://www.katacoda.com/courses/kubernetes/playground Kubernetes Playground]<br />
* [https://labs.play-with-k8s.com Play with k8s]<br />
===Tools===<br />
* [https://github.com/kubernetes/minikube minikube] &mdash; Run Kubernetes locally<br />
* [https://kind.sigs.k8s.io/ kind] &mdash; '''K'''ubernetes '''IN''' '''D'''ocker (local clusters for testing Kubernetes)<br />
* [https://github.com/kubernetes/kops kops] &mdash; Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management<br />
* [https://kubernetes-incubator.github.io/kube-aws kube-aws] &mdash; a command-line tool to create/update/destroy Kubernetes clusters on AWS<br />
* [https://github.com/kubernetes-incubator/kubespray kubespray] &mdash; Deploy a production ready kubernetes cluster<br />
* [https://rook.io/ Rook.io] &mdash; File, Block, and Object Storage Services for your Cloud-Native Environments<br />
===Resources===<br />
* [https://kubernetes.io/docs/getting-started-guides/scratch/ Creating a Custom Cluster from Scratch]<br />
* [https://github.com/kelseyhightower/kubernetes-the-hard-way Kubernetes The Hard Way]<br />
* [http://k8sport.org/ K8sPort]<br />
* [https://k8s.af/ Kubernetes Failure Stories]<br />
<br />
===Training===<br />
* [https://kubernetes.io/training/ Official Kubernetes Training Website]<br />
** Kubernetes and Cloud Native Associate (KCNA)<br />
** Certified Kubernetes Application Developer (CKAD)<br />
** Certified Kubernetes Administrator (CKA)<br />
** Certified Kubernetes Security Specialist (CKS) [note: Candidates for CKS must hold a current Certified Kubernetes Administrator (CKA) certification to demonstrate they possess sufficient Kubernetes expertise before sitting for the CKS.]<br />
* [https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals Kubernetes Fundamentals] (LFS258)<br />
** ''[https://www.cncf.io/certification/expert/ Certified Kubernetes Administrator]'' (CKA) certification.<br />
* [https://killer.sh/ CKS / CKA / CKAD Simulator]<br />
* [https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hacked/ 11 Ways (Not) to Get Hacked]<br />
<br />
===Blog posts===<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727 Understanding kubernetes networking: pods] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-services-f0cb48e4cc82 Understanding kubernetes networking: services] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-ingress-1bc341c84078 Understanding kubernetes networking: ingress] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-68d061f7ab5b Kubernetes ConfigMaps and Secrets - Part 1] &mdash; by Sandeep Dinesh, 2017-07-13<br />
* [https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-part-2-3dc37111f0dc Kubernetes ConfigMaps and Secrets - Part 2] &mdash; by Sandeep Dinesh, 2017-08-08<br />
* [https://abhishek-tiwari.com/10-open-source-tools-for-highly-effective-kubernetes-sre-and-ops-teams/ 10 open-source Kubernetes tools for highly effective SRE and Ops Teams]<br />
* [https://www.ianlewis.org/en/tag/kubernetes Series of blog posts about k8s] &mdash; by Ian Lewis<br />
* [https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0 Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?] &mdash; by Sandeep Dinesh, 2018-03-11<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:DevOps]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Etcd&diff=8257Etcd2023-01-19T17:30:20Z<p>Christoph: /* External links */</p>
<hr />
<div>'''etcd''' is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. It gracefully handles leader elections during network partitions and can tolerate machine failure, even in the leader node.<br />
<br />
==Create a single-node etcd cluster running in Docker==<br />
<br />
* Start up a [[Docker]] container for the single-node etcd cluster:<br />
<pre><br />
$ export DATA_DIR="etcd-data"<br />
$ export NODE1=10.x.x.x # <= Your host IP<br />
$ REGISTRY=quay.io/coreos/etcd<br />
$ docker volume create --name etcd-data<br />
$ docker run \<br />
-p 2379:2379 \<br />
-p 2380:2380 \<br />
--volume=${DATA_DIR}:/etcd-data \<br />
--name etcd ${REGISTRY}:latest \<br />
/usr/local/bin/etcd \<br />
--data-dir=/etcd-data --name node1 \<br />
--initial-advertise-peer-urls http://${NODE1}:2380 --listen-peer-urls http://0.0.0.0:2380 \<br />
--advertise-client-urls http://${NODE1}:2379 --listen-client-urls http://0.0.0.0:2379 \<br />
--initial-cluster node1=http://${NODE1}:2380<br />
</pre><br />
<br />
* In a different shell:<br />
<pre><br />
$ docker exec etcd /bin/sh -c "export ETCDCTL_API=3 && /usr/local/bin/etcdctl member list"<br />
5ef3db6412b1adfb, started, node1, http://10.x.x.x:2380, http://10.x.x.x:2379<br />
<br />
#~OR~<br />
<br />
# Install the etcdctl binary locally:<br />
$ go get github.com/coreos/etcd/etcdctl<br />
<br />
$ ~/go/bin/etcdctl --endpoints=http://${NODE1}:2379 member list<br />
5ef3db6412b1adfb, started, node1, http://10.x.x.x:2380, http://10.x.x.x:2379, false<br />
<br />
$ ~/go/bin/etcdctl --endpoints=http://${NODE1}:2379 -w table member list<br />
+------------------+---------+-------+----------------------+----------------------+------------+<br />
|        ID        | STATUS  | NAME  |      PEER ADDRS      |     CLIENT ADDRS     | IS LEARNER |<br />
+------------------+---------+-------+----------------------+----------------------+------------+<br />
| 5ef3db6412b1adfb | started | node1 | http://10.x.x.x:2380 | http://10.x.x.x:2379 |   false    |<br />
+------------------+---------+-------+----------------------+----------------------+------------+<br />
<br />
$ ~/go/bin/etcdctl --endpoints=http://${NODE1}:2379 -w table endpoint --cluster status<br />
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+<br />
|       ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |<br />
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+<br />
| http://10.x.x.x:2379 | 5ef3db6412b1adfb |  3.3.8  |  20 kB  |   true    |   false    |     4     |     9      |         0          |        |<br />
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+<br />
<br />
$ ~/go/bin/etcdctl --endpoints=http://${NODE1}:2379 put foo1 bar1<br />
$ ~/go/bin/etcdctl --endpoints=http://${NODE1}:2379 get foo1<br />
</pre><br />
<br />
==Miscellaneous==<br />
<br />
<pre><br />
$ ~/go/bin/etcdctl --write-out=table snapshot status 2020-05-05T16\:41\:38Z_etcd <br />
+----------+----------+------------+------------+<br />
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |<br />
+----------+----------+------------+------------+<br />
| c556b896 | 20870551 |      13924 |     187 MB |<br />
+----------+----------+------------+------------+<br />
</pre><br />
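<br />
A snapshot file like the one inspected above can be produced with <code>snapshot save</code>; a sketch against the single-node cluster from earlier (the output path is illustrative):<br />
$ ~/go/bin/etcdctl --endpoints=http://${NODE1}:2379 snapshot save /tmp/etcd-backup.db<br />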
<br />
==External links==<br />
* [https://etcd.io/ Official website]<br />
* [https://etcd.io/docs/v3.4/op-guide/performance/ etcd Performance Guide]<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:DevOps]]<br />
[[Category:Linux Command Line Tools]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Containerd&diff=8256Containerd2023-01-18T21:07:41Z<p>Christoph: /* crictl */</p>
<hr />
<div>'''Containerd''' is an industry-standard core container runtime. It is currently available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system. In 2015, Docker donated the OCI Specification to The Linux Foundation with a reference implementation called runc. Its general availability and intention to donate the project to CNCF were announced by Docker in 2017, and since 28 February 2019 it has been a graduated CNCF project.<br />
<br />
==crictl==<br />
<br />
<code>crictl</code> is a command-line interface for CRI-compatible container runtimes. You can use it to inspect and debug container runtimes and applications on a [[Kubernetes]] node. <code>crictl</code> and its source are hosted in the [https://github.com/kubernetes-sigs/cri-tools/releases cri-tools] repository.<br />
<br />
===Installing===<br />
<br />
NOTE: <code>crictl</code> requires a Linux operating system with a CRI runtime.<br />
<br />
Download a compressed archive of <code>crictl</code> from the [https://github.com/kubernetes-sigs/cri-tools/releases cri-tools release page] (builds are available for several architectures). Download the version that corresponds to your version of Kubernetes. Extract it and move it to a location on your system path, such as <code>/usr/local/bin/</code>.<br />
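<br />
For example, on a 64-bit Linux node, something like the following works (the version below is illustrative; pick the release matching your cluster):<br />
<pre><br />
$ VERSION="v1.26.0"<br />
$ curl -LO https://github.com/kubernetes-sigs/cri-tools/releases/download/${VERSION}/crictl-${VERSION}-linux-amd64.tar.gz<br />
$ sudo tar -C /usr/local/bin -xzf crictl-${VERSION}-linux-amd64.tar.gz<br />
</pre><br />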
<br />
==Containerd and Kubernetes==<br />
<br />
Kubernetes nodes use the ''container runtime'' to launch, manage, and stop containers running in Pods. The Kubernetes project is removing built-in support for the Docker runtime in Kubernetes version 1.24 and later. To achieve this, Kubernetes is removing a component called ''dockershim'', which allows Docker to communicate with Kubernetes components like the kubelet.<br />
<br />
The containerd runtime is an industry-standard container runtime that is supported by Kubernetes, and used by many other projects. The containerd runtime provides the layering abstraction that allows for the implementation of a rich set of features like gVisor and Image streaming to extend GKE functionality.<br />
<br />
The containerd runtime is considered more resource efficient and secure than the Docker runtime.<br />
<br />
==Troubleshooting containers==<br />
<br />
For debugging or troubleshooting on Linux nodes, you can interact with containerd using the portable command-line tool built for Kubernetes container runtimes: <code>crictl</code>. <code>crictl</code> supports common functionalities to view containers and images, read logs, and execute commands in the containers. Refer to the <code>crictl</code> [https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/ user guide] for the complete set of supported features and usage information.<br />
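<br />
A few common invocations (<code>crictl</code> typically needs root in order to talk to the runtime socket):<br />
<pre><br />
$ sudo crictl ps      # list running containers<br />
$ sudo crictl pods    # list Pod sandboxes<br />
$ sudo crictl images  # list images<br />
$ sudo crictl logs <container-id><br />
$ sudo crictl exec -it <container-id> sh<br />
</pre><br />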
<br />
View container logs with [[systemd]]:<br />
<pre><br />
$ journalctl -u containerd<br />
</pre><br />
<br />
==See also==<br />
* [[Docker]]<br />
<br />
==External links==<br />
* [https://containerd.io/ Official website]<br />
* [https://github.com/kubernetes-sigs/cri-tools/releases Download latest release(s) of cri-tools]<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:Linux Command Line Tools]]<br />
[[Category:DevOps]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Containerd&diff=8255Containerd2023-01-18T21:07:18Z<p>Christoph: </p>
<hr />
<div>'''Containerd''' is an industry-standard core container runtime. It is currently available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system. In 2015, Docker donated the OCI Specification to The Linux Foundation with a reference implementation called runc. Since 28 February 2019, it is an official CNCF project. Its general availability and intention to donate the project to CNCF was announced by Docker in 2017.<br />
<br />
==crictl==<br />
<br />
<code>crictl</code> is a command-line interface for CRI-compatible container runtimes. You can use it to inspect and debug container runtimes and applications on a [[Kubernetes]] node. <code>crictl</code> and its source are hosted in the [https://github.com/kubernetes-sigs/cri-tools/releases cri-tools] repository.<br />
<br />
<br />
<br />
===Installing===<br />
<br />
NOTE: <code>crictl</code> requires a Linux operating system with a CRI runtime.<br />
<br />
Download a compressed archive <code>crictl</code> from the [https://github.com/kubernetes-sigs/cri-tools/releases cri-tools release page], for several different architectures. Download the version that corresponds to your version of Kubernetes. Extract it and move it to a location on your system path, such as <code>/usr/local/bin/</code>.<br />
<br />
<br />
<br />
==Containerd and Kubernetes==<br />
<br />
Kubernetes nodes use the ''container runtime'' to launch, manage, and stop containers running in Pods. The Kubernetes project is removing built-in support for the Docker runtime in Kubernetes version 1.24 and later. To achieve this, Kubernetes is removing a component called ''dockershim'', which allows Docker to communicate with Kubernetes components like the kubelet.<br />
<br />
The containerd runtime is an industry-standard container runtime that is supported by Kubernetes, and used by many other projects. The containerd runtime provides the layering abstraction that allows for the implementation of a rich set of features like gVisor and Image streaming to extend GKE functionality.<br />
<br />
The containerd runtime is considered more resource efficient and secure than the Docker runtime.<br />
<br />
==Troubleshooting containers==<br />
<br />
For debugging or troubleshooting on Linux nodes, you can interact with containerd using the portable command-line tool built for Kubernetes container runtimes: <code>crictl</code>. <code>crictl</code> supports common functionalities to view containers and images, read logs, and execute commands in the containers. Refer to the <code>crictl</code> [https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/ user guide] for the complete set of supported features and usage information.<br />
<br />
View container logs with [[systemd]]:<br />
<pre><br />
$ journalctl -u containerd<br />
</pre><br />
<br />
==See also==<br />
* [[Docker]]<br />
<br />
==External links==<br />
* [https://containerd.io/ Official website]<br />
* [https://github.com/kubernetes-sigs/cri-tools/releases Download latest release(s) of cri-tools]<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:Linux Command Line Tools]]<br />
[[Category:DevOps]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Docker&diff=8254Docker2023-01-18T20:49:41Z<p>Christoph: /* References */</p>
<hr />
<div>'''Docker''' is an open-source project that automates the deployment of applications inside software containers. Quoting the feature description from the Docker web page:<br />
:Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.<ref>https://www.docker.com/what-docker</ref><br />
<br />
==Introduction==<br />
<br />
''Note: The following is based on content found on the official [https://www.docker.com/what-container Docker website], [[:wikipedia:Docker (software)|Wikipedia]], and various other locations.''<br />
<br />
A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings, for example differences between development and staging environments and help reduce conflicts between teams running different software on the same infrastructure.<br />
<br />
; Lightweight : Docker containers running on a single machine share that machine's operating system kernel; they start instantly and use less compute and RAM. Images are constructed from filesystem layers and share common files. This minimizes disk usage and image downloads are much faster.<br />
; Standard : Docker containers are based on open standards and run on all major Linux distributions, Microsoft Windows, and on any infrastructure including VMs, bare-metal and in the cloud.<br />
; Secure : Docker containers isolate applications from one another and from the underlying infrastructure. Docker provides the strongest default isolation to limit app issues to a single container instead of the entire machine.<br />
<br />
As actions are done to a Docker base image, union file-system layers are created and documented, such that each layer fully describes how to recreate an action. This strategy enables Docker's lightweight images, as only layer updates need to be propagated (compared to full VMs, for example).<br />
<br />
Building on top of facilities provided by the Linux kernel (primarily cgroups and namespaces), a Docker container, unlike a virtual machine, does not require or include a separate operating system. Instead, it relies on the kernel's functionality and uses resource isolation for CPU and memory, and separate namespaces to isolate the application's view of the operating system. Docker accesses the Linux kernel's virtualization features directly using the <code>libcontainer</code> library (written in the Go programming language).<br />
<br />
===Comparing Containers and Virtual Machines===<br />
<br />
Containers and virtual machines have similar resource isolation and allocation benefits, but function differently because containers virtualize the operating system instead of hardware. Containers are more portable and efficient.<br />
<br />
; Virtual Machines : Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, one or more apps, necessary binaries and libraries - taking up tens of GBs. VMs can also be slow to boot.<br />
; Containers : Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space. Containers take up less space than VMs (container images are typically tens of MBs in size), and start almost instantly.<br />
<br />
===Components===<br />
<br />
The Docker software as a service offering consists of three components:<br />
<br />
; Software : The Docker daemon, called "<code>dockerd</code>" is a persistent process that manages Docker containers and handles container objects. The daemon listens for API requests sent by the Docker Engine API. The Docker client, which identifies itself as "<code>docker</code>", allows users to interact with Docker through CLI. It uses the Docker REST API to communicate with one or more Docker daemons.<br />
; Objects : Docker objects refer to different entities used to assemble an application in Docker. The main Docker objects are images, containers, and services.<br />
:* A Docker container is a standardized, encapsulated environment that runs applications. A container is managed using the Docker API or CLI.<br />
:* A Docker image is a read-only template used to build containers. Images are used to store and ship applications.<br />
:* A Docker service allows containers to be scaled across multiple Docker daemons. The result is known as a "swarm", cooperating daemons that communicate through the Docker API.<br />
; Registries : A Docker registry is a repository for Docker images. Docker clients connect to registries to download ("pull") images for use or upload ("push") images that they have built. Registries can be public or private. Two main public registries are Docker Hub and Docker Cloud. Docker Hub is the default registry where Docker looks for images.<br />
<br />
==Docker commands==<br />
<br />
I will provide detailed examples on all of the following commands throughout this article.<br />
<br />
; Basics<br />
<br />
The following are the most common Docker commands (i.e., the ones you will most likely use the most day-to-day):<br />
<br />
* Show all running containers:<br />
$ docker ps<br />
* Show all containers (including stopped and failed ones):<br />
$ docker ps -a<br />
* Show all images in your local repository:<br />
$ docker images<br />
* Create an image based on the instructions in a <code>Dockerfile</code>:<br />
$ docker build<br />
* Start a container from an image (either from your local repository or from a remote repository {e.g., Docker Hub}):<br />
$ docker run<br />
* Remove/delete all ''stopped''/''failed'' containers (leaves running containers alone):<br />
$ docker rm $(docker ps -a -q)<br />
<br />
===Container commands===<br />
<br />
; Container lifecycle<br />
<br />
* Create a container but do not start it:<br />
$ docker create<br />
* Rename a container:<br />
$ docker rename<br />
* Create ''and'' start a container in one operation:<br />
$ docker run<br />
* Delete a container:<br />
$ docker rm<br />
* Update a container's resource limits:<br />
$ docker update<br />
<br />
; Starting and stopping containers<br />
<br />
* Start a container:<br />
$ docker start<br />
* Stop a running container:<br />
$ docker stop<br />
* Stop and then start a container:<br />
$ docker restart<br />
* Pause a running container ("freeze" it in place):<br />
$ docker pause<br />
* Un-pause a paused container:<br />
$ docker unpause<br />
* Attach/connect to a running container:<br />
$ docker attach<br />
* Block until running container stops (and print exit code):<br />
$ docker wait<br />
* Send <code>SIGKILL</code> to a running container:<br />
$ docker kill<br />
<br />
; Information<br />
<br />
* Show all ''running'' containers:<br />
$ docker ps<br />
* Get the logs for a given container:<br />
$ docker logs<br />
* Get all of the metadata about a container (e.g., IP address, etc.):<br />
$ docker inspect<br />
* Get real-time events from Docker Engine (e.g., start/stop containers, attach, create, etc.):<br />
$ docker events<br />
* Get the public-facing ports of a given container:<br />
$ docker port<br />
* Show running processes in a given container:<br />
$ docker top<br />
* Show a given container's resource usage statistics:<br />
$ docker stats<br />
* Show changed files in the container's filesystem (i.e., those changed from the original base image):<br />
$ docker diff<br />
<br />
; Miscellaneous<br />
<br />
* Get the environment variables for a given container:<br />
$ docker run ubuntu env<br />
* IP address of host machine:<br />
$ ip -4 -o addr show eth0<br />
2: eth0 inet 10.0.0.166/23<br />
* IP address of a container:<br />
$ docker run ubuntu ip -4 -o addr show eth0<br />
2: eth0 inet 172.17.0.2/16<br />
<br />
===Image commands===<br />
<br />
; Lifecycle<br />
* Show all images in your local repository:<br />
$ docker images<br />
* Create an image from a tarball:<br />
$ docker import<br />
* Create an image from a <code>Dockerfile</code><br />
$ docker build<br />
* Create an image from a container (note: it will pause the container, if it is running, during the commit process):<br />
$ docker commit<br />
* Remove/delete an image:<br />
$ docker rmi<br />
* Load an image from a tarball as STDIN (including images and tags):<br />
$ docker load<br />
* Save an image to a tarball (streamed to STDOUT with all parent layers, tags, and versions):<br />
$ docker save<br />
<br />
; Info<br />
<br />
* Show the history of an image:<br />
$ docker history<br />
* Tag an image:<br />
$ docker tag<br />
<br />
==Dockerfile directives==<br />
<br />
=== USER ===<br />
<pre><br />
$ cat << EOF > Dockerfile<br />
# Non-privileged user entry<br />
FROM centos:latest<br />
MAINTAINER xtof@example.com<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
USER xtof<br />
EOF<br />
</pre><br />
''Note: The use of <code>MAINTAINER</code> has been deprecated in newer versions of Docker. You should use <code>LABEL</code> instead, as it is much more flexible and its key/values show up in <code>docker inspect</code>. From here forward, I will only use <code>LABEL</code>.''<br />
<br />
$ docker build -t centos7/nonroot:v1 .<br />
$ docker exec -it <container_name> /bin/bash<br />
<br />
We are user "xtof" and are unable to become root. The workaround (i.e., how to become root) is like so:<br />
<br />
$ docker exec -u 0 -it <container_name> /bin/bash<br />
<br />
''NOTE: For the remainder of this section, I will omit the <code>$ cat << EOF > Dockerfile</code> part in the examples for brevity.''<br />
<br />
=== RUN ===<br />
<br />
Notes on the order of execution<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
USER xtof<br />
<br />
RUN echo "export PATH=/path/to/my/app:$PATH" >> /etc/bashrc<br />
</pre><br />
<br />
$ docker build -t centos7/config:v1 .<br />
...<br />
/bin/sh: /etc/bashrc: Permission denied<br />
<br />
The order of execution matters! Prior to the directive <code>USER xtof</code>, the user was root. After that directive, the user is now xtof, who does not have super-user privileges. Move the <code>RUN echo ...</code> directive to before the <code>USER xtof</code> directive for a successful build.<br />
<br />
=== ENV ===<br />
''Note: The following is a '''terrible''' way of building a container. I am purposely doing it this way so I can show you a much better way later (see below).''<br />
<br />
* Build a CentOS 7 Docker image with Java 8 installed:<br />
<pre><br />
# SEE: https://gist.github.com/P7h/9741922 for various Java versions<br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN yum update -y<br />
RUN yum install -y net-tools wget<br />
<br />
RUN echo "SETTING UP JAVA"<br />
# The tarball method:<br />
#RUN cd ~ && wget --no-cookies --no-check-certificate \<br />
# --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \<br />
# "http://download.oracle.com/otn-pub/java/jdk/8u91-b14/jdk-8u91-linux-x64.tar.gz"<br />
#RUN tar xzvf jdk-8u91-linux-x64.tar.gz<br />
#RUN mv jdk1.8.0_91 /opt<br />
#ENV JAVA_HOME /opt/jdk1.8.0_91/<br />
<br />
# The rpm method:<br />
RUN cd ~ && wget --no-cookies --no-check-certificate \<br />
--header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \<br />
"http://download.oracle.com/otn-pub/java/jdk/8u161-b12/2f38c3b165be4555a1fa6e98c45e0808/jdk-8u161-linux-x64.rpm"<br />
RUN yum localinstall -y /root/jdk-8u161-linux-x64.rpm<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
USER xtof<br />
<br />
# User specific environment variable<br />
RUN cd ~ && echo "export JAVA_HOME=/usr/java/jdk1.8.0_161/jre" >> ~/.bashrc<br />
# Global (system-wide) environment variable<br />
ENV JAVA_BIN /usr/java/jdk1.8.0_161/jre/bin<br />
</pre><br />
<br />
$ docker build -t centos7/java8:v1 .<br />
<br />
=== CMD vs. RUN ===<br />
<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
CMD ["echo", "Hello from within my container"]<br />
</pre><br />
<br />
The <code>CMD</code> directive ''only'' executes when the container is started, whereas the <code>RUN</code> directive is executed during the build of the image.<br />
<br />
$ docker build -t centos7/echo:v1 .<br />
$ docker run centos7/echo:v1<br />
Hello from within my container<br />
<br />
The container starts, echos out that message, then exits.<br />
<br />
=== ENTRYPOINT ===<br />
<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
ENTRYPOINT "This command will display this message on EVERY container that is run from it"<br />
</pre><br />
<br />
$ docker build -t centos7/entry:v1 .<br />
$ docker run centos7/entry:v1<br />
This command will display this message on EVERY container that is run from it<br />
$ docker run centos7/entry:v1 /bin/echo "Can you see me?"<br />
This command will display this message on EVERY container that is run from it<br />
$ docker run centos7/echo:v1 /bin/echo "Can you see me?"<br />
Can you see me?<br />
<br />
Note the difference.<br />
<br />
=== EXPOSE ===<br />
<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN yum update -y<br />
RUN yum install -y httpd net-tools<br />
<br />
RUN echo "This is a custom index file built during the image creation" > /var/www/html/index.html<br />
<br />
ENTRYPOINT apachectl -DFOREGROUND # BAD WAY TO DO THIS!<br />
</pre><br />
<br />
$ docker build -t centos7/apache:v1 .<br />
$ docker run -d --name webserver centos7/apache:v1<br />
$ docker exec webserver /bin/cat /var/www/html/index.html<br />
This is a custom index file built during the image creation<br />
$ docker inspect webserver -f '<nowiki>{{.NetworkSettings.IPAddress}}</nowiki>' # => 172.17.0.6<br />
#~OR~<br />
$ docker inspect webserver | jq -crM '.[] | .NetworkSettings.IPAddress' # => 172.17.0.6<br />
$ curl 172.17.0.6<br />
This is a custom index file built during the image creation<br />
$ curl -sI 172.17.0.6 | awk '/^HTTP|^Server/{print}'<br />
HTTP/1.1 200 OK<br />
Server: Apache/2.4.6 (CentOS)<br />
$ time docker stop webserver<br />
real 0m10.275s # <- notice how long it took to stop the container<br />
user 0m0.008s<br />
sys 0m0.000s<br />
$ docker rm webserver<br />
<br />
It took ~10 seconds to stop the above container. This is because of the way we are (incorrectly) using <code>ENTRYPOINT</code>: the shell form wraps <code>apachectl</code> in <code>/bin/sh -c</code>, so the <code>SIGTERM</code> sent by <code>`docker stop webserver`</code> never reaches the Apache process, and Docker only kills the container after its 10-second timeout. A much better method is shown below, which ''will'' exit gracefully and in less than 300 ms.<br />
<br />
* Expose ports from the CLI<br />
$ docker run -d --name webserver -p 8080:80 centos7/apache:v1<br />
$ curl localhost:8080<br />
This is a custom index file built during the image creation<br />
$ docker stop webserver && docker rm webserver<br />
<br />
* Explicitly expose a port in the Docker image:<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN yum update -y && \<br />
yum install -y httpd net-tools && \<br />
yum autoremove -y && \<br />
echo "This is a custom index file built during the image creation" > /var/www/html/index.html<br />
<br />
EXPOSE 80<br />
<br />
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]<br />
</pre><br />
<br />
$ docker build -t centos7/apache:v1 .<br />
$ docker run -d --rm --name webserver -P centos7/apache:v1<br />
$ docker container ls --format '<nowiki>{{.Names}} {{.Ports}}</nowiki>'<br />
webserver 0.0.0.0:32769->80/tcp<br />
#~OR~<br />
$ docker port webserver | cut -d: -f2<br />
32769<br />
#~OR~<br />
$ docker inspect webserver | jq -crM '[.[] | .NetworkSettings.Ports."80/tcp"[] | .HostPort] | .[]'<br />
32769<br />
$ curl localhost:32769<br />
This is a custom index file built during the image creation<br />
$ time docker stop webserver<br />
real 0m0.283s<br />
user 0m0.004s<br />
sys 0m0.008s<br />
<br />
Note that I passed <code>--rm</code> to the <code>`docker run`</code> command so that the container will be removed when I stop the container. Also note how much faster the container stopped (~300ms vs. 10 seconds above).<br />
<br />
==Container volume management==<br />
<br />
$ docker run -it --name voltest -v /mydata centos:latest /bin/bash<br />
[root@bffdcb88c485 /]# df -h<br />
Filesystem Size Used Avail Use% Mounted on<br />
none 213G 173G 30G 86% /<br />
tmpfs 7.8G 0 7.8G 0% /dev<br />
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup<br />
/dev/mapper/ubuntu--vg-root 213G 173G 30G 86% /mydata<br />
shm 64M 0 64M 0% /dev/shm<br />
tmpfs 7.8G 0 7.8G 0% /sys/firmware<br />
[root@bffdcb88c485 /]# echo "testing" >/mydata/mytext.txt<br />
$ docker inspect voltest | jq -crM '.[] | .Mounts[].Source'<br />
/var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data<br />
$ sudo cat /var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data/mytext.txt<br />
testing<br />
$ sudo /bin/bash -c \<br />
"echo 'this is from the host OS' >/var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data/host.txt"<br />
[root@bffdcb88c485 /]# cat /mydata/host.txt <br />
this is from the host OS<br />
<br />
* Cleanup<br />
$ docker rm voltest<br />
$ docker volume rm 2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b<br />
<br />
* Mount host's current working directory inside container:<br />
$ echo "my config" >my.conf<br />
$ echo "my message" >message.txt<br />
$ echo "aerwr3adf" >app.bin<br />
$ chmod +x app.bin<br />
$ docker run -it --name voltest -v ${PWD}:/mydata centos:latest /bin/bash<br />
[root@f5f34ccb54fb /]# ls -l /mydata/<br />
total 24<br />
-rwxrwxr-x 1 1000 1000 10 Mar 8 19:29 app.bin<br />
-rw-rw-r-- 1 1000 1000 11 Mar 8 19:29 message.txt<br />
-rw-rw-r-- 1 1000 1000 10 Mar 8 19:29 my.conf<br />
[root@f5f34ccb54fb /]# touch /mydata/foobar<br />
$ ls -l ${PWD}<br />
total 24<br />
-rwxrwxr-x 1 xtof xtof 10 Mar 8 11:29 app.bin<br />
-rw-r--r-- 1 root root 0 Mar 8 11:36 foobar<br />
-rw-rw-r-- 1 xtof xtof 11 Mar 8 11:29 message.txt<br />
-rw-rw-r-- 1 xtof xtof 10 Mar 8 11:29 my.conf<br />
$ docker rm voltest<br />
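<br />
* Named volumes avoid the long auto-generated volume IDs seen earlier and can be referenced by name; a minimal sketch:<br />
$ docker volume create mydata<br />
$ docker run -it --rm --name voltest -v mydata:/mydata centos:latest /bin/bash<br />
$ docker volume inspect mydata<br />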
<br />
==Images==<br />
<br />
===Saving and loading images===<br />
<br />
$ docker pull centos:latest<br />
$ docker run -it centos:latest /bin/bash<br />
[root@29fad368048c /]# yum update -y<br />
[root@29fad368048c /]# echo xtof >/root/built_by.txt<br />
$ docker commit reverent_elion centos:xtof<br />
$ docker rm reverent_elion<br />
$ docker images<br />
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE<br />
centos       xtof     e0c8bd35ba50   3 seconds ago   463MB<br />
centos       latest   980e0e4c79ec   1 minute ago    197MB<br />
$ docker history centos:xtof<br />
IMAGE          CREATED          CREATED BY                                      SIZE<br />
e0c8bd35ba50   27 seconds ago   /bin/bash                                       266MB<br />
980e0e4c79ec   18 months ago    /bin/sh -c #(nop) CMD ["/bin/bash"]             0B<br />
<missing>      18 months ago    /bin/sh -c #(nop) LABEL name=CentOS Base ...    0B<br />
<missing>      18 months ago    /bin/sh -c #(nop) ADD file:e336b45186086f7...   197MB<br />
<missing>      18 months ago    /bin/sh -c #(nop) MAINTAINER <nowiki>https://gith...</nowiki>   0B<br />
<br />
* Save the original <code>centos:latest</code> image we pulled from Docker Hub:<br />
$ docker save --output centos-latest.tar centos:latest<br />
<br />
Note that the above command essentially tars up the contents of the image found in the <code>/var/lib/docker/image</code> directory.<br />
<br />
$ tar tvf centos-latest.tar <br />
-rw-r--r-- 0/0 2309 2016-09-06 14:10 980e0e4c79ec933406e467a296ce3b86685e6b42eed2f873745e6a91d718e37a.json<br />
drwxr-xr-x 0/0 0 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/<br />
-rw-r--r-- 0/0 3 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/VERSION<br />
-rw-r--r-- 0/0 1391 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/json<br />
-rw-r--r-- 0/0 204305920 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/layer.tar<br />
-rw-r--r-- 0/0 202 1969-12-31 16:00 manifest.json<br />
-rw-r--r-- 0/0 89 1969-12-31 16:00 repositories<br />
<br />
* Save space by compressing the tar file:<br />
$ gzip centos-latest.tar # .tar -> 195M; .tar.gz -> 68M<br />
<br />
* Delete the original <code>centos:latest</code> image:<br />
$ docker rmi centos:latest<br />
<br />
* Restore (or load) the image back to our local repository:<br />
$ docker load --input centos-latest.tar.gz<br />
<br />
===Tagging images===<br />
<br />
* List our current images:<br />
$ docker images<br />
REPOSITORY   TAG    IMAGE ID       CREATED             SIZE<br />
centos       xtof   e0c8bd35ba50   About an hour ago   463MB<br />
<br />
* Tag the above image:<br />
$ docker tag e0c8bd35ba50 xtof/centos:v1<br />
$ docker images<br />
REPOSITORY    TAG    IMAGE ID       CREATED             SIZE<br />
centos        xtof   e0c8bd35ba50   About an hour ago   463MB<br />
xtof/centos   v1     e0c8bd35ba50   About an hour ago   463MB<br />
<br />
Note that we did not create a new image, we just created a new tag of the same/original <code>centos:xtof</code> image.<br />
<br />
Note: The maximum number of characters in a tag is 128.<br />
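<br />
Once tagged in the <code>user/repo:tag</code> form, the image can be pushed to Docker Hub (assuming "xtof" is a Docker Hub account you are logged in to):<br />
$ docker login<br />
$ docker push xtof/centos:v1<br />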
<br />
==Docker networking==<br />
<br />
===Default networks===<br />
$ ip addr show docker0<br />
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default <br />
link/ether 02:42:c0:75:70:13 brd ff:ff:ff:ff:ff:ff<br />
inet 172.17.0.1/16 scope global docker0<br />
valid_lft forever preferred_lft forever<br />
inet6 fe80::42:c0ff:fe75:7013/64 scope link <br />
valid_lft forever preferred_lft forever<br />
#~OR~<br />
$ ifconfig docker0<br />
docker0 Link encap:Ethernet HWaddr 02:42:c0:75:70:13 <br />
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0<br />
inet6 addr: fe80::42:c0ff:fe75:7013/64 Scope:Link<br />
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1<br />
RX packets:420654 errors:0 dropped:0 overruns:0 frame:0<br />
TX packets:1162975 errors:0 dropped:0 overruns:0 carrier:0<br />
collisions:0 txqueuelen:0 <br />
RX bytes:85851647 (85.8 MB) TX bytes:1196235716 (1.1 GB)<br />
<br />
$ docker network inspect bridge | jq '.[] | .IPAM.Config[].Subnet'<br />
"172.17.0.0/16"<br />
So, the usable range of IP addresses in our 172.17.0.0/16 subnet is: 172.17.0.1 - 172.17.255.254<br />
<br />
$ docker network ls<br />
NETWORK ID NAME DRIVER SCOPE<br />
bf831059febc bridge bridge local<br />
266f6df5c44e host host local<br />
ce79e4043a20 none null local<br />
$ docker ps -q | wc -l<br />
#~OR~<br />
$ docker container ls --format '<nowiki>{{.Names}}</nowiki>' | wc -l<br />
4 # => 4 running containers<br />
$ docker network inspect bridge | jq '.[] | .Containers[].IPv4Address'<br />
"172.17.0.2/16"<br />
"172.17.0.5/16"<br />
"172.17.0.4/16"<br />
"172.17.0.3/16"<br />
The output of the last command lists the IP addresses of the 4 containers currently running on my host.<br />
<br />
===Custom networks===<br />
* Create a Docker network<br />
$ man docker-network-create # for details<br />
$ docker network create --subnet 10.1.0.0/16 --gateway 10.1.0.1 --ip-range=10.1.4.0/24 \<br />
--driver=bridge --label=host4network br04<br />
<br />
* Use the above network with a given container:<br />
$ docker run -it --name net-test --net br04 centos:latest /bin/bash<br />
<br />
* Assign a static IP to a given container in the above (user created) network:<br />
$ docker run -it --name net-test --net br04 --ip 10.1.4.100 centos:latest /bin/bash<br />
<br />
Note: You can ''only'' assign static IPs to user-created networks (i.e., you ''cannot'' assign them to the default "bridge" network).<br />
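<br />
You can verify the assignment from the host; a sketch using the container started above:<br />
$ docker inspect net-test | jq -crM '.[] | .NetworkSettings.Networks.br04.IPAddress'<br />
10.1.4.100<br />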
<br />
==Monitoring==<br />
<br />
$ docker top <container_name><br />
$ docker stats <container_name><br />
<br />
===Logs===<br />
<br />
* Fetch logs of a given container:<br />
$ docker logs <container_name><br />
<br />
* Fetch logs of a given container prefixed with timestamps (UTC format by default):<br />
$ docker logs --timestamps <container_name><br />
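<br />
* Follow a container's log output live, starting from the last 20 lines:<br />
$ docker logs --tail 20 --follow <container_name><br />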
<br />
===Events===<br />
$ docker events<br />
$ docker events --since '1h'<br />
$ docker events --since '2018-03-08T16:00'<br />
$ docker events --filter event=attach<br />
$ docker events --filter event=destroy<br />
$ docker events --filter event=attach --filter event=die --filter event=stop<br />
<br />
==Cleanup==<br />
<br />
* Check local system disk usage:<br />
<pre><br />
$ docker system df<br />
TYPE            TOTAL   ACTIVE   SIZE      RECLAIMABLE<br />
Images          53      3        16.52GB   15.9GB (96%)<br />
Containers      3       1        438.9MB   0B (0%)<br />
Local Volumes   16      2        2.757GB   2.628GB (95%)<br />
Build Cache     0       0        0B        0B<br />
</pre><br />
<br />
Note: Use <code>docker system df --verbose</code> to get even more details.<br />
<br />
* Delete all stopped containers at once and reclaim the disk space they are using:<br />
$ docker container prune<br />
<br />
* Remove all containers (both the running ones and the stopped ones):<br />
<pre><br />
# Old method:<br />
$ docker rm -f $(docker ps -aq)<br />
# New method:<br />
$ docker container rm -f $(docker container ls -aq)<br />
</pre><br />
Note: It is often useful to use the <code>--rm</code> flag when running a container so that it is automatically removed when its PID 1 process is stopped, thus releasing unused disk immediately.<br />
<br />
* Clean up everything all at once ('''CAREFUL!'''):<br />
<pre><br />
$ docker system prune<br />
WARNING! This will remove:<br />
- all stopped containers<br />
- all networks not used by at least one container<br />
- all dangling images<br />
- all dangling build cache<br />
Are you sure you want to continue? [y/N]<br />
</pre><br />
<br />
==Examples==<br />
<br />
===Simple Nginx server===<br />
<br />
* Create an index.html file:<br />
<pre><br />
$ mkdir html<br />
$ cat << EOF >html/index.html<br />
Hello from Docker<br />
EOF<br />
</pre><br />
<br />
* Create a Dockerfile:<br />
<pre><br />
FROM nginx<br />
COPY html /usr/share/nginx/html<br />
</pre><br />
<br />
* Build the image:<br />
$ docker build -t test-nginx .<br />
<br />
* Start up container, using image built above:<br />
$ docker run --name check-nginx -d -p 8080:80 test-nginx<br />
<br />
* Check that it works:<br />
$ curl <nowiki>http://localhost:8080</nowiki><br />
Hello from Docker<br />
<br />
===Connecting two containers===<br />
<br />
In this example, we will start up a Postgres container and then start up another container and make a connection to the original Postgres container:<br />
<br />
$ docker pull postgres<br />
$ docker run --name test-postgres -e POSTGRES_PASSWORD=mypassword -d postgres<br />
$ docker run -it --rm --link test-postgres:postgres postgres psql -h postgres -U postgres<br />
<pre><br />
Password for user postgres:<br />
psql (11.0 (Debian 11.0-1.pgdg90+2))<br />
Type "help" for help.<br />
<br />
postgres=# SELECT 1;<br />
?column?<br />
----------<br />
1<br />
(1 row)<br />
<br />
postgres=# \q<br />
</pre><br />
<br />
Connection was successful!<br />
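<br />
Note: <code>--link</code> is a legacy feature. The same connectivity can be achieved with a user-defined bridge network, on which containers resolve each other by name; a sketch (the network name is arbitrary):<br />
$ docker network create pgnet<br />
$ docker run --name test-postgres --net pgnet -e POSTGRES_PASSWORD=mypassword -d postgres<br />
$ docker run -it --rm --net pgnet postgres psql -h test-postgres -U postgres<br />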
<br />
===Support for various hardware platforms===<br />
<br />
NOTE: If your image is being created on an M1 chip (ARM64) but you want to execute the container on an AMD64 chip, then use <code>FROM --platform=linux/amd64</code> in your Dockerfile so the image can be shipped anywhere. For example:<br />
<pre><br />
FROM node:current-alpine3.15<br />
#FROM --platform=linux/amd64 node:current-alpine3.15<br />
WORKDIR /app<br />
ADD . /app<br />
RUN npm install<br />
#RUN npm install express<br />
EXPOSE 3000<br />
CMD ["npm", "start"]<br />
</pre><br />
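<br />
Alternatively, recent Docker releases can cross-build for several platforms in one step with <code>buildx</code>; a sketch (the image name is illustrative, and <code>--push</code> assumes you are logged in to a registry):<br />
$ docker buildx build --platform linux/amd64,linux/arm64 -t xtof/myapp:latest --push .<br />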
<br />
==Docker compose==<br />
<br />
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the [https://docs.docker.com/compose/overview/#features list of features].<br />
<br />
Using Compose is basically a three-step process:<br />
# Define your app's environment with a <code>Dockerfile</code> so it can be reproduced anywhere.<br />
# Define the services that make up your app in <code>docker-compose.yml</code> so they can be run together in an isolated environment.<br />
# Run <code>docker-compose up</code> and Compose starts and runs your entire app.<br />
<br />
===Basic example===<br />
<br />
''Note: This is based off of [https://docs.docker.com/compose/gettingstarted/ this article].''<br />
<br />
In this basic example, we will build a simple Python web application running on Docker Compose. The application uses the Flask framework and maintains a hit counter in Redis.<br />
<br />
''Note: This section assumes you already have Docker Engine and [https://docs.docker.com/compose/install/#install-compose Docker Compose] installed.''<br />
<br />
* Create a directory for the project:<br />
$ mkdir compose-test && cd $_<br />
<br />
* Create a file called <code>app.py</code> in your project directory and paste this in:<br />
<pre><br />
import time<br />
import redis<br />
from flask import Flask<br />
<br />
<br />
app = Flask(__name__)<br />
cache = redis.Redis(host='redis', port=6379)<br />
<br />
<br />
def get_hit_count():<br />
    retries = 5<br />
    while True:<br />
        try:<br />
            return cache.incr('hits')<br />
        except redis.exceptions.ConnectionError as exc:<br />
            if retries == 0:<br />
                raise exc<br />
            retries -= 1<br />
            time.sleep(0.5)<br />
<br />
<br />
@app.route('/')<br />
def hello():<br />
    count = get_hit_count()<br />
    return 'Hello World! I have been seen {} times.\n'.format(count)<br />
<br />
if __name__ == "__main__":<br />
    app.run(host="0.0.0.0", debug=True)<br />
<br />
In this example, <code>redis</code> is the hostname of the redis container on the application's network. We use the default port for Redis: <code>6379</code>.<br />
<br />
* Create another file called <code>requirements.txt</code> in your project directory and paste this in:<br />
flask<br />
redis<br />
<br />
* Create a Dockerfile<br />
*: This Dockerfile will be used to build an image that contains all the dependencies the Python application requires, including Python itself.<br />
<pre><br />
FROM python:3.4-alpine<br />
ADD . /code<br />
WORKDIR /code<br />
RUN pip install -r requirements.txt<br />
CMD ["python", "app.py"]<br />
</pre><br />
<br />
* Create a file called <code>docker-compose.yml</code> in your project directory and paste the following:<br />
<pre><br />
version: '3'<br />
services:<br />
  web:<br />
    build: .<br />
    ports:<br />
      - "5000:5000"<br />
  redis:<br />
    image: "redis:alpine"<br />
</pre><br />
<br />
* Build and run this app with Docker Compose:<br />
$ docker-compose up<br />
<br />
Compose pulls a Redis image, builds an image for your code, and starts the services you defined. In this case, the code is statically copied into the image at build time.<br />
<br />
* Test the application:<br />
$ curl localhost:5000<br />
Hello World! I have been seen 1 times.<br />
<br />
$ for i in $(seq 1 10); do curl -s localhost:5000; done<br />
Hello World! I have been seen 2 times.<br />
Hello World! I have been seen 3 times.<br />
Hello World! I have been seen 4 times.<br />
Hello World! I have been seen 5 times.<br />
Hello World! I have been seen 6 times.<br />
Hello World! I have been seen 7 times.<br />
Hello World! I have been seen 8 times.<br />
Hello World! I have been seen 9 times.<br />
Hello World! I have been seen 10 times.<br />
Hello World! I have been seen 11 times.<br />
<br />
* List containers:<br />
<pre><br />
$ docker-compose ps<br />
Name Command State Ports <br />
-------------------------------------------------------------------------------------<br />
compose-test_redis_1 docker-entrypoint.sh redis ... Up 6379/tcp <br />
compose-test_web_1 python app.py Up 0.0.0.0:5000->5000/tcp<br />
</pre><br />
<br />
* Display the running processes:<br />
<pre><br />
$ docker-compose top<br />
compose-test_redis_1<br />
UID PID PPID C STIME TTY TIME CMD <br />
--------------------------------------------------------------------<br />
systemd+ 29401 29367 0 15:28 ? 00:00:00 redis-server <br />
<br />
compose-test_web_1<br />
UID PID PPID C STIME TTY TIME CMD <br />
--------------------------------------------------------------------------------<br />
root 29407 29373 0 15:28 ? 00:00:00 python app.py <br />
root 29545 29407 0 15:28 ? 00:00:00 /usr/local/bin/python app.py<br />
</pre><br />
<br />
* Shut down the app:<br />
$ Ctrl+C  # if compose is running in the foreground<br />
#~OR~<br />
$ docker-compose down<br />
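<br />
Note that <code>docker-compose down</code> leaves volumes and locally built images in place. If you want a completely clean slate, the following (use with care) also removes them:<br />
$ docker-compose down --volumes --rmi local<br />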
<br />
==Install docker==<br />
<br />
===Debian-based distros===<br />
<br />
; Ubuntu 16.04 (Xenial Xerus)<br />
''Note: For this install, I will be using Ubuntu 16.04 LTS (Xenial Xerus). Docker requires a 64-bit version of Ubuntu as well as a kernel version equal to or greater than 3.10. My system satisfies both requirements.''<br />
<br />
* Set up the docker repo to install from:<br />
$ sudo apt-get update -y<br />
$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D<br />
$ echo "deb <nowiki>https://apt.dockerproject.org/repo ubuntu-xenial main</nowiki>" | sudo tee /etc/apt/sources.list.d/docker.list<br />
$ sudo apt-get update -y<br />
<br />
Make sure you are about to install from the Docker repo instead of the default Ubuntu 16.04 repo:<br />
<br />
$ apt-cache policy docker-engine<br />
<br />
The output of the above command should look something like the following:<br />
<pre><br />
docker-engine:<br />
Installed: (none)<br />
Candidate: 17.05.0~ce-0~ubuntu-xenial<br />
Version table:<br />
17.05.0~ce-0~ubuntu-xenial 500<br />
500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages<br />
17.04.0~ce-0~ubuntu-xenial 500<br />
500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages<br />
...<br />
</pre><br />
<br />
* Install docker:<br />
$ sudo apt-get install -y docker-engine<br />
<br />
; Ubuntu 18.04 (Bionic Beaver)<br />
<br />
$ sudo apt update<br />
$ sudo apt install -y apt-transport-https ca-certificates curl software-properties-common<br />
$ curl -fsSL <nowiki>https://download.docker.com/linux/ubuntu/gpg</nowiki> | sudo apt-key add -<br />
$ sudo add-apt-repository "deb [arch=amd64] <nowiki>https://download.docker.com/linux/ubuntu</nowiki> $(lsb_release -cs) stable"<br />
$ sudo apt update<br />
$ apt-cache policy docker-ce<br />
<pre><br />
docker-ce:<br />
Installed: (none)<br />
Candidate: 5:18.09.0~3-0~ubuntu-bionic<br />
Version table:<br />
5:18.09.0~3-0~ubuntu-bionic 500<br />
500 <nowiki>https://download.docker.com/linux/ubuntu</nowiki> bionic/stable amd64 Packages<br />
</pre><br />
<br />
$ sudo apt install docker-ce -y<br />
$ sudo systemctl status docker<br />
<pre><br />
● docker.service - Docker Application Container Engine<br />
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)<br />
Active: active (running) since Tue 2018-12-04 13:40:36 PST; 4s ago<br />
Docs: https://docs.docker.com<br />
Main PID: 6134 (dockerd)<br />
Tasks: 16<br />
CGroup: /system.slice/docker.service<br />
└─6134 /usr/bin/dockerd -H unix://<br />
</pre><br />
<br />
===Red Hat-based distros===<br />
''Note: For this install, I will be using CentOS 7 (release 7.2.1511). Docker requires a 64-bit version of CentOS as well as a kernel version equal to or greater than 3.10. My system satisfies both requirements.''<br />
<br />
* Install Docker (the fast way):<br />
$ sudo yum update -y<br />
$ curl -fsSL <nowiki>https://get.docker.com/</nowiki> | sh<br />
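<br />
Piping a remote script straight into a shell is convenient, but it is worth reviewing the script first. A more cautious variant of the same install:<br />
$ curl -fsSL <nowiki>https://get.docker.com/</nowiki> -o get-docker.sh<br />
$ less get-docker.sh  # review before running<br />
$ sh get-docker.sh<br />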
<br />
* Install Docker (via a yum repo):<br />
$ sudo yum update -y<br />
$ sudo pip install docker-py<br />
$ cat << EOF | sudo tee /etc/yum.repos.d/docker.repo<br />
[dockerrepo]<br />
name=Docker Repository<br />
baseurl=<nowiki>https://yum.dockerproject.org/repo/main/centos/7/</nowiki><br />
enabled=1<br />
gpgcheck=1<br />
gpgkey=<nowiki>https://yum.dockerproject.org/gpg</nowiki><br />
EOF<br />
$ sudo rpm -vv --import <nowiki>https://yum.dockerproject.org/gpg</nowiki><br />
$ sudo yum update -y<br />
$ sudo yum install docker-engine -y<br />
<br />
===Post-installation steps===<br />
* Check on the status of docker:<br />
$ sudo systemctl status docker<br />
<pre><br />
● docker.service - Docker Application Container Engine<br />
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)<br />
Active: active (running) since Tue 2016-07-12 12:31:08 PDT; 6s ago<br />
Docs: https://docs.docker.com<br />
Main PID: 3392 (docker)<br />
CGroup: /system.slice/docker.service<br />
├─3392 /usr/bin/docker daemon -H fd://<br />
└─3411 docker-containerd -l /var/run/docker/libcontainerd/docker-containerd.sock --runtime docker-runc --start-timeout 2m<br />
</pre><br />
<br />
* Make sure the docker service automatically starts after a machine reboot:<br />
$ sudo systemctl enable docker<br />
<br />
* Execute docker without <code>sudo</code>:<br />
$ sudo usermod -aG docker $(whoami)<br />
#~OR~<br />
$ sudo usermod -aG docker $USER<br />
Log out and log back in to use docker without <code>sudo</code>.<br />
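<br />
If you would rather not log out, <code>newgrp</code> starts a new shell with the updated group membership (this only affects the current shell session):<br />
$ newgrp docker<br />
$ docker ps  # should now work without sudo<br />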
<br />
* Check version of Docker installed:<br />
<pre><br />
$ docker version<br />
Client:<br />
Version: 17.05.0-ce<br />
API version: 1.29<br />
Go version: go1.7.5<br />
Git commit: 89658be<br />
Built: Thu May 4 22:10:54 2017<br />
OS/Arch: linux/amd64<br />
<br />
Server:<br />
Version: 17.05.0-ce<br />
API version: 1.29 (minimum version 1.12)<br />
Go version: go1.7.5<br />
Git commit: 89658be<br />
Built: Thu May 4 22:10:54 2017<br />
OS/Arch: linux/amd64<br />
Experimental: false<br />
</pre><br />
<br />
* Check that docker has been successfully installed and configured:<br />
$ docker run hello-world<br />
<pre><br />
...<br />
This message shows that your installation appears to be working correctly.<br />
...<br />
</pre><br />
<br />
As the above message shows, you now have a successful install of Docker on your machine and are ready to start building images and creating containers.<br />
<br />
==Miscellaneous==<br />
<br />
* Get the hostname of the host the Docker Engine is running on:<br />
$ docker info -f '<nowiki>{{ .Name }}</nowiki>'<br />
<br />
* Get the number of stopped containers:<br />
$ docker info --format '<nowiki>{{json .}}</nowiki>' | jq '.ContainersStopped'<br />
3<br />
<br />
* Get the number of images in the local registry:<br />
$ docker info --format '<nowiki>{{json .}}</nowiki>' | jq '.Images'<br />
92<br />
<br />
* Verify the Docker service is running:<br />
<pre><br />
$ curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock http://localhost/_ping<br />
OK<br />
</pre><br />
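<br />
The same UNIX socket exposes the full Docker Engine API. As a sketch, the <code>/containers/json</code> endpoint lists running containers (piped through <code>jq</code>, as above):<br />
$ curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json | jq '.[].Names'<br />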
<br />
* Show docker disk usage:<br />
<pre><br />
$ docker system df<br />
TYPE TOTAL ACTIVE SIZE RECLAIMABLE<br />
Images 84 11 25.01GB 20.44GB (81%)<br />
Containers 20 0 768.1MB 768.1MB (100%)<br />
Local Volumes 16 2 2.693GB 2.628GB (97%)<br />
Build Cache 0 0 0B 0B<br />
</pre><br />
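<br />
Given how much of that space is reclaimable, <code>docker system prune</code> is the natural companion command. The flags below delete stopped containers, unused networks, unused images, and (with <code>--volumes</code>) unused volumes, so use them with care:<br />
$ docker system prune                  # dangling images, stopped containers, unused networks<br />
$ docker system prune --all --volumes  # also removes all unused images and volumes<br />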
<br />
* Get ''just'' the version of Docker installed:<br />
<pre><br />
$ docker version --format '{{.Server.Version}}'<br />
20.10.7<br />
$ docker version --format '{{.Server.Version}}' 2>/dev/null || docker -v | awk '{gsub(/,/, "", $3); print $3}'<br />
20.10.7<br />
</pre><br />
<br />
==Install your own Docker private registry==<br />
''Note: I will use CentOS 7 for this install and assume you already have docker and docker-compose installed (see above).''<br />
<br />
For this install, I will assume you have a domain name registered somewhere. I will use <code>docker.example.com</code> as my example domain. Replace anywhere you see that below with your actual domain name.<br />
<br />
* Install dependencies:<br />
$ yum install -y nginx # used for the registry endpoint<br />
$ yum install -y httpd-tools # for the htpasswd utility<br />
<br />
* Set up the docker registry directory structure:<br />
$ mkdir -p /opt/docker-registry/{data,nginx{/conf.d,/certs},log}<br />
$ cd /opt/docker-registry<br />
<br />
* Create a docker-compose file:<br />
$ vim docker-compose.yml # and add the following:<br />
<br />
<pre><br />
nginx:<br />
  image: "nginx:1.9"<br />
  ports:<br />
    - 5043:443<br />
  links:<br />
    - registry:registry<br />
  volumes:<br />
    - ./log/nginx/:/var/log/nginx:rw<br />
    - ./nginx/conf.d:/etc/nginx/conf.d:ro<br />
    - ./nginx/certs:/etc/nginx/certs:ro<br />
registry:<br />
  image: registry:2<br />
  ports:<br />
    - 127.0.0.1:5000:5000<br />
  environment:<br />
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data<br />
  volumes:<br />
    - ./data:/data<br />
</pre><br />
<br />
* Create an Nginx configuration file:<br />
$ vim /opt/docker-registry/nginx/conf.d/registry.conf # and add the following:<br />
<br />
<pre><br />
upstream docker-registry {<br />
    server registry:5000;<br />
}<br />
<br />
server {<br />
    listen 443;<br />
    server_name docker.example.com;<br />
<br />
    # SSL<br />
    ssl on;<br />
    ssl_certificate /etc/nginx/certs/docker.example.com.crt;<br />
    ssl_certificate_key /etc/nginx/certs/docker.example.com.key;<br />
<br />
    # disable any limits to avoid HTTP 413 for large image uploads<br />
    client_max_body_size 0;<br />
<br />
    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)<br />
    chunked_transfer_encoding on;<br />
<br />
    location /v2/ {<br />
        # Do not allow connections from docker 1.5 and earlier<br />
        # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents<br />
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {<br />
            return 404;<br />
        }<br />
<br />
        proxy_pass http://docker-registry;<br />
        proxy_set_header Host $http_host;   # required for docker client's sake<br />
        proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP<br />
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;<br />
        proxy_set_header X-Forwarded-Proto $scheme;<br />
        proxy_read_timeout 900;<br />
<br />
        add_header 'Docker-Distribution-Api-Version:' 'registry/2.0' always;<br />
<br />
        # To add basic authentication to v2 use auth_basic setting plus add_header<br />
        auth_basic "Restricted access to Docker Registry";<br />
        auth_basic_user_file /etc/nginx/conf.d/registry.htpasswd;<br />
    }<br />
}<br />
</pre><br />
<br />
$ cd /opt/docker-registry/nginx/conf.d<br />
$ htpasswd -c registry.htpasswd <username> # replace <username> with your actual username<br />
$ htpasswd registry.htpasswd <username2> # [optional] add a 2nd user<br />
<br />
* Set up your own certificate signing authority (for use with SSL):<br />
<br />
$ cd /opt/docker-registry/nginx/certs<br />
<br />
* Generate a new root key:<br />
<br />
$ openssl genrsa -out docker-registry-CA.key 2048<br />
<br />
* Generate a root certificate (enter anything you like at the prompts):<br />
<br />
$ openssl req -x509 -new -nodes -key docker-registry-CA.key -days 3650 -out docker-registry-CA.crt<br />
<br />
Then generate a key for your server (this is the file referenced by <code>ssl_certificate_key</code> in the Nginx configuration above):<br />
<br />
$ openssl genrsa -out docker.example.com.key 2048<br />
<br />
Now we have to make a certificate signing request (CSR). After you type the following command, OpenSSL will prompt you to answer a few questions. Enter anything you like for the first few; however, when OpenSSL prompts you to enter the "Common Name", make sure to enter the domain or IP of your server.<br />
<br />
$ openssl req -new -key docker.example.com.key -out docker.example.com.csr<br />
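<br />
To skip the interactive prompts entirely, the subject can be supplied on the command line instead; the <code>/CN=</code> value below assumes the example domain used throughout this section:<br />
<br />
$ openssl req -new -key docker.example.com.key -subj "/CN=docker.example.com" -out docker.example.com.csr<br />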
<br />
* Sign the certificate request:<br />
<br />
$ openssl x509 -req -in docker.example.com.csr -CA docker-registry-CA.crt -CAkey docker-registry-CA.key -CAcreateserial -out docker.example.com.crt -days 3650<br />
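<br />
Before wiring the certificate into Nginx, it is worth sanity-checking that it verifies against the CA created above:<br />
<br />
$ openssl verify -CAfile docker-registry-CA.crt docker.example.com.crt<br />
docker.example.com.crt: OK<br />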
<br />
* Force any clients that will use the certificate authority we created above to accept it as a "legitimate" certificate. Run the following commands on the Docker registry server and on any hosts that will be communicating with it (the paths below are for CentOS/RHEL; on Debian-based hosts, copy the certificate into <code>/usr/local/share/ca-certificates/</code> and run <code>update-ca-certificates</code> instead, as shown in the client steps further down):<br />
<br />
$ sudo cp /opt/docker-registry/nginx/certs/docker-registry-CA.crt /etc/pki/ca-trust/source/anchors/<br />
$ sudo update-ca-trust extract<br />
<br />
* Restart the Docker daemon in order for it to pick up the changes to the certificate store:<br />
<br />
$ sudo systemctl restart docker.service<br />
<br />
* Bring up the associated Docker containers:<br />
$ docker-compose up -d<br />
<br />
* Your Docker registry directory structure should look like the following:<br />
<pre><br />
$ cd /opt/docker-registry && tree .<br />
.<br />
├── data<br />
├── docker-compose.yml<br />
├── log<br />
│ └── nginx<br />
│ ├── access.log<br />
│ └── error.log<br />
└── nginx<br />
├── certs<br />
│ ├── docker-registry-CA.crt<br />
│ ├── docker-registry-CA.key<br />
│ ├── docker-registry-CA.srl<br />
│ ├── docker.example.com.crt<br />
│ ├── docker.example.com.csr<br />
│ └── docker.example.com.key<br />
└── conf.d<br />
├── registry.conf<br />
└── registry.htpasswd<br />
</pre><br />
<br />
* To access the private Docker registry from a client machine (any machine, really), first add the SSL certificate you created earlier to the client machine:<br />
<br />
$ cat /opt/docker-registry/nginx/certs/docker-registry-CA.crt # copy contents<br />
# On client machine:<br />
$ sudo vim /usr/local/share/ca-certificates/docker-registry-CA.crt # paste contents<br />
$ sudo update-ca-certificates # You should see "1 added" in the output<br />
<br />
* Restart Docker on the client machine to make sure it reloads the system's CA certificates:<br />
<br />
$ sudo service docker restart<br />
<br />
* Test that you can reach your private Docker registry:<br />
$ curl -k <nowiki>https://USERNAME:PASSWORD@docker.example.com:5043/v2/</nowiki><br />
{} # <- proper output<br />
<br />
* Now, test that you can login with Docker:<br />
$ docker login <nowiki>https://docker.example.com:5043</nowiki><br />
<br />
If that returns with "Login Succeeded", your private Docker registry is up and running!<br />
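<br />
As a final smoke test (a sketch, assuming the example domain above), tag a small image, push it to the new registry, and pull it back:<br />
$ docker pull alpine<br />
$ docker tag alpine docker.example.com:5043/alpine:latest<br />
$ docker push docker.example.com:5043/alpine:latest<br />
$ docker pull docker.example.com:5043/alpine:latest<br />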
<br />
'''This section is incomplete. It will be updated when I have time.'''<br />
<br />
==Docker environment variables==<br />
''Note: See [https://docs.docker.com/engine/reference/commandline/cli/#environment-variables here] for the most up-to-date list of environment variables.''<br />
<br />
The following environment variables are supported by the <code>docker</code> command line:<br />
<br />
;<code>DOCKER_API_VERSION</code> : The API version to use (e.g., 1.19)<br />
;<code>DOCKER_CONFIG</code> : The location of your client configuration files.<br />
;<code>DOCKER_CERT_PATH</code> : The location of your authentication keys.<br />
;<code>DOCKER_DRIVER</code> : The graph driver to use.<br />
;<code>DOCKER_HOST</code> : Daemon socket to connect to.<br />
;<code>DOCKER_NOWARN_KERNEL_VERSION</code> : Prevent warnings that your Linux kernel is unsuitable for Docker.<br />
;<code>DOCKER_RAMDISK</code> : If set, this will disable "pivot_root".<br />
;<code>DOCKER_TLS_VERIFY</code> : When set, Docker uses TLS and verifies the remote.<br />
;<code>DOCKER_CONTENT_TRUST</code> : When set, Docker uses Notary to sign and verify images. Equates to <code>--disable-content-trust=false</code> for build, create, pull, push, run.<br />
;<code>DOCKER_CONTENT_TRUST_SERVER</code> : The URL of the Notary server to use. This defaults to the same URL as the registry.<br />
;<code>DOCKER_TMPDIR</code> : Location for temporary Docker files.<br />
<br />
Because Docker is developed using "Go", one can also use any environment variables used by the "Go" runtime. In particular, the following might be useful:<br />
<br />
;<code>HTTP_PROXY</code><br />
;<code>HTTPS_PROXY</code><br />
;<code>NO_PROXY</code><br />
<br />
* Example usage:<br />
$ export DOCKER_API_VERSION=1.19<br />
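<br />
A couple more illustrative examples (the hostnames are hypothetical): pointing the client at a remote daemon over TLS, and routing daemon traffic through a proxy:<br />
$ export DOCKER_HOST=tcp://docker.example.com:2376<br />
$ export DOCKER_TLS_VERIFY=1<br />
$ export HTTPS_PROXY=http://proxy.example.com:3128<br />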
<br />
==See also==<br />
* [[containerd]]<br />
<br />
==References==<br />
<references/><br />
<br />
==External links==<br />
* [https://www.docker.com/ Official website]<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:DevOps]]<br />
[[Category:Linux Command Line Tools]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Containerd&diff=8253Containerd2023-01-18T19:50:14Z<p>Christoph: Created page with "'''Containerd''' is an industry-standard core container runtime. It is currently available as a daemon for Linux and Windows, which can manage the complete container lifecycle..."</p>
<hr />
<div>'''Containerd''' is an industry-standard core container runtime. It is available as a daemon for Linux and Windows and can manage the complete container lifecycle of its host system. In 2015, Docker donated the OCI Specification to The Linux Foundation, along with a reference implementation called runc. Docker announced containerd's general availability, and its intention to donate the project to the CNCF, in 2017; since 28 February 2019, it has been an official CNCF project.<br />
<br />
==Troubleshooting containers==<br />
<br />
For debugging or troubleshooting on Linux nodes, you can interact with containerd using the portable command-line tool built for Kubernetes container runtimes: <code>crictl</code>. <code>crictl</code> supports common functionalities to view containers and images, read logs, and execute commands in the containers. Refer to the <code>crictl</code> [https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/ user guide] for the complete set of supported features and usage information.<br />
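<br />
A few common <code>crictl</code> invocations, as a sketch (this assumes <code>crictl</code> has been pointed at the containerd socket, e.g., via <code>--runtime-endpoint unix:///run/containerd/containerd.sock</code> or an <code>/etc/crictl.yaml</code>):<br />
$ crictl ps                      # list running containers<br />
$ crictl images                  # list images<br />
$ crictl logs <container-id>     # read a container's logs<br />
$ crictl exec -it <container-id> sh  # execute a command in a container<br />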
<br />
View container logs with [[systemd]]:<br />
<pre><br />
$ journalctl -u containerd<br />
</pre><br />
<br />
==See also==<br />
* [[Docker]]<br />
<br />
==External links==<br />
* [https://containerd.io/ Official website]<br />
* [https://github.com/kubernetes-sigs/cri-tools/releases Download latest release(s) of cri-tools]<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:Linux Command Line Tools]]<br />
[[Category:DevOps]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Curriculum_Vitae&diff=8252Curriculum Vitae2023-01-14T19:43:54Z<p>Christoph: </p>
<hr />
<div><center><br />
==[[Christoph Champ]]==<br />
'''BSc Biochemistry and Biophysics'''<br /><br />
'''Google Cloud Certified – Professional Cloud Architect'''<br/><br />
'''AWS Certified DevOps Engineer – Professional'''<br/><br />
'''AWS Certified Solutions Architect – Associate'''<br/><br />
'''AWS Certified Developer – Associate'''<br/><br />
'''SUSE Certified Administrator (SCA) in Rancher 2.5'''<br/><br />
'''SUSE Certified Administrator (SCA) in SUSE Rancher 2.6'''<br/><br />
'''Rancher Certified Consultant'''<br/><br />
'''Terraform: Certified HashiCorp Implementation Partner (CHIP)'''<br/><br />
'''HashiCorp Certified Terraform Associate'''<br/><br />
'''Red Hat Certified System Administrator'''<br />
</center><br />
__NOTOC__<br />
==Profile==<br />
'''DevOps Architect''', Cloud Solution Architect, [[Cloud|Cloud Engineer]], [[DevOps|DevOps Engineer]], [[:Category:Linux Command Line Tools|Linux Systems Administrator]], and [[:Category:Academia|scientific programmer]]/computational biologist with strong technical and analytical skills. Experienced in the areas of system administration, automation, hardware, databases, backup, web design ([[LAMP]] developer), programming, and scientific research. Interested in [[Cloud|Cloud computing]], [[Big Data]], artificial intelligence/[[:Category:Machine Learning|machine learning]], data mining, pattern extraction, bioinformatics, and open-source software development. Note: I do ''not'' work in or have any experience in a Microsoft Windows environment ('''I am 100% a Linux and open-source guy {I have been using Linux since 1999}''').<br />
<br />
Current location: Seattle, Washington, USA.<br />
<br />
==Education==<br />
* B.S. Biochemistry and Biophysics, Oregon State University; September 2000 &ndash; June 2004<br />
* Scientific research apprentice: September 1998 &ndash; August 2000<br />
* Graduate studies and research: Massachusetts Institute of Technology (biological sciences), University of Pittsburgh (biophysics), Carnegie Mellon University (biophysics), Danmarks Tekniske Universitet (comparative genomics), København Universitet (bioinformatics), and University of Washington (x-ray crystallography and genome sciences); July 2004 &ndash; November 2012<br />
* Red Hat Certified System Administrator (RHEL 6, [http://christophchamp.com/doc/pdf/Christoph_Champ_RHCSA.pdf Certificate #140-074-097]); May 2014<br />
* AWS Certified Developer – Associate (Certificate #AWS-ADEV-3367); May 2016<br />
* AWS Certified Solutions Architect – Associate (Certificate #AWS-ASA-35064); April 2017<br />
* Google Cloud Certified – Professional Cloud Architect (Certification ID: VO1HIA); December 2019<br />
* Rancher Certified Consultant; December 2019<br />
* HashiCorp Certified Terraform Associate; April 2020<br />
* AWS Certified DevOps Engineer – Professional; February 2021<br />
* Terraform: Certified HashiCorp Implementation Partner (CHIP); April 2021<br />
* SUSE Certified Administrator (SCA) in Rancher 2.5; September 2021<br />
* SUSE Certified Administrator (SCA) in SUSE Rancher 2.6; January 2022<br />
<br />
==Professional Experience==<br />
*'''March 2022 &ndash; present''': DevOps Architect at '''Redapt''', Seattle, USA.<br />
::Skills used: Linux, [[:Category:AWS|AWS]], [[Google Cloud Platform|GCP]], Azure, [[Docker]], [[Kubernetes]], Anthos (+ACM, ASM), [[Rancher]], [[Helm]] (v2/v3), [[Istio]], Kiali, [https://www.youtube.com/watch?v=esyX35J2mq8 Prometheus], [https://www.youtube.com/watch?v=esyX35J2mq8 Grafana], [[etcd]], [[Ansible]], [[Terraform]] (+Enterprise/+Sentinel), ArgoCD, [[LVM]], [[MySQL]]/MariaDB, [[Sensu]], Jenkins, [[Pulumi]], [[Vault]], [[Packer]], [[Vagrant]], [[Apache]], [[Nginx]], [[Redis]], [[Bash]], [[Python]], [[sed]], [[awk]], [[git]], GitHub (+[[GitHub Actions|Actions]]), [[GitLab]], openVPN, Slack API, [[TensorFlow]]<br />
*'''March 2020 &ndash; March 2022''': Cloud Solution Architect at '''Redapt''', Seattle, USA.<br />
::Skills used: Linux, [[:Category:AWS|AWS]], [[Google Cloud Platform|GCP]], Azure, [[Docker]], [[Kubernetes]], Anthos (+ACM, ASM), [[Rancher]], [[Helm]] (v2/v3), [[Istio]], Kiali, [https://www.youtube.com/watch?v=esyX35J2mq8 Prometheus], [https://www.youtube.com/watch?v=esyX35J2mq8 Grafana], [[etcd]], [[Ansible]], [[Terraform]] (+Enterprise/+Sentinel), [[LVM]], [[MySQL]]/MariaDB, [[Sensu]], Jenkins, [[Pulumi]], [[Vault]], [[Packer]], [[Vagrant]], [[Apache]], [[Nginx]], [[Redis]], [[Bash]], [[Python]], [[sed]], [[awk]], [[git]], GitHub (+[[GitHub Actions|Actions]]), [[GitLab]], openVPN, Slack API, [[TensorFlow]]<br />
*'''July 2018 &ndash; March 2020''': Senior Cloud Engineer / Senior DevOps Engineer at '''Redapt''', Seattle, USA.<br />
::Skills used: Linux, [[:Category:AWS|AWS]], [[Google Cloud Platform|GCP]], [[Docker]], [[Kubernetes]], Anthos (+ACM), [[Rancher]], [[Helm]], [[Istio]], [[Prometheus]], [[Grafana]], [[OpenShift]], [[etcd]], [[Ansible]], [[Terraform]], CoreOS, [[LVM]], [[MySQL]], [[Sensu]], Jenkins, [[Vault]], [[Packer]], [[Vagrant]], [[Apache]], [[Nginx]], [[Redis]], [[Bash]], [[Python]], [[sed]], [[awk]], [[git]], GitHub, [[GitLab]], Slack API, [[TensorFlow]]<br />
*'''September 2015 &ndash; July 2018''': Cloud Engineer / DevOps Engineer at '''Redapt''', Seattle, USA.<br />
::Skills used: Linux, [[:Category:AWS|AWS]], [[Docker]], [[Kubernetes]], [[Rancher]], [[Helm]], [[etcd]], [[Ansible]], [[Terraform]], CoreOS, [[LVM]], [[MySQL]], [[Sensu]], Jenkins, [https://www.go.cd/ GoCD], [[Vagrant]], [[Apache]], [[Nginx]], [[Redis]], [[Bash]], [[Python]], [[sed]], [[awk]], [[git]], GitHub, [[GitLab]], Slack API, [[:Category:OpenStack|OpenStack]]<br />
*'''February 2015 &ndash; August 2015''': Linux Systems Administrator at Trusted Builders, Seattle, USA.<br />
::Migrated computing infrastructure to the Cloud.<br />
::Skills used: Linux, Ansible, Vagrant, Rackspace Cloud, Nginx, Python<br />
*'''May 2013 &ndash; January 2015''': Linux Administrator II at '''[[:Category:Rackspace|Rackspace]]''', USA.<br />
::Provided Cloud Support (servers, storage, databases, load balancers, etc.), administered Linux servers, Xen, [[:Category:XenServer|XenServer]], [[LVM]], [[:Category:OpenStack|OpenStack]], [[Rackspace API|RESTful API]], networking, site reliability, maintenance, LAMP-stacks, etc. Also helped develop tools and internal websites to help automate tasks (tools written in PHP, Python, and Django).<br />
::Skills used: Linux, Apache, MySQL, PHP, Python, Rackspace Cloud, Xen/XenServer, KVM<br />
*'''December 2012 &ndash; April 2013''': Cloud and web developer at MadLab, Seattle, USA.<br />
::Built a RESTful web framework and API using Amazon Web Services' NoSQL DynamoDB and a custom-built/in-house MVC in PHP.<br />
*'''August 2012 &ndash; November 2012''': Research Assistant / Scientific Programmer at the '''[[Dr. Elhanan Borenstein Laboratory]]''' &mdash; Department of Genome Sciences, University of Washington, Seattle, USA.<br />
::Conducted scientific research on the collection and organization of metagenomic datasets and the assembly of an analysis pipeline for metagenomic data; and<br />
::Analysis of the gut microbiome of children with cystic fibrosis (CF) and on methods to identify enriched functions.<br />
::Note: Everything was done in Linux and most of the programming was done in Python (+[http://pandas.pydata.org/ Pandas]).<br />
*'''October 2010 &ndash; July 2012''': Consultant (Cloud and web developer / programmer) &mdash; Greater Seattle Area, USA.<br />
::Built a customer relationship management (CRM) website using Django (+Python, MySQL) with extended web services, command line interface, mobile apps, etc.<br />
::Note: Everything was done in Linux and/or Amazon Web Services (AWS), and most of the programming was done in Python.<br />
*'''October 2006 &ndash; October 2010''': Linux Administrator, Research Assistant, and software developer/maintainer at the '''[[Dr. Ethan A. Merritt Laboratory]]''' &mdash; Medical Structural Genomics of Pathogenic Protozoa Consortium, University of Washington, Seattle, USA.<br />
::Developed and maintained the Python Macromolecular Library (pymmlib) package and developed and maintained the TLS Motion Determination (TLSMD) webserver/webservices. Contributed code to the CCP4 suite. Performed detailed analysis of thousands of crystallographic structures (both in-house and from the PDB). Programmed in C, Fortran, Python, Perl, Javascript, Bash, etc., maintained multiple MySQL databases, and worked in a LAMP environment for web services. Research produced a paper published in a peer-reviewed journal (see below). Also was the Linux System Administrator for all computers in our lab, performed backups (tape, DVD, xHDDs, etc.), hardware and software support for all personnel, etc.<br />
::Note: Everything was done in Linux (and some Mac OS X) and most of the programming was done in Python.<br />
*'''March 2006 &ndash; February 2007''': Linux Administrator and Research Assistant at the '''[[Dr. Carlos J. Camacho Laboratory]]''' &mdash; Center for Computational Biology and Bioinformatics, University of Pittsburgh, USA; (''in absentia'').<br />
::Developed an algorithm with a web services front-end to provide a quick estimate for protein-protein interactions and associated energies. Programmed in C, Fortran77, PHP and done in a LAMP environment. Research produced a paper published in a peer-reviewed journal (see below).<br />
::Note: Everything was done in Linux.<br />
*'''December 2005 &ndash; February 2006''': Consultant (website developer) &mdash; Beausoleil, France.<br />
::Developed a website for a local French restaurant whilst waiting for my professor to move his lab.<br />
::Note: Everything was done in Linux and most of the programming was done in PHP.<br />
*'''August 2005 &ndash; November 2005''': Research Assistant, Teaching Assistant, and programmer at the '''[[Dr. David W. Ussery Laboratory]]''' &mdash; Center for Biological Sequence Analysis, Danmarks Tekniske Universitet, Denmark.<br />
::Helped develop and maintain algorithms and web services for a Comparative Genomics department. Programmed in C, Perl, Python, Java, Bash, etc., maintained multiple MySQL databases, and worked in a LAMP environment. Also was a Teaching Assistant for a Comparative Genomics class. Research produced a paper published in a peer-reviewed journal (see below).<br />
::Note: Everything was done in Linux.<br />
*'''October 2004 &ndash; July 2005''': Research Assistant and System Administrator at the '''[[Dr. Carlos J. Camacho Laboratory]]''' &mdash; Center for Computational Biology and Bioinformatics, University of Pittsburgh, USA.<br />
::Modelled protein-protein and protein-DNA interactions from crystallographic data obtained via the Protein Data Bank. Developed and maintained 4 websites to provide access to the algorithms we developed as web services (all done in a LAMP environment). Programmed in C, Fortran77, Perl, PHP, and maintained multiple MySQL databases. Research produced 2 papers published in peer-reviewed journals (see below).<br />
::Note: Everything was done in Linux.<br />
*'''Summer 2002 / December 2002''': Research Assistant at the '''[[Dr. Alex Rich Laboratory]]''' &mdash; Department of Biology, Massachusetts Institute of Technology, USA.<br />
::Did research on the function of Z-DNA in various genomes and developed algorithms in C to search through genomes and compare the results against DNA microarray data. Also wrote Perl scripts and maintained a MySQL database.<br />
::Note: Everything was done in Linux.<br />
*'''May 2000 &ndash; September 2004''': Research Assistant and software developer at the '''[[Dr. P. Shing Ho Laboratory]]''' &mdash; Department of Biochemistry &amp; Biophysics, Oregon State University, USA.<br />
::Did research on the function of Z-DNA in various genomes (resulted in a paper published in a peer-reviewed journal; see CV). Other duties consisted of being an assistant scientific programmer in C and Perl, as well as maintaining a MySQL database.<br />
::Note: Everything was done in Linux.<br />
*'''September 1998 &ndash; April 2000''': Lab Assistant for Kevin Krefft microbiology laboratory, Albany, OR, USA<br />
::Prepared all kinds of media (e.g., agar solutions and agar plates) to feed our stock of microbes, as well as rotating the colonies.<br />
*'''November 1996 &ndash; December 1997''': English Language Instructor at Berlitz, Ljubljana, Slovenia.<br />
::I taught at all levels (beginner, mid-level, and advanced). Most of my students were government officials, businesspersons, and other professionals.<br />
*'''July 1996 &ndash; October 1996''': Volunteer Humanitarian Aid Worker, Croatia.<br />
::I drove a van full of food and medical supplies to refugee camps on a nearly daily basis.<br />
*'''September 1995 &ndash; June 1996''': Audio Technician at Audio & Visual Production Centre, Tateyama, Japan.<br />
::My main role was to oversee a group of audio technicians, as well as performing automated dialogue replacement (ADR; aka "dubbing") for music videos, documentaries, etc.<br />
*'''September 1994 &ndash; August 1995''': Deputy Manager at a Pan-European Translation and Publishing House in both Vienna, Austria and Budapest, Hungary.<br />
::My main role there was as deputy manager of the audio and visual department. We would receive recordings of an original English audio or visual media translated into just about every language in Europe and then make thousands of copies of them (cassette tapes, CDs, DAT, VHS-PAL, VHS-NTSC, etc.) and then ship them all over Europe.<br />
<br />
:''See [[work experience]] for details''.<br />
<br />
==Publications==<br />
NOTE: My publications have been cited by over 200 scientifically peer-reviewed papers.<br />
{{publications}}<br />
<br />
:''See [http://scholar.google.com/citations?user=N57FEU8AAAAJ my Google Scholar profile] for more information.''<br />
:''See [http://www.ncbi.nlm.nih.gov/sites/myncbi/collections/public/1lGM831m_BPDP3noBMH2woUkP/?sort=date&direction=ascending my list of publications on NCBI].''<br />
<br />
==Portfolio==<br />
===Web servers===<br />
*'''[[TLSMD|TLS Motion Determination]] (TLSMD) / Python Macromolecular Library (mmLib)''': analyses a protein crystal structure for evidence of flexibility.<br />
:*Server administrator and developer; October 2006 &ndash; October 2010.<br />
:*The Python Macromolecular Library (mmLib) is a software toolkit and library of routines for the analysis and manipulation of macromolecular structural models, implemented in the Python programming language. It is accessed via a layered, object-oriented application programming interface, and provides a range of useful software components for parsing mmCIF, and PDB files, a library of atomic elements and monomers, an object-oriented data structure describing biological macromolecules, and an OpenGL molecular viewer. The mmLib data model is designed to provide easy access to the various levels of detail needed to implement high-level application programs for macromolecular crystallography, NMR, modelling, and visualization. This includes specialized classes for proteins, DNA, amino acids, and nucleic acids. Also included are an extensive monomer library, element library, and specialized classes for performing unit cell calculations combined with a full space group library.<br />
:*Contributed code (C, Fortran, and Python) for the <code>[http://www.ccp4.ac.uk/html/tlsanl.html tlsanl]</code> program (support for "SKTTLS"/[[Skittles]]) to the [http://www.ccp4.ac.uk/ Collaborative Computational Project No. 4] (Software for Macromolecular X-Ray Crystallography).<br />
*'''[[Raster3D]]''': a set of tools for generating high-quality raster images of proteins or other molecules.<br />
:: Starting with version 2.7s, created a port to Mac OS X as well as compile binaries using the Intel Fortran Compiler. Also added png/jpeg (and labels) output using libgd. Added a new feature in <code>rastep</code> to support [[Skittles]] validation (introduced in 2.9); February 2009 &ndash; July 2010.<br />
* '''[[FastContact|FastContact Server]]''': a free energy scoring tool for protein-protein complex structures.<br />
:: Version 2.0: Programmer, Server architect, and administrator; January 2007 &ndash; June 2007.<br />
:: Version 1.0: Programmer, Server architect, and administrator; July 2005 &ndash; December 2006.<br />
* '''[[SmoothDock|SmoothDock Server]]''' (under development): a fully automated algorithm for finding physical interactions between proteins.<br />
:: Programmer, Server architect, and administrator; January 2005 &ndash; October 2010 (note: This server uses code optimised and run in parallel on 256 processors).<br />
* '''[http://www.cbs.dtu.dk/services/FD/ CBS Fungal Database]''': DNA structural atlases for complete chromosomes and genomes.<br />
:: Programmer, Server architect, and administrator; August 2005 &ndash; December 2005.<br />
* '''[http://structure.pitt.edu/servers/looseloops/ LooseLoops Server]''' (under development and construction): predicts regions of a protein (PDB format) having highly flexible loops<br />
:: Programmer, Server architect, and administrator; November 2004 &ndash; June 2006.<br />
* '''[http://structure.pitt.edu/servers/domainsplit/ DomainSplit Server]''': predicts the number of domains in a given protein (PDB format)<br />
:: Server architect and administrator; September 2004 &ndash; June 2006.<br />
* '''[[Z-Hunt|ZHunt Online Server]]''': predicts the locations of probable Z-DNA forming regions in a given DNA sequence<br />
:: Primary researcher and programmer; May 2000 &ndash; September 2004 (note: front end by Sandor Maurice; back end by Sandor Maurice and P. Christoph Champ)<br />
* '''[http://www.able2know.com/ Able2Know.com]''':<br />
:: An administrator and developer; January 2005 &ndash; January 2013.<br />
<br />
===High-level programming (2000-2004)===<br />
Probably the most complicated programming project I have worked on was one where we were attempting to predict how two proteins will interact. Since billions of calculations (a minimum of 2.7 x 10<sup>10</sup>) are needed for each protein/protein complex, I had to write specific (C) code that was optimised to run in parallel on a dedicated cluster of 256 CPUs (using the MPICH compiler). As a side note, I had to translate some original Fortran77 code into C so it could be compiled with MPICC.<br />
<br />
We wanted to make our algorithm available to the general scientific community and, so, we decided that a web server would be the best implementation. What we needed was a simple, user-friendly interface to the back-end algorithm. Getting the user input (here the coordinates for each atom in a protein) to be transferred to the cluster required a great deal of pre-processing (data parsing, formatting, and error-checking). Likewise, the results returned by the cluster required post-processing to be eventually sent (via email) to the user.<br />
<br />
The entire system had to run autonomously (controlled via [[Crontab (command)|crontab]] scheduling). As the administrator of this setup, I was responsible for keeping the system up at all times. However, since there were thousands of lines of code, if the system should crash it would be difficult to find out where the problem was if I did not maintain extensive log files. I set these up to be easily parsable and had the system periodically email me the "health" of the system. This server remains up-and-running to this day.<br />
<br />
===Data mining / data parsing / data manipulation (2000-2004)===<br />
The algorithm described above is implemented through a combination of Fortran77 and C code. However, the initial data (input) and results (output) are sent through multiple pipes as a series of I/O streams using [[Perl]], [[Awk|awk/gawk]], [[sed]], and [[bash]] scripts. They are all controlled via [[make|makefiles]] and use extensive [[regular expression]]s. (see: [[SmoothDock]] for details.)<br />
<br />
I would say that working with the [[:Category:Linux Command Line Tools|command line interface]] (CLI) and [[:Category:Scripting languages|scripting languages]] are my main skills and strengths. These skills have been developed through over seven years of active data mining through literally hundreds of terabytes of data in a wide array of formats and from multiple sources.<br />
<br />
===Database experience (2000-2004)===<br />
A different project I worked on also required extensive data mining. However, here we were more interested in storing our data in easily manageable ([[MySQL]]) databases. On this particular project, we were working with the human genome. Each genome has over three billion "letters" (or bits of information). However, after adding annotation we were dealing with tens of gigabytes of data for each genome and for each experiment. After completing a couple of these experiments, we were quickly dealing with nearly 100 gigabytes of data. This required extensive data normalization and carefully constructed databases. (see: [[ZHunt]] for details.)<br />
<br />
==Technical and Specialized Skills==<br />
see: [[Technical and Specialized Skills]] for a detailed listing.<br />
* '''Computer operating systems''':<br />
:: Primary OS: [[Linux]] ('''[[CentOS]]''', CoreOS, [[SuSE]], [[Mandriva Linux|Mandriva]], RedHat/Fedora, Slackware, and Ubuntu); Secondary OS: macOS and Unix. Note: I do ''not'' work in or have any experience in a Microsoft Windows environment.<br />
* '''Computer languages and scripts''':<br />
:: [[Python]] (+[https://www.djangoproject.com/ Django], [http://pandas.pydata.org/ Pandas]), [[PHP]] (+[http://twig.sensiolabs.org/ Twig], [http://www.smarty.net/ Smarty]), [[Perl]] (+CGI), C (+MPI/MPICH), Fortran 77, BASIC, [[XHTML|X/HTML]] (+[[Cascading Style Sheets|CSS]]), XML, JSON, SQL, RESTful API (+[[Curl|cURL]]), various shell scripting ([[Awk|awk/gawk]], [[sed]], [[bash]], man, [[make|make/gmake]], etc.), [[Regular expression|regular expressions]], [[R programming language|R]], [[LaTeX]], PostScript and PDF generation.<br />
* '''Cloud computing and virtual environments''':<br />
:: [[:Category:AWS|AWS]], [[Google Cloud Platform|GCP]], [[Docker]], [[Kubernetes]], Anthos (+ACM, ASM), [[Rancher]], [[Helm]] (v2/v3), [[Istio]], [[Prometheus]], [[Grafana]], [[OpenShift]], [[etcd]], [[Ansible]], [[Terraform]] (+TFE/+Sentinel), [[Pulumi]], CoreOS, [[LVM]], [[MySQL]], [[Sensu]], Jenkins, [[Vault]], [[Vagrant]], [[Apache]], [[Nginx]], [[Redis]], [[Bash]], [[Python]], [[sed]], [[awk]], Slack API, [[:Category:OpenStack|OpenStack]], [[:Category:XenServer|XenServer]], [[:Category:Rackspace|Rackspace]], [https://www.chef.io/ Chef], [https://www.nagios.org/ Nagios]<br />
* '''Machine learning''':<br />
:: [[TensorFlow]], [[Apache Spark]], [[Python/matplotlib|matplotlib]], [[Python/NumPy|NumPy]], [[Python/SciPy|SciPy]], [[Python/pandas|Pandas]]<br />
* '''Web Applications''':<br />
:: [[LAMP]], [https://www.djangoproject.com/ Django], [http://www.mediawiki.org/ WikiMedia], [http://forum.christophchamp.com/ phpBB] (+MODs, SEOs, and Fetch All), [http://blog.christophchamp.com/ WordPress], [http://drupal.christophchamp.com/ Drupal], [http://twig.sensiolabs.org/ Twig], [http://www.smarty.net/ Smarty], [http://getbootstrap.com/ Bootstrap], [http://www.openx.com/ OpenX] (phpAdsNew), etc.<br />
* '''Computer System Administration and Security''':<br />
:: [[Logical Volume Manager|LVM]], [[SELinux]], [[systemd]], [[iptables]], [[Apache|Apache HTTP Server]], [[Nginx]], Email Server (e.g. [[Postfix]]), [[SSH]]/OpenSSH, FTP (e.g., [[vsftpd]]), PGP/GPG, etc.<br />
* '''Databases''' (SQL, NoSQL, key-value store):<br />
:: [[MySQL]]/MariaDB, Aurora, SQLite, DynamoDB, [[redis]], [[etcd]], [[BLAST]] (standalone and webserver).<br />
* '''Revision control software''':<br />
:: [[Git]] (+GitHub {[[GitHub Actions]]} / +GitLab), [[Subversion|Apache Subversion]] (SVN)<br />
* '''Software (Unix/Linux)''':<br />
:: [[Clustal]], [[CHARMm]], [[:Category:Ccp4|CCP4]], [[:Category:EMBOSS|EMBOSS]], [[TLSMD]]/[[Python Macromolecular Library]] (mmLib), [[DOT]], [[MrBayes]], [[GnuPlot]] (+C API), Grace, [[RasMol]] (+scripting), [[PyMOL]] (+scripting), matplotlib, BioPython, [[GIMP]] (+various image generating techniques), OpenOffice/LibreOffice, [[vi]], etc.<br />
* '''Software (PC/Mac)''':<br />
:: Adobe Photoshop, Adobe Illustrator, Adobe PageMaker, Microsoft Office (Word, Excel, PowerPoint), Mathematica, Maple, etc.<br />
* '''[[:Category:Hobbies|Related Hobbies]]''':<br />
:: IoT, Arduino, [[Raspberry Pi]], ESP8266/ESP32<br />
* '''[[:Category:Linguistics|Languages Spoken]]''':<br />
:: English (fluent), German (college level), and Spanish (college level).<br />
<br />
==Honors, Awards, Memberships, Presentations, and Conferences Attended==<br />
* '''DevOps Workshop''', Seattle, Washington<br />
:: Was lead lecturer for the workshop on Docker, Kubernetes, Rancher, and DevOps, January 2023.<br />
* '''[https://rancher.com/rodeos/ Rancher Rodeo]''', Seattle, Washington<br />
:: Attended conference and presented "5 Challenges of Adopting Kubernetes", 22 October 2019<br />
* '''Rancher Meetup''' (Online)<br />
:: Presented "[https://www.youtube.com/watch?v=esyX35J2mq8 Applying Site Reliability Engineering 'Golden Signals' to your Kubernetes Cluster]", 26 June 2019.<br />
* '''Microsoft DevOps Workshop''', Charlotte, North Carolina<br />
:: Was lead lecturer for the workshop on Docker, Kubernetes, and DevOps, 24 October 2018.<br />
* '''Microsoft DevOps Workshop''', Irvine, California<br />
:: Was lead lecturer for the workshop on Docker, Kubernetes, and DevOps, 9 October 2018.<br />
* '''Microsoft DevOps Workshop''', Chicago, Illinois<br />
:: Was lead lecturer for the workshop on Docker, Kubernetes, and DevOps, 4 October 2018.<br />
* '''Microsoft DevOps Workshop''', Sunnyvale, California<br />
:: Was lead lecturer for the workshop on Docker, Kubernetes, and DevOps, 20 September 2018.<br />
* '''Microsoft DevOps Workshop''', Bellevue, Washington<br />
:: Was lead lecturer for the workshop on Docker, Kubernetes, and DevOps, 10 September 2018.<br />
* '''American Crystallographic Association (ACA) Meeting''', Buffalo, New York.<br />
:: Poster presented ("Identification of Functional Motifs and Binding Site Properties in Potential Drug Targets from Tropical Parasites" &mdash; Arakaki TL, Le Trong I, Larson ET, '''[[Christoph Champ|Champ PC]]''', Neely H, Boni E, Mueller N, Napuli A, Kelley A, Krumm BE, Xiao L, Shibata S, Zhang Z, Deng W, Zucker F, Fan E, Buckner FS, van Voorhis WCE, Verlinde CLMJ, Hol WGJ, Merritt EA. ''[http://www.msgpp.org Medical Structural Genomics of Pathogenic Protozoa Consortium]''.) ([[Abstracts|Click here for abstract]]).<br />
* '''8th Annual Functional Genomics: Quantitative Biology Conference''', Runan, Chalmers, Göteborg, Sweden, 29 August 2005. <br />
:: Attended conference and presented poster ("[[FastContact]]: a free energy scoring tool for protein-protein complex structures" &mdash; P. Christoph Champ, Hui Ma, and Carlos J. Camacho). <br />
* '''[[Critical Assessment of PRotein Interactions|CAPRI: Critical Assessment of PRediction of Interactions]] — Round 7''', Heidelberg, Germany.<br />
:: Participant in Round 7 for CAPRI community-wide experiment on the comparative evaluation of protein-protein docking for structure prediction (Hosted By EMBL/EBI-MSD Group), May 2005. <br />
* '''[[Critical Assessment of PRotein Interactions|CAPRI: Critical Assessment of PRediction of Interactions]] — Round 6''', Heidelberg, Germany.<br />
:: Participant in Round 6 for CAPRI community-wide experiment on the comparative evaluation of protein-protein docking for structure prediction (Hosted By EMBL/EBI-MSD Group), January 2005. <br />
* '''12th Conversation of Biomolecular Structure and Dynamics''', Albany, New York, 19-23 June 2001.<br />
:: Attended conference and presented poster ("Mapping Z-DNA in Human Chromosome 22" &mdash; P. Christoph Champ, Jeffery M. Vargason, Tracy Camp, and P. Shing Ho). <br />
* '''Howard Hughes Medical Institute''' &mdash; Corvallis, Oregon (presented research in 2000 and 2001) <br />
:: 2000 HHMI Summer Undergraduate Research Program at OSU. ([[Abstracts|Click here for abstract]]).<br />
:: 2001 HHMI Summer Undergraduate Research Program at OSU, 29-30 August 2001. ([[Abstracts|Click here for abstract]]).<br />
* '''Oregon State University Honor's College'''<br />
* '''The National Dean's List''' (only 0.5% of US college students receive this award) <br />
* '''Phi Theta Kappa Honor Society'''<br />
<br />
==Extracurricular and Leadership Activities==<br />
* [[:Category:World Travels|World Travels]] &mdash; {{countries}} Countries to-date.<br />
* First aid training (with certificate): 1995, 1999.<br />
* Radioactive handling and safety training (with certificate): 2003.<br />
* Volunteer at Biochemistry Workshop for High School Students &mdash; Oregon State University, Summer 2000.<br />
* Visual and Audio Production Training &mdash; Japan, October 1995—June 1996.<br />
* [[Volunteer Humanitarian Aid Work]] &mdash; Croatia (Zagreb), Summer 1996.<br />
* Volunteer Humanitarian Aid Work &mdash; Hungary (Budapest, Szeged), September 1994—September 1995.<br />
* Volunteer Humanitarian Aid Work &mdash; Russia (Moscow), July 1994—September 1994.<br />
* Volunteer Humanitarian Aid Work &mdash; Byelorussia/Belarus (Minsk), June 1994.<br />
* Volunteer Humanitarian Aid Work &mdash; Lithuania (Vilnius) and Latvia (Riga), January 1994.<br />
* Volunteer Humanitarian Aid Work &mdash; Poland (Warsaw, Skierniewice, Katowice, Kraków), December 1993—May 1994.<br />
* Volunteer Humanitarian Aid Work &mdash; Hungary (Budapest), September 1993—December 1993.<br />
* Volunteer Humanitarian Aid Work &mdash; Ecuador (Guayaquil, Cuenca, Quito), Summer 1993.<br />
<br />
==Academic Courses Taken (graduate level)==<br />
* '''Biocrystallography'''<br />
** Professors: Dr Wim Hol, Dr Ethan Merritt, Dr Jack Johnson, Dr Ning Zheng, Dr Ron Stenkamp, and Dr Wenqing Xu<br />
* '''Biological X-ray Structure Analysis'''<br />
** Professors: Dr Ron Stenkamp, Dr Wenqing Xu, and Dr Werner Kaminsky<br />
** Textbook: ''X-ray Structure Determination'', Stout and Jensen<br />
* '''[[Molecular Evolution (academic course)|Molecular Evolution]]'''<br />
** Professor: Dr Anders Gorm Pedersen <br />
** Textbook: ''[[Inferring Phylogenies]]'', Joseph Felsenstein, Sinauer Associates, Inc. (2003).<br />
* '''[[DNA Microarray Analysis (academic course)|DNA Microarray Analysis]]'''<br />
** Professor: Dr Henrik Bjørn Nielsen <br />
** Textbook: ''A Biologist's Guide to Analysis of DNA Microarray Data''.<br />
* '''Molecular Cell Biology'''<br />
** Professor: Dr Uffe Hasbro Mortensen and Dr Ivan Mijakovic <br />
** Textbook: ''Molecular Cell Biology'', Scott MP, Matsudaira P, Lodish H, Darnell J, Zipursky L, Kaiser CA, Berk A, and Krieger M. W. H. Freeman, 5th Edition (2003). <br />
* '''Advanced Bioinformatics'''<br />
** Professor: Dr Søren Brunak <br />
** Textbook: ''Guide to Analysis of DNA Microarray Data'', Knudsen S, 2nd Edition (2004). <br />
* '''Comparative Microbial Genomics: A Bioinformatics Approach'''<br />
** Professor: Dr David W. Ussery <br />
<br />
==Academic Courses Taken (relating to BSc degree)==<br />
<br />
* '''General Chemistry I, II, and III''' (+lab)<br />
** Professor: Dr Bridgid Backus <br />
** Textbook: ''General Chemistry'', Darrell D. Ebbing and Steven D. Gammon, Houghton Mifflin Company, Boston, 6th Edition (1999). <br />
* '''Organic Chemistry I, II, and III'''<br />
** Professor: [http://www.cityofhope.org/directory/people/horne-david/Pages/default.aspx Dr David Horne]<br />
** Textbook: ''Organic Chemistry'', Paula Yurkanis Bruice, Prentice-Hall, New Jersey, 3rd Edition (2001). <br />
* '''Experimental Chemistry I and II''' (+lab)<br />
** Professor: Dr Christine Pastorek and Dr John Loesor <br />
** Textbook: ''Principles and Techniques for an Integrated Chemistry Laboratory'', David A. Aikens, ''et al.'', Waveland Press, Inc., Prospect Heights (1984). <br />
* '''Physical Chemistry I, II, and III'''<br />
** Professor: [http://www.chemistry.oregonstate.edu/evans.html Dr Glenn Evans]<br />
** Textbook: ''Physical Chemistry'', Peter Atkins and Julio de Paula, W.H. Freeman and Company, New York, 7th Edition (2002). <br />
*** Physical Chemistry I: Thermochemistry, Thermodynamics, Equilibrium<br />
*** Physical Chemistry II: Quantum Theory (Schrödinger equation), Atomic Structure, Spectroscopy (rotational and vibrational, electronic transitions, magnetic resonance)<br />
*** Physical Chemistry III: Statistical Thermodynamics, Molecular Interactions, Macromolecules and Aggregates, Kinetics and Molecular Reaction Dynamics<br />
* '''[[Biochemistry (academic course)|Biochemistry I, II, and III]]''' (+lab) <br />
** Professor: Dr Michael I. Schimerlik, Dr Tory M. Hagen, Dr Christopher K. Mathews, and Dr George D. Pearson. <br />
** Textbook: ''Biochemistry'', Christopher K. Mathews, K. E. van Holde, and Kevin G. Ahern, Addison Wesley Longman, San Francisco, 3rd Edition (2000). <br />
*** Biochemistry I: Energetics of Life (Thermodynamics, Chemical Reactions and Equilibrium), Nucleic Acids (Properties, Structure, Function), Protein Structure, Protein Function and Evolution, Protein Dynamics, Carbohydrates<br />
*** Biochemistry I-Laboratory: Biochemical Assays<br />
*** Biochemistry II: Lipids, Membranes, Cellular Transport, Enzymes, Carbohydrate Metabolism, Oxidative Processes, Photosynthesis, Lipid Metabolism, Metabolism of Nitrogenous Compounds, Nucleotide Metabolism, Metabolic Coordination/Control/Signal Transduction<br />
*** Biochemistry II-Laboratory: Molecular Biology<br />
*** Biochemistry III: Information Copying (Replication), Restriction, Repair, Recombination, Rearrangement, Amplification, Transcription, Translation, Expression<br />
*** Biochemistry III-Laboratory: Radioactivity<br />
* '''General Biology I, II, and III''' (+lab) <br />
** Professor: Richard M. Liebaert <br />
** Textbook: ''Biology'', Neil A. Campbell, The Benjamin/Cummings Publishing Company, Inc., Redwood City, 5th Edition (1999). <br />
* '''Molecular and Cellular Biology'''<br />
** Professor: Dr Charles R. Wert <br />
** Textbook: ''Essential Cell Biology'', Bruce Alberts, ''et al.'', Garland Publishing, Inc. New York (1998). <br />
* '''Genetics'''<br />
** Professor: Dr Carol Rivin <br />
** Textbook: ''Genetics: From Genes to Genomes'', Leland H. Hartwell, ''et al.'', McGraw-Hill Companies, Inc. Boston (2000). <br />
* '''Evolution'''<br />
** Professor: Dr Stephan J. Arnold <br />
** Textbook: ''Evolution: An Introduction'', Stephen C. Stearns and Rolf F. Hoekstra, Oxford University Press, Oxford (2000). <br />
* '''Physics with Calculus I, II, and III''' (+lab) <br />
** Professor: Dr Henri Jansen, Dr Carl A. Kocher, and Dr Rubin H. Landau <br />
** Textbook: ''Physics for Scientists and Engineers'', Saunders College Publishing, Philadelphia, 5th Edition (2000). <br />
* '''[[Biophysics (academic course)|Biophysics I, II, and III]]'''<br />
** Professor: Dr P. Shing Ho, Dr Victor Hsu, and Dr P. Andrew Karplus <br />
** Textbook: ''Physical Biochemistry'', Kensal E. van Holde, W. Curtis Johnson, and P. Shing Ho, Prentice-Hall, New Jersey (1998). <br />
*** Biophysics I: Biological Macromolecules (Interactions, Environments, Symmetry, Structure), Molecular Thermodynamics, Statistical Thermodynamics (Structural Transitions in Polypeptides and Proteins, Structural Transitions in Polynucleotides and DNA, Nonregular Structures)<br />
*** Biophysics II: Quantum Mechanics, Spectroscopy (Absorption, Linear and Circular Dichroism, Emission, NMR)<br />
*** Biophysics III: Macromolecular Structure Determination, X-ray Crystallography, Hydrogen Exchange (Dynamics, Thermodynamics, Structure), Atomic Force Microscopy, Mass Spectroscopy, Protein Folding<br />
<br />
* '''Calculus I: Differential Calculus'''<br />
* '''Calculus II: Integral Calculus'''<br />
* '''Calculus III: Infinite Series and Sequences'''<br />
* '''Calculus IV: Vector Calculus'''<br />
* '''Calculus V: Differential Equations'''<br />
* '''Mathematical Biology'''<br />
* '''Chemical Information'''<br />
* '''Introduction to C Programming'''<br />
* '''Introduction to Web Authoring'''<br />
* '''Java Programming'''<br />
** Professor: Dr. Jens Thyge Kristensen<br />
** Textbook: ''Object-Oriented Software Development Using Java'', Xiaoping Jia, Addison-Wesley, 2nd Edition.<br />
<br />
==Academic Courses Taken (not relating to degrees)==<br />
<br />
* '''Communication: Interpersonal'''<br />
* '''Economics: Introduction to Microeconomics'''<br />
* '''English: Literature of the Western Civilisation'''<br />
* '''Writing: English Composition I'''<br />
* '''Writing: English Composition III'''<br />
* '''Geography: Geography of Africa and the Middle East'''<br />
* '''Geography: Population Geography'''<br />
* '''Geography: Immigration''' ([http://www.myedu.com/SIS-325-Immigration/course/s/360913/profile/ SIS 325])<br />
* '''History: History of the United States of America – Colonial Period'''<br />
* '''Philosophy: Ethics'''<br />
* '''Philosophy: Great Ideas in Philosophy'''<br />
* '''Photography: Analog Photography & Dark Room Development'''<br />
* '''Political Science: Introduction to U.S. Government & Politics'''<br />
* '''German: Year I, II, and III'''<br />
* '''German Conversation: Level I, II, and III'''<br />
* '''German Culture'''<br />
* '''Health: Emergency First Aid'''<br />
* '''Health: Lifetime Wellness'''<br />
* '''Health: Asbestos General Training''' (Online)<br />
* '''Physical Activity: Karate'''<br />
* '''Physical Activity: Jogging'''<br />
* '''Physical Activity: Weight Training'''<br />
* '''Introduction to Map & Compass Navigation'''<br />
* '''Theatre Arts: Improvisation'''<br />
<br />
==References==<br />
* '''Elhanan Borenstein, PhD''' &mdash; Assistant Professor of Genome Sciences, University of Washington.<br />
* '''Ethan A. Merritt, PhD''' &mdash; Research Associate Professor of Biochemistry and Biological Structure, University of Washington. Member of [http://www.mssgpp.org/ MSGPP].<br />
* '''Carlos J. Camacho, PhD''' &mdash; Associate Professor of Computational Biology, University of Pittsburgh.<br />
* '''P. Shing Ho, PhD''' &mdash; Professor and Chair of Biochemistry and Biophysics, Oregon State University.<br />
* '''Kevin Ahern, PhD''' &mdash; Senior Instructor of Biochemistry and Biophysics, Oregon State University.<br />
* '''Alexander Rich, PhD''' &mdash; William Thompson Sedgwick Professor of Biophysics, Massachusetts Institute of Technology.<br />
<br />
==External links==<br />
* [https://github.com/christophchamp GitHub] &mdash; examples of some of my OSS code (and sandbox)<br />
* [https://www.ohloh.net/accounts/Christoph Ohloh] &mdash; programming history / experience profile (from 2006 to 2010; out-of-date)<br />
* [http://sourceforge.net/users/christophchamp SourceForge] &mdash; links to code contributed to open source projects (out-of-date)<br />
* [https://code.google.com/p/christophchamp/ Google Code :: Christoph Champ] &mdash; examples of various code (and/or scripts) I have written for various projects over the years.<br />
* [https://google.com/+ChristophChamp Google+ Profile]<!--[http://profiles.google.com/christoph.champ Google+]--><br />
* [https://www.freebase.com/view/user/christophchamp Freebase] &mdash; contributor and developer.<br />
* [http://drupal.org/user/1891942 Drupal] &mdash; user and developer of SPARQL code (especially with respect to linking modules for [http://www.freebase.com Freebase] and [http://dbpedia.org/About DBpedia] linked-data).<br />
* ''[http://paper.li/ChristophChamp/1344361017 The Human Microbiome]'' &mdash; a weekly webzine I publish from aggregate sources.<br />
* [https://independent.academia.edu/ChristophChamp Christoph Champ] on Academia.edu<br />
<br />
===List and location of Online resumes (résumé / CV)===<br />
* [http://www.linkedin.com/in/christophchamp LinkedIn]<br />
* [http://www.researchgate.net/profile/Christoph_Champ/ ResearchGate]<br />
* [http://path.to/christophchamp Path.To]<br />
* [http://osrc.dfm.io/christophchamp/ Open Source Report Card]<!--<br />
* [http://www.epernicus.com/cc30 Epernicus]<br />
* [http://www.monster.com Monster.com]<br />
* [http://www.xing.com/ XING]<br />
* [http://christophchamp.emurse.com/ Emurse.com]<br />
* [http://www.jobdango.com/ jobdango.com]<br />
* [http://staff.washington.edu/champc/ University of Washington]--><br />
<br />
===Certifications===<br />
* [https://www.redhat.com/wapps/training/certification/verify.html?certNumber=140-074-097 Red Hat Certified System Administrator] (RHEL 6; Certificate #140-074-097)<br />
* AWS Certified Developer – Associate (Certificate #AWS-ADEV-3367)<br />
* AWS Certified Solutions Architect – Associate (Certificate #AWS-ASA-35064)<br />
* Google Cloud Certified – Professional Cloud Architect (Certification ID: VO1HIA)<br />
* Rancher Certified Consultant<br />
* HashiCorp Certified Terraform Associate<br />
* AWS Certified DevOps Engineer – Professional<br />
* Terraform: Certified HashiCorp Implementation Partner (CHIP)<br />
* SUSE Certified Administrator (SCA) in Rancher 2.5<br />
* SUSE Certified Administrator (SCA) in SUSE Rancher 2.6<br />
<br />
==Keywords==<br />
CV, résumé, resume, bioinformatics, computational biology, scientific programmer, Linux system administrator, cloud computing, Cloud Engineer, Cloud Architect, DevOps, SRE<br />
<br />
<div style="text-align:right;"><small>last update: 14 January 2023</small></div><br />
<br />
<< [[Christoph Champ|back to main article]]<br />
__NOEDITSECTION__<br />
[[Category:Academia]]<br />
[[Category:Personal]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Nginx&diff=8251Nginx2022-12-10T02:30:42Z<p>Christoph: </p>
<hr />
<div>'''Nginx''' is a web server which can also be used as a reverse proxy, load balancer, mail proxy and HTTP cache.<br />
<br />
==Example Nginx configuration files==<br />
<br />
; Basic<br />
<pre><br />
$ cat nginx.conf<br />
<br />
user www www; ## Default: nobody<br />
worker_processes 5; ## Default: 1<br />
error_log logs/error.log;<br />
pid logs/nginx.pid;<br />
worker_rlimit_nofile 8192;<br />
<br />
events {<br />
worker_connections 4096; ## Default: 1024<br />
}<br />
<br />
http {<br />
include conf/mime.types;<br />
include /etc/nginx/proxy.conf;<br />
include /etc/nginx/fastcgi.conf;<br />
index index.html index.htm index.php;<br />
<br />
default_type application/octet-stream;<br />
log_format main '$remote_addr - $remote_user [$time_local] $status '<br />
'"$request" $body_bytes_sent "$http_referer" '<br />
'"$http_user_agent" "$http_x_forwarded_for"';<br />
access_log logs/access.log main;<br />
sendfile on;<br />
tcp_nopush on;<br />
server_names_hash_bucket_size 128; # this seems to be required for some vhosts<br />
<br />
server { # php/fastcgi<br />
listen 80;<br />
server_name domain1.com www.domain1.com;<br />
access_log logs/domain1.access.log main;<br />
root html;<br />
<br />
location ~ \.php$ {<br />
fastcgi_pass 127.0.0.1:1025;<br />
}<br />
}<br />
<br />
server { # simple reverse-proxy<br />
listen 80;<br />
server_name domain2.com www.domain2.com;<br />
access_log logs/domain2.access.log main;<br />
<br />
# serve static files<br />
location ~ ^/(images|javascript|js|css|flash|media|static)/ {<br />
root /var/www/virtual/big.server.com/htdocs;<br />
expires 30d;<br />
}<br />
<br />
# pass requests for dynamic content to rails/turbogears/zope, et al<br />
location / {<br />
proxy_pass http://127.0.0.1:8080;<br />
}<br />
}<br />
<br />
upstream big_server_com {<br />
server 127.0.0.3:8000 weight=5;<br />
server 127.0.0.3:8001 weight=5;<br />
server 192.168.0.1:8000;<br />
server 192.168.0.1:8001;<br />
}<br />
<br />
server { # simple load balancing<br />
listen 80;<br />
server_name big.server.com;<br />
access_log logs/big.server.access.log main;<br />
<br />
location / {<br />
proxy_pass http://big_server_com;<br />
}<br />
}<br />
}<br />
</pre><br />
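<br />
After editing <code>nginx.conf</code>, it is worth validating the syntax before applying it. A quick check (assuming <code>nginx</code> is on your <code>PATH</code> and you can read the config files):<br />
<pre><br />
$ nginx -t<br />
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok<br />
nginx: configuration file /etc/nginx/nginx.conf test is successful<br />
$ nginx -s reload  # tell the master process to re-read the config without dropping connections<br />
</pre><br />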
<br />
; Using SSL/TLS<br />
<pre><br />
server {<br />
listen 80;<br />
server_name www.example.com example.com;<br />
<br />
# Redirect all traffic to SSL<br />
rewrite ^ https://$server_name$request_uri? permanent;<br />
}<br />
<br />
server {<br />
listen 443 ssl default_server;<br />
<br />
# enables SSLv3/TLSv1, but not SSLv2, which is weak and should no longer be used.<br />
# (Note: SSLv3 and TLSv1.0 are themselves considered broken these days; prefer "ssl_protocols TLSv1.2 TLSv1.3;" where your clients allow it.)<br />
ssl_protocols SSLv3 TLSv1;<br />
<br />
# disables all weak ciphers<br />
ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;<br />
<br />
server_name www.example.com example.com;<br />
<br />
## Access and error logs.<br />
access_log /var/log/nginx/access.log;<br />
error_log /var/log/nginx/error.log info;<br />
<br />
## Keep alive timeout set to a greater value for SSL/TLS.<br />
keepalive_timeout 75 75;<br />
<br />
## See the keepalive_timeout directive in nginx.conf.<br />
## Server certificate and key.<br />
ssl on;<br />
ssl_certificate /etc/ssl/certs/example.com-rapidssl.crt;<br />
ssl_certificate_key /etc/ssl/private/example.com-rapidssl.key;<br />
ssl_session_timeout 5m;<br />
<br />
## Strict Transport Security header for enhanced security. See<br />
## http://www.chromium.org/sts. Here it is set to 2 hours;<br />
## set it to whichever age you want.<br />
add_header Strict-Transport-Security "max-age=7200";<br />
<br />
root /var/www/example.com/;<br />
index index.php;<br />
}<br />
</pre><br />
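<br />
To test the above SSL/TLS configuration without purchasing a certificate, you can generate a self-signed one. A minimal sketch (<code>example.com</code> is a placeholder, and browsers will warn about the untrusted issuer):<br />
<pre><br />
$ sudo openssl req -x509 -nodes -newkey rsa:2048 -days 365 \<br />
    -keyout /etc/ssl/private/example.com-selfsigned.key \<br />
    -out /etc/ssl/certs/example.com-selfsigned.crt \<br />
    -subj "/CN=example.com"<br />
</pre><br />
Point <code>ssl_certificate</code> and <code>ssl_certificate_key</code> at these files and reload Nginx.<br />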
<br />
==External links==<br />
* [https://nginx.org/ Official website]<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:World Wide Web]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Docker&diff=8250Docker2022-12-02T22:34:41Z<p>Christoph: /* Examples */</p>
<hr />
<div>'''Docker''' is an open-source project that automates the deployment of applications inside software containers. Quote of features from docker web page:<br />
:Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.<ref>https://www.docker.com/what-docker</ref><br />
<br />
==Introduction==<br />
<br />
''Note: The following is based on content found on the official [https://www.docker.com/what-container Docker website], [[:wikipedia:Docker (software)|Wikipedia]], and various other locations.''<br />
<br />
A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings, for example differences between development and staging environments and help reduce conflicts between teams running different software on the same infrastructure.<br />
<br />
; Lightweight : Docker containers running on a single machine share that machine's operating system kernel; they start instantly and use less compute and RAM. Images are constructed from filesystem layers and share common files. This minimizes disk usage and image downloads are much faster.<br />
; Standard : Docker containers are based on open standards and run on all major Linux distributions, Microsoft Windows, and on any infrastructure including VMs, bare-metal and in the cloud.<br />
; Secure : Docker containers isolate applications from one another and from the underlying infrastructure. Docker provides the strongest default isolation to limit app issues to a single container instead of the entire machine.<br />
<br />
As actions are done to a Docker base image, union file-system layers are created and documented, such that each layer fully describes how to recreate an action. This strategy enables Docker's lightweight images, as only layer updates need to be propagated (compared to full VMs, for example).<br />
<br />
Building on top of facilities provided by the Linux kernel (primarily cgroups and namespaces), a Docker container, unlike a virtual machine, does not require or include a separate operating system. Instead, it relies on the kernel's functionality and uses resource isolation for CPU and memory, and separate namespaces to isolate the application's view of the operating system. Docker accesses the Linux kernel's virtualization features directly using the <code>libcontainer</code> library (written in the Go programming language).<br />
<br />
===Comparing Containers and Virtual Machines===<br />
<br />
Containers and virtual machines have similar resource isolation and allocation benefits, but function differently because containers virtualize the operating system instead of hardware. Containers are more portable and efficient.<br />
<br />
; Virtual Machines : Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, one or more apps, necessary binaries and libraries - taking up tens of GBs. VMs can also be slow to boot.<br />
; Containers : Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space. Containers take up less space than VMs (container images are typically tens of MBs in size), and start almost instantly.<br />
<br />
===Components===<br />
<br />
The Docker software as a service offering consists of three components:<br />
<br />
; Software : The Docker daemon, called "<code>dockerd</code>", is a persistent process that manages Docker containers and handles container objects. The daemon listens for API requests sent by the Docker Engine API. The Docker client, which identifies itself as "<code>docker</code>", allows users to interact with Docker through a CLI. It uses the Docker REST API to communicate with one or more Docker daemons.<br />
; Objects : Docker objects refer to different entities used to assemble an application in Docker. The main Docker objects are images, containers, and services.<br />
:* A Docker container is a standardized, encapsulated environment that runs applications. A container is managed using the Docker API or CLI.<br />
:* A Docker image is a read-only template used to build containers. Images are used to store and ship applications.<br />
:* A Docker service allows containers to be scaled across multiple Docker daemons. The result is known as a "swarm", cooperating daemons that communicate through the Docker API.<br />
; Registries : A Docker registry is a repository for Docker images. Docker clients connect to registries to download ("pull") images for use or upload ("push") images that they have built. Registries can be public or private. Two main public registries are Docker Hub and Docker Cloud. Docker Hub is the default registry where Docker looks for images.<br />
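<br />
For example, moving an image from Docker Hub into a private registry is just a pull/tag/push cycle (a sketch; <code>registry.example.com</code> is a placeholder for your own registry):<br />
<pre><br />
$ docker pull ubuntu:22.04                                   # pull from Docker Hub (the default registry)<br />
$ docker tag ubuntu:22.04 registry.example.com/ubuntu:22.04  # re-tag it for the private registry<br />
$ docker push registry.example.com/ubuntu:22.04              # push it to the private registry<br />
</pre><br />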
<br />
==Docker commands==<br />
<br />
I will provide detailed examples on all of the following commands throughout this article.<br />
<br />
; Basics<br />
<br />
The following are the most common Docker commands (i.e., the ones you will most likely use the most day-to-day):<br />
<br />
* Show all running containers:<br />
$ docker ps<br />
* Show all containers (including stopped and failed ones):<br />
$ docker ps -a<br />
* Show all images in your local repository:<br />
$ docker images<br />
* Create an image based on the instructions in a <code>Dockerfile</code>:<br />
$ docker build<br />
* Start a container from an image (either from your local repository or from a remote repository {e.g., Docker Hub}):<br />
$ docker run<br />
* Remove/delete all ''stopped''/''failed'' containers (leaves running containers alone):<br />
$ docker rm $(docker ps -a -q)<br />
<br />
===Container commands===<br />
<br />
; Container lifecycle<br />
<br />
* Create a container but do not start it:<br />
$ docker create<br />
* Rename a container:<br />
$ docker rename<br />
* Create ''and'' start a container in one operation:<br />
$ docker run<br />
* Delete a container:<br />
$ docker rm<br />
* Update a container's resource limits:<br />
$ docker update<br />
<br />
; Starting and stopping containers<br />
<br />
* Start a container:<br />
$ docker start<br />
* Stop a running container:<br />
$ docker stop<br />
* Stop and then start a container:<br />
$ docker restart<br />
* Pause a running container ("freeze" it in place):<br />
$ docker pause<br />
* Un-pause a paused container:<br />
$ docker unpause<br />
* Attach/connect to a running container:<br />
$ docker attach<br />
* Block until running container stops (and print exit code):<br />
$ docker wait<br />
* Send <code>SIGKILL</code> to a running container:<br />
$ docker kill<br />
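<br />
Putting the lifecycle commands above together, a typical session looks something like the following (using <code>nginx</code> as an arbitrary example image):<br />
<pre><br />
$ docker create --name web nginx   # create the container, but do not start it<br />
$ docker rename web web01          # rename it<br />
$ docker start web01               # start it<br />
$ docker restart web01             # stop, then start it again<br />
$ docker stop web01                # send SIGTERM (then SIGKILL after a grace period)<br />
$ docker rm web01                  # delete the (now stopped) container<br />
</pre><br />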
<br />
; Information<br />
<br />
* Show all ''running'' containers:<br />
$ docker ps<br />
* Get the logs for a given container:<br />
$ docker logs<br />
* Get all of the metadata about a container (e.g., IP address, etc.):<br />
$ docker inspect<br />
* Get real-time events from Docker Engine (e.g., start/stop containers, attach, create, etc.):<br />
$ docker events<br />
* Get the public-facing ports of a given container:<br />
$ docker port<br />
* Show running processes in a given container:<br />
$ docker top<br />
* Show a given container's resource usage statistics:<br />
$ docker stats<br />
* Show changed files in the container's filesystem (i.e., those changed from the original base image):<br />
$ docker diff<br />
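<br />
For example, to check on a running container named <code>webserver</code> (an arbitrary name, used here for illustration):<br />
<pre><br />
$ docker logs --tail 20 webserver                  # last 20 log lines<br />
$ docker inspect -f '{{.State.Status}}' webserver  # => running<br />
$ docker top webserver                             # processes inside the container<br />
</pre><br />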
<br />
; Miscellaneous<br />
<br />
* Get the default environment variables of an image (by running a throw-away container from it):<br />
$ docker run ubuntu env<br />
* IP address of host machine:<br />
$ ip -4 -o addr show eth0<br />
2: eth0 inet 10.0.0.166/23<br />
* IP address of a container:<br />
$ docker run ubuntu ip -4 -o addr show eth0<br />
2: eth0 inet 172.17.0.2/16<br />
<br />
===Image commands===<br />
<br />
; Lifecycle<br />
* Show all images in your local repository:<br />
$ docker images<br />
* Create an image from a tarball:<br />
$ docker import<br />
* Create an image from a <code>Dockerfile</code>:<br />
$ docker build<br />
* Create an image from a container (note: it will pause the container, if it is running, during the commit process):<br />
$ docker commit<br />
* Remove/delete an image:<br />
$ docker rmi<br />
* Load an image from a tarball via STDIN (including images and tags):<br />
$ docker load<br />
* Save an image to a tarball (streamed to STDOUT with all parent layers, tags, and versions):<br />
$ docker save<br />
<br />
; Info<br />
<br />
* Show the history of an image:<br />
$ docker history<br />
* Tag an image:<br />
$ docker tag<br />
<br />
==Dockerfile directives==<br />
<br />
=== USER ===<br />
<pre><br />
$ cat << EOF > Dockerfile<br />
# Non-privileged user entry<br />
FROM centos:latest<br />
MAINTAINER xtof@example.com<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
USER xtof<br />
EOF<br />
</pre><br />
''Note: The use of <code>MAINTAINER</code> has been deprecated in newer versions of Docker. You should use <code>LABEL</code> instead, as it is much more flexible and its key/values show up in <code>docker inspect</code>. From here forward, I will only use <code>LABEL</code>.''<br />
<br />
$ docker build -t centos7/nonroot:v1 .<br />
$ docker exec -it <container_name> /bin/bash<br />
<br />
We are user "xtof" and are unable to become root. The workaround (i.e., how to become root) is like so:<br />
<br />
$ docker exec -u 0 -it <container_name> /bin/bash<br />
<br />
''NOTE: For the remainder of this section, I will omit the <code>$ cat << EOF > Dockerfile</code> part in the examples for brevity.''<br />
<br />
=== RUN ===<br />
<br />
Notes on the order of execution<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
USER xtof<br />
<br />
RUN echo "export PATH=/path/to/my/app:$PATH" >> /etc/bashrc<br />
</pre><br />
<br />
$ docker build -t centos7/config:v1 .<br />
...<br />
/bin/sh: /etc/bashrc: Permission denied<br />
<br />
The order of execution matters! Prior to the directive <code>USER xtof</code>, the user was root. After that directive, the user is now xtof, who does not have super-user privileges. Move the <code>RUN echo ...</code> directive to before the <code>USER xtof</code> directive for a successful build.<br />
<br />
=== ENV ===<br />
''Note: The following is a '''terrible''' way of building a container. I am purposely doing it this way so I can show you a much better way later (see below).''<br />
<br />
* Build a CentOS 7 Docker image with Java 8 installed:<br />
<pre><br />
# SEE: https://gist.github.com/P7h/9741922 for various Java versions<br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN yum update -y<br />
RUN yum install -y net-tools wget<br />
<br />
RUN echo "SETTING UP JAVA"<br />
# The tarball method:<br />
#RUN cd ~ && wget --no-cookies --no-check-certificate \<br />
# --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \<br />
# "http://download.oracle.com/otn-pub/java/jdk/8u91-b14/jdk-8u91-linux-x64.tar.gz"<br />
#RUN tar xzvf jdk-8u91-linux-x64.tar.gz<br />
#RUN mv jdk1.8.0_91 /opt<br />
#ENV JAVA_HOME /opt/jdk1.8.0_91/<br />
<br />
# The rpm method:<br />
RUN cd ~ && wget --no-cookies --no-check-certificate \<br />
--header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \<br />
"http://download.oracle.com/otn-pub/java/jdk/8u161-b12/2f38c3b165be4555a1fa6e98c45e0808/jdk-8u161-linux-x64.rpm"<br />
RUN yum localinstall -y /root/jdk-8u161-linux-x64.rpm<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
USER xtof<br />
<br />
# User specific environment variable<br />
RUN cd ~ && echo "export JAVA_HOME=/usr/java/jdk1.8.0_161/jre" >> ~/.bashrc<br />
# Global (system-wide) environment variable<br />
ENV JAVA_BIN /usr/java/jdk1.8.0_161/jre/bin<br />
</pre><br />
<br />
$ docker build -t centos7/java8:v1 .<br />
<br />
=== CMD vs. RUN ===<br />
<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
CMD ["echo", "Hello from within my container"]<br />
</pre><br />
<br />
The <code>CMD</code> directive ''only'' executes when the container is started, whereas the <code>RUN</code> directive is executed during the build of the image.<br />
<br />
$ docker build -t centos7/echo:v1 .<br />
$ docker run centos7/echo:v1<br />
Hello from within my container<br />
<br />
The container starts, echos out that message, then exits.<br />
<br />
=== ENTRYPOINT ===<br />
<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN useradd -ms /bin/bash xtof<br />
ENTRYPOINT "This command will display this message on EVERY container that is run from it"<br />
</pre><br />
<br />
$ docker build -t centos7/entry:v1 .<br />
$ docker run centos7/entry:v1<br />
This command will display this message on EVERY container that is run from it<br />
$ docker run centos7/entry:v1 /bin/echo "Can you see me?"<br />
This command will display this message on EVERY container that is run from it<br />
$ docker run centos7/echo:v1 /bin/echo "Can you see me?"<br />
Can you see me?<br />
<br />
Note the difference: arguments passed to <code>`docker run`</code> replace an image's <code>CMD</code>, but a shell-form <code>ENTRYPOINT</code> (a plain string, as above) ignores them entirely. Use the exec form, <code>ENTRYPOINT ["command", "arg"]</code>, if you want extra <code>`docker run`</code> arguments appended.<br />
<br />
=== EXPOSE ===<br />
<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN yum update -y<br />
RUN yum install -y httpd net-tools<br />
<br />
RUN echo "This is a custom index file built during the image creation" > /var/www/html/index.html<br />
<br />
ENTRYPOINT apachectl -DFOREGROUND # BAD WAY TO DO THIS!<br />
</pre><br />
<br />
$ docker build -t centos7/apache:v1 .<br />
$ docker run -d --name webserver centos7/apache:v1<br />
$ docker exec webserver /bin/cat /var/www/html/index.html<br />
This is a custom index file built during the image creation<br />
$ docker inspect webserver -f '<nowiki>{{.NetworkSettings.IPAddress}}</nowiki>' # => 172.17.0.6<br />
#~OR~<br />
$ docker inspect webserver | jq -crM '.[] | .NetworkSettings.IPAddress' # => 172.17.0.6<br />
$ curl 172.17.0.6<br />
This is a custom index file built during the image creation<br />
$ curl -sI 172.17.0.6 | awk '/^HTTP|^Server/{print}'<br />
HTTP/1.1 200 OK<br />
Server: Apache/2.4.6 (CentOS)<br />
$ time docker stop webserver<br />
real 0m10.275s # <- notice how long it took to stop the container<br />
user 0m0.008s<br />
sys 0m0.000s<br />
$ docker rm webserver<br />
<br />
It took ~10 seconds to stop the above container. This is because of the way we are (incorrectly) using <code>ENTRYPOINT</code>: the shell form wraps <code>apachectl</code> in a shell, and that shell (running as PID 1) does not forward the <code>SIGTERM</code> sent by <code>`docker stop webserver`</code>, so Docker waits out its ~10-second grace period and then sends <code>SIGKILL</code>. A much better method is shown below, which ''will'' exit gracefully and in less than 300 ms.<br />
<br />
* Expose ports from the CLI<br />
$ docker run -d --name webserver -p 8080:80 centos7/apache:v1<br />
$ curl localhost:8080<br />
This is a custom index file built during the image creation<br />
$ docker stop webserver && docker rm webserver<br />
<br />
* Explicitly expose a port in the Docker image:<br />
<pre><br />
FROM centos:latest<br />
LABEL maintainer="xtof@example.com"<br />
<br />
RUN yum update -y && \<br />
yum install -y httpd net-tools && \<br />
yum autoremove -y && \<br />
echo "This is a custom index file built during the image creation" > /var/www/html/index.html<br />
<br />
EXPOSE 80<br />
<br />
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]<br />
</pre><br />
<br />
$ docker build -t centos7/apache:v1 .<br />
$ docker run -d --rm --name webserver -P centos7/apache:v1<br />
$ docker container ls --format '<nowiki>{{.Names}} {{.Ports}}</nowiki>'<br />
webserver 0.0.0.0:32769->80/tcp<br />
#~OR~<br />
$ docker port webserver | cut -d: -f2<br />
32769<br />
#~OR~<br />
$ docker inspect webserver | jq -crM '[.[] | .NetworkSettings.Ports."80/tcp"[] | .HostPort] | .[]'<br />
32769<br />
$ curl localhost:32769<br />
This is a custom index file built during the image creation<br />
$ time docker stop webserver<br />
real 0m0.283s<br />
user 0m0.004s<br />
sys 0m0.008s<br />
<br />
Note that I passed <code>--rm</code> to the <code>`docker run`</code> command so that the container will be removed when I stop the container. Also note how much faster the container stopped (~300ms vs. 10 seconds above).<br />
<br />
==Container volume management==<br />
<br />
$ docker run -it --name voltest -v /mydata centos:latest /bin/bash<br />
[root@bffdcb88c485 /]# df -h<br />
Filesystem Size Used Avail Use% Mounted on<br />
none 213G 173G 30G 86% /<br />
tmpfs 7.8G 0 7.8G 0% /dev<br />
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup<br />
/dev/mapper/ubuntu--vg-root 213G 173G 30G 86% /mydata<br />
shm 64M 0 64M 0% /dev/shm<br />
tmpfs 7.8G 0 7.8G 0% /sys/firmware<br />
[root@bffdcb88c485 /]# echo "testing" >/mydata/mytext.txt<br />
$ docker inspect voltest | jq -crM '.[] | .Mounts[].Source'<br />
/var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data<br />
$ sudo cat /var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data/mytext.txt<br />
testing<br />
$ sudo /bin/bash -c \<br />
"echo 'this is from the host OS' >/var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data/host.txt"<br />
[root@bffdcb88c485 /]# cat /mydata/host.txt <br />
this is from the host OS<br />
<br />
* Cleanup<br />
$ docker rm voltest<br />
$ docker volume rm 2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b<br />
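<br />
* Named volumes are easier to keep track of than the auto-generated hashes above (<code>mydata-vol</code> is an arbitrary example name):<br />
$ docker volume create mydata-vol<br />
$ docker run -it --name voltest2 -v mydata-vol:/mydata centos:latest /bin/bash<br />
$ docker volume ls<br />
$ docker volume inspect mydata-vol<br />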
<br />
* Mount host's current working directory inside container:<br />
$ echo "my config" >my.conf<br />
$ echo "my message" >message.txt<br />
$ echo "aerwr3adf" >app.bin<br />
$ chmod +x app.bin<br />
$ docker run -it --name voltest -v ${PWD}:/mydata centos:latest /bin/bash<br />
[root@f5f34ccb54fb /]# ls -l /mydata/<br />
total 24<br />
-rwxrwxr-x 1 1000 1000 10 Mar 8 19:29 app.bin<br />
-rw-rw-r-- 1 1000 1000 11 Mar 8 19:29 message.txt<br />
-rw-rw-r-- 1 1000 1000 10 Mar 8 19:29 my.conf<br />
[root@f5f34ccb54fb /]# touch /mydata/foobar<br />
$ ls -l ${PWD}<br />
total 24<br />
-rwxrwxr-x 1 xtof xtof 10 Mar 8 11:29 app.bin<br />
-rw-r--r-- 1 root root 0 Mar 8 11:36 foobar<br />
-rw-rw-r-- 1 xtof xtof 11 Mar 8 11:29 message.txt<br />
-rw-rw-r-- 1 xtof xtof 10 Mar 8 11:29 my.conf<br />
$ docker rm voltest<br />
<br />
==Images==<br />
<br />
===Saving and loading images===<br />
<br />
$ docker pull centos:latest<br />
$ docker run -it centos:latest /bin/bash<br />
[root@29fad368048c /]# yum update -y<br />
[root@29fad368048c /]# echo xtof >/root/built_by.txt<br />
$ docker commit reverent_elion centos:xtof # "reverent_elion" is the random name Docker assigned to the container above; find yours with docker ps -a<br />
$ docker rm reverent_elion<br />
$ docker images<br />
REPOSITORY TAG IMAGE ID CREATED SIZE<br />
centos xtof e0c8bd35ba50 3 seconds ago 463MB<br />
centos latest 980e0e4c79ec 1 minute ago 197MB<br />
$ docker history centos:xtof<br />
IMAGE CREATED CREATED BY SIZE<br />
e0c8bd35ba50 27 seconds ago /bin/bash 266MB <br />
980e0e4c79ec 18 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B <br />
<missing> 18 months ago /bin/sh -c #(nop) LABEL name=CentOS Base ... 0B <br />
<missing> 18 months ago /bin/sh -c #(nop) ADD file:e336b45186086f7... 197MB <br />
<missing> 18 months ago /bin/sh -c #(nop) MAINTAINER <nowiki>https://gith...</nowiki> 0B<br />
<br />
* Save the original <code>centos:latest</code> image we pulled from Docker Hub:<br />
$ docker save --output centos-latest.tar centos:latest<br />
<br />
Note that the above command essentially tars up the contents of the image found in the <code>/var/lib/docker/image</code> directory.<br />
<br />
$ tar tvf centos-latest.tar <br />
-rw-r--r-- 0/0 2309 2016-09-06 14:10 980e0e4c79ec933406e467a296ce3b86685e6b42eed2f873745e6a91d718e37a.json<br />
drwxr-xr-x 0/0 0 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/<br />
-rw-r--r-- 0/0 3 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/VERSION<br />
-rw-r--r-- 0/0 1391 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/json<br />
-rw-r--r-- 0/0 204305920 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/layer.tar<br />
-rw-r--r-- 0/0 202 1969-12-31 16:00 manifest.json<br />
-rw-r--r-- 0/0 89 1969-12-31 16:00 repositories<br />
<br />
* Save space by compressing the tar file:<br />
$ gzip centos-latest.tar # .tar -> 195M; .tar.gz -> 68M<br />
<br />
* Delete the original <code>centos:latest</code> image:<br />
$ docker rmi centos:latest<br />
<br />
* Restore (or load) the image back to our local repository:<br />
$ docker load --input centos-latest.tar.gz<br />
<br />
===Tagging images===<br />
<br />
* List our current images:<br />
$ docker images<br />
REPOSITORY TAG IMAGE ID CREATED SIZE<br />
centos xtof e0c8bd35ba50 About an hour ago 463MB<br />
<br />
* Tag the above image:<br />
$ docker tag e0c8bd35ba50 xtof/centos:v1<br />
$ docker images<br />
REPOSITORY TAG IMAGE ID CREATED SIZE<br />
centos xtof e0c8bd35ba50 About an hour ago 463MB<br />
xtof/centos v1 e0c8bd35ba50 About an hour ago 463MB<br />
<br />
Note that we did not create a new image, we just created a new tag of the same/original <code>centos:xtof</code> image.<br />
<br />
Note: The maximum number of characters in a tag is 128.<br />
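<br />
Tagging is also how you target a registry when pushing. For example, to push the image above to Docker Hub (assuming <code>xtof</code> is your Docker Hub username):<br />
$ docker login<br />
$ docker push xtof/centos:v1<br />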
<br />
==Docker networking==<br />
<br />
===Default networks===<br />
$ ip addr show docker0<br />
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default <br />
link/ether 02:42:c0:75:70:13 brd ff:ff:ff:ff:ff:ff<br />
inet 172.17.0.1/16 scope global docker0<br />
valid_lft forever preferred_lft forever<br />
inet6 fe80::42:c0ff:fe75:7013/64 scope link <br />
valid_lft forever preferred_lft forever<br />
#~OR~<br />
$ ifconfig docker0<br />
docker0 Link encap:Ethernet HWaddr 02:42:c0:75:70:13 <br />
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0<br />
inet6 addr: fe80::42:c0ff:fe75:7013/64 Scope:Link<br />
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1<br />
RX packets:420654 errors:0 dropped:0 overruns:0 frame:0<br />
TX packets:1162975 errors:0 dropped:0 overruns:0 carrier:0<br />
collisions:0 txqueuelen:0 <br />
RX bytes:85851647 (85.8 MB) TX bytes:1196235716 (1.1 GB)<br />
<br />
$ docker network inspect bridge | jq '.[] | .IPAM.Config[].Subnet'<br />
"172.17.0.0/16"<br />
So, the usable range of IP addresses in our 172.17.0.0/16 subnet is 172.17.0.1 &ndash; 172.17.255.254 (with 172.17.0.1 itself taken by the <code>docker0</code> bridge as the gateway).<br />
<br />
$ docker network ls<br />
NETWORK ID NAME DRIVER SCOPE<br />
bf831059febc bridge bridge local<br />
266f6df5c44e host host local<br />
ce79e4043a20 none null local<br />
$ docker ps -q | wc -l<br />
#~OR~<br />
$ docker container ls --format '<nowiki>{{.Names}}</nowiki>' | wc -l<br />
4 # => 4 running containers<br />
$ docker network inspect bridge | jq '.[] | .Containers[].IPv4Address'<br />
"172.17.0.2/16"<br />
"172.17.0.5/16"<br />
"172.17.0.4/16"<br />
"172.17.0.3/16"<br />
The last command lists the IP addresses of the 4 containers currently running on my host.<br />
<br />
===Custom networks===<br />
* Create a Docker network<br />
$ man docker-network-create # for details<br />
$ docker network create --subnet 10.1.0.0/16 --gateway 10.1.0.1 --ip-range=10.1.4.0/24 \<br />
--driver=bridge --label=host4network br04<br />
<br />
* Use the above network with a given container:<br />
$ docker run -it --name net-test --net br04 centos:latest /bin/bash<br />
<br />
* Assign a static IP to a given container in the above (user created) network:<br />
$ docker run -it --name net-test --net br04 --ip 10.1.4.100 centos:latest /bin/bash<br />
<br />
Note: You can ''only'' assign static IPs to user created networks (i.e., you ''cannot'' assign them to the default "bridge" network).<br />
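<br />
* Connect a running container to (or disconnect it from) a user-created network (using the <code>net-test</code> container from above):<br />
$ docker network connect br04 net-test<br />
$ docker network disconnect br04 net-test<br />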
<br />
==Monitoring==<br />
<br />
$ docker top <container_name><br />
$ docker stats <container_name><br />
<br />
===Logs===<br />
<br />
* Fetch logs of a given container:<br />
$ docker logs <container_name><br />
<br />
* Fetch logs of a given container prefixed with timestamps (UTC format by default):<br />
$ docker logs --timestamps <container_name><br />
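<br />
* Follow the logs in real time, limit the initial output, or only show recent entries:<br />
$ docker logs --follow --tail 100 <container_name><br />
$ docker logs --since 10m <container_name><br />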
<br />
===Events===<br />
$ docker events<br />
$ docker events --since '1h'<br />
$ docker events --since '2018-03-08T16:00'<br />
$ docker events --filter event=attach<br />
$ docker events --filter event=destroy<br />
$ docker events --filter event=attach --filter event=die --filter event=stop<br />
<br />
==Cleanup==<br />
<br />
* Check local system disk usage:<br />
<pre><br />
$ docker system df<br />
TYPE TOTAL ACTIVE SIZE RECLAIMABLE<br />
Images 53 3 16.52GB 15.9GB (96%)<br />
Containers 3 1 438.9MB 0B (0%)<br />
Local Volumes 16 2 2.757GB 2.628GB (95%)<br />
Build Cache 0 0 0B 0B<br />
</pre><br />
<br />
Note: Use <code>docker system df --verbose</code> to get even more details.<br />
<br />
* Delete all stopped containers at once and reclaim the disk space they are using:<br />
$ docker container prune<br />
<br />
* Remove all containers (both the running ones and the stopped ones):<br />
<pre><br />
# Old method:<br />
$ docker rm -f $(docker ps -aq)<br />
# New method:<br />
$ docker container rm -f $(docker container ls -aq)<br />
</pre><br />
Note: It is often useful to use the <code>--rm</code> flag when running a container so that it is automatically removed when its PID 1 process stops, thus releasing the unused disk space immediately.<br />
<br />
* Clean up everything all at once ('''CAREFUL!'''):<br />
<pre><br />
$ docker system prune<br />
WARNING! This will remove:<br />
- all stopped containers<br />
- all networks not used by at least one container<br />
- all dangling images<br />
- all dangling build cache<br />
Are you sure you want to continue? [y/N]<br />
</pre><br />
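<br />
* More targeted variants, if you do not want to prune everything at once:<br />
$ docker image prune    # remove dangling images only (add -a to remove all unused images)<br />
$ docker volume prune   # remove all unused local volumes<br />
$ docker network prune  # remove all unused networks<br />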
<br />
==Examples==<br />
<br />
===Simple Nginx server===<br />
<br />
* Create an index.html file:<br />
<pre><br />
$ mkdir html<br />
$ cat << EOF >html/index.html<br />
Hello from Docker<br />
EOF<br />
</pre><br />
<br />
* Create a Dockerfile:<br />
<pre><br />
FROM nginx<br />
COPY html /usr/share/nginx/html<br />
</pre><br />
<br />
* Build the image:<br />
$ docker build -t test-nginx .<br />
<br />
* Start up container, using image built above:<br />
$ docker run --name check-nginx -d -p 8080:80 test-nginx<br />
<br />
* Check that it works:<br />
$ curl <nowiki>http://localhost:8080</nowiki><br />
Hello from Docker<br />
<br />
===Connecting two containers===<br />
<br />
In this example, we will start up a Postgres container and then start up another container and make a connection to the original Postgres container:<br />
<br />
$ docker pull postgres<br />
$ docker run --name test-postgres -e POSTGRES_PASSWORD=mypassword -d postgres<br />
$ docker run -it --rm --link test-postgres:postgres postgres psql -h postgres -U postgres<br />
<pre><br />
Password for user postgres:<br />
psql (11.0 (Debian 11.0-1.pgdg90+2))<br />
Type "help" for help.<br />
<br />
postgres=# SELECT 1;<br />
?column?<br />
----------<br />
1<br />
(1 row)<br />
<br />
postgres=# \q<br />
</pre><br />
<br />
Connection was successful!<br />
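<br />
Note that <code>--link</code> is a legacy feature. The modern equivalent is a user-defined bridge network, on which containers can resolve each other by name. A sketch using the same Postgres image (<code>pg-net</code> is an arbitrary network name):<br />
<pre><br />
$ docker network create pg-net<br />
$ docker run --name test-postgres --network pg-net -e POSTGRES_PASSWORD=mypassword -d postgres<br />
$ docker run -it --rm --network pg-net postgres psql -h test-postgres -U postgres<br />
</pre><br />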
<br />
===Support for various hardware platforms===<br />
<br />
NOTE: If your image is being created on an M1 chip (ARM64) but you want to execute the container on an AMD64 chip, then use <code>FROM --platform=linux/amd64</code> in your Dockerfile so the image can be shipped anywhere. For example:<br />
<pre><br />
FROM node:current-alpine3.15<br />
#FROM --platform=linux/amd64 node:current-alpine3.15<br />
WORKDIR /app<br />
ADD . /app<br />
RUN npm install<br />
#RUN npm install express<br />
EXPOSE 3000<br />
CMD ["npm", "start"]<br />
</pre><br />
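<br />
You can also select the platform at build time instead of hard-coding it in the Dockerfile (with BuildKit enabled), e.g. <code>docker build --platform linux/amd64 -t myapp .</code>, or build for several architectures at once with Docker Buildx (a sketch; <code>myrepo/myapp</code> is a placeholder):<br />
<pre><br />
$ docker buildx build --platform linux/amd64,linux/arm64 -t myrepo/myapp:latest --push .<br />
</pre><br />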
<br />
==Docker compose==<br />
<br />
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the [https://docs.docker.com/compose/overview/#features list of features].<br />
<br />
Using Compose is basically a three-step process:<br />
# Define your app's environment with a <code>Dockerfile</code> so it can be reproduced anywhere.<br />
# Define the services that make up your app in <code>docker-compose.yml</code> so they can be run together in an isolated environment.<br />
# Run <code>docker-compose up</code> and Compose starts and runs your entire app.<br />
<br />
===Basic example===<br />
<br />
''Note: This is based off of [https://docs.docker.com/compose/gettingstarted/ this article].''<br />
<br />
In this basic example, we will build a simple Python web application running on Docker Compose. The application uses the Flask framework and maintains a hit counter in Redis.<br />
<br />
''Note: This section assumes you already have Docker Engine and [https://docs.docker.com/compose/install/#install-compose Docker Compose] installed.''<br />
<br />
* Create a directory for the project:<br />
$ mkdir compose-test && cd $_<br />
<br />
* Create a file called <code>app.py</code> in your project directory and paste this in:<br />
<pre><br />
import time<br />
import redis<br />
from flask import Flask<br />
<br />
<br />
app = Flask(__name__)<br />
cache = redis.Redis(host='redis', port=6379)<br />
<br />
<br />
def get_hit_count():<br />
retries = 5<br />
while True:<br />
try:<br />
return cache.incr('hits')<br />
except redis.exceptions.ConnectionError as exc:<br />
if retries == 0:<br />
raise exc<br />
retries -= 1<br />
time.sleep(0.5)<br />
<br />
<br />
@app.route('/')<br />
def hello():<br />
count = get_hit_count()<br />
return 'Hello World! I have been seen {} times.\n'.format(count)<br />
<br />
if __name__ == "__main__":<br />
app.run(host="0.0.0.0", debug=True)<br />
</pre><br />
<br />
In this example, <code>redis</code> is the hostname of the redis container on the application's network. We use the default port for Redis: <code>6379</code>.<br />
<br />
* Create another file called <code>requirements.txt</code> in your project directory and paste this in:<br />
flask<br />
redis<br />
<br />
* Create a Dockerfile<br />
*: This Dockerfile will be used to build an image that contains all the dependencies the Python application requires, including Python itself.<br />
<pre><br />
FROM python:3.4-alpine<br />
ADD . /code<br />
WORKDIR /code<br />
RUN pip install -r requirements.txt<br />
CMD ["python", "app.py"]<br />
</pre><br />
<br />
* Create a file called <code>docker-compose.yml</code> in your project directory and paste the following:<br />
<pre><br />
version: '3'<br />
services:<br />
web:<br />
build: .<br />
ports:<br />
- "5000:5000"<br />
redis:<br />
image: "redis:alpine"<br />
</pre><br />
<br />
* Build and run this app with Docker Compose:<br />
$ docker-compose up<br />
<br />
Compose pulls a Redis image, builds an image for your code, and starts the services you defined. In this case, the code is statically copied into the image at build time.<br />
<br />
* Test the application:<br />
$ curl localhost:5000<br />
Hello World! I have been seen 1 times.<br />
<br />
$ for i in $(seq 1 10); do curl -s localhost:5000; done<br />
Hello World! I have been seen 2 times.<br />
Hello World! I have been seen 3 times.<br />
Hello World! I have been seen 4 times.<br />
Hello World! I have been seen 5 times.<br />
Hello World! I have been seen 6 times.<br />
Hello World! I have been seen 7 times.<br />
Hello World! I have been seen 8 times.<br />
Hello World! I have been seen 9 times.<br />
Hello World! I have been seen 10 times.<br />
Hello World! I have been seen 11 times.<br />
<br />
* List containers:<br />
<pre><br />
$ docker-compose ps<br />
Name Command State Ports <br />
-------------------------------------------------------------------------------------<br />
compose-test_redis_1 docker-entrypoint.sh redis ... Up 6379/tcp <br />
compose-test_web_1 python app.py Up 0.0.0.0:5000->5000/tcp<br />
</pre><br />
<br />
* Display the running processes:<br />
<pre><br />
$ docker-compose top<br />
compose-test_redis_1<br />
UID PID PPID C STIME TTY TIME CMD <br />
--------------------------------------------------------------------<br />
systemd+ 29401 29367 0 15:28 ? 00:00:00 redis-server <br />
<br />
compose-test_web_1<br />
UID PID PPID C STIME TTY TIME CMD <br />
--------------------------------------------------------------------------------<br />
root 29407 29373 0 15:28 ? 00:00:00 python app.py <br />
root 29545 29407 0 15:28 ? 00:00:00 /usr/local/bin/python app.py<br />
</pre><br />
<br />
* Shutdown app:<br />
Press Ctrl+C (in the terminal running <code>`docker-compose up`</code>)<br />
#~OR~<br />
$ docker-compose down<br />
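<br />
* Alternatively, run the app in the background (detached) and tail its logs:<br />
$ docker-compose up -d<br />
$ docker-compose logs -f web<br />
$ docker-compose down<br />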
<br />
==Install docker==<br />
<br />
===Debian-based distros===<br />
<br />
; Ubuntu 16.04 (Xenial Xerus)<br />
''Note: For this install, I will be using Ubuntu 16.04 LTS (Xenial Xerus). Docker requires a 64-bit version of Ubuntu as well as a kernel version equal to or greater than 3.10. My system satisfies both requirements.''<br />
<br />
* Setup the docker repo to install from:<br />
$ sudo apt-get update -y<br />
$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D<br />
$ echo "deb <nowiki>https://apt.dockerproject.org/repo ubuntu-xenial main</nowiki>" | sudo tee /etc/apt/sources.list.d/docker.list<br />
$ sudo apt-get update -y<br />
<br />
Make sure you are about to install from the Docker repo instead of the default Ubuntu 16.04 repo:<br />
<br />
$ apt-cache policy docker-engine<br />
<br />
The output of the above command should look something like the following:<br />
<pre><br />
docker-engine:<br />
Installed: (none)<br />
Candidate: 17.05.0~ce-0~ubuntu-xenial<br />
Version table:<br />
17.05.0~ce-0~ubuntu-xenial 500<br />
500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages<br />
17.04.0~ce-0~ubuntu-xenial 500<br />
500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages<br />
...<br />
</pre><br />
<br />
* Install docker:<br />
$ sudo apt-get install -y docker-engine<br />
<br />
; Ubuntu 18.04 (Bionic Beaver)<br />
<br />
$ sudo apt update<br />
$ sudo apt install -y apt-transport-https ca-certificates curl software-properties-common<br />
$ curl -fsSL <nowiki>https://download.docker.com/linux/ubuntu/gpg</nowiki> | sudo apt-key add -<br />
$ sudo add-apt-repository "deb [arch=amd64] <nowiki>https://download.docker.com/linux/ubuntu</nowiki> $(lsb_release -cs) stable"<br />
$ sudo apt update<br />
$ apt-cache policy docker-ce<br />
<pre><br />
docker-ce:<br />
Installed: (none)<br />
Candidate: 5:18.09.0~3-0~ubuntu-bionic<br />
Version table:<br />
5:18.09.0~3-0~ubuntu-bionic 500<br />
500 <nowiki>https://download.docker.com/linux/ubuntu</nowiki> bionic/stable amd64 Packages<br />
</pre><br />
<br />
$ sudo apt install docker-ce -y<br />
$ sudo systemctl status docker<br />
<pre><br />
● docker.service - Docker Application Container Engine<br />
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)<br />
Active: active (running) since Tue 2018-12-04 13:40:36 PST; 4s ago<br />
Docs: https://docs.docker.com<br />
Main PID: 6134 (dockerd)<br />
Tasks: 16<br />
CGroup: /system.slice/docker.service<br />
└─6134 /usr/bin/dockerd -H unix://<br />
</pre><br />
<br />
===Red Hat-based distros===<br />
''Note: For this install, I will be using CentOS 7 (release 7.2.1511). Docker requires a 64-bit version of CentOS as well as a kernel version equal to or greater than 3.10. My system satisfies both requirements.''<br />
<br />
* Install Docker (the fast way):<br />
$ sudo yum update -y<br />
$ curl -fsSL <nowiki>https://get.docker.com/</nowiki> | sh<br />
<br />
* Install Docker (via a yum repo):<br />
$ sudo yum update -y<br />
$ sudo pip install docker-py<br />
$ cat << EOF > /etc/yum.repos.d/docker.repo<br />
[dockerrepo]<br />
name=Docker Repository<br />
baseurl=<nowiki>https://yum.dockerproject.org/repo/main/centos/7/</nowiki><br />
enabled=1<br />
gpgcheck=1<br />
gpgkey=<nowiki>https://yum.dockerproject.org/gpg</nowiki><br />
EOF<br />
$ sudo rpm -vv --import <nowiki>https://yum.dockerproject.org/gpg</nowiki><br />
$ sudo yum update -y<br />
$ sudo yum install docker-engine -y<br />
<br />
===Post-installation steps===<br />
* Check on the status of docker:<br />
$ sudo systemctl status docker<br />
<pre><br />
● docker.service - Docker Application Container Engine<br />
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)<br />
Active: active (running) since Tue 2016-07-12 12:31:08 PDT; 6s ago<br />
Docs: https://docs.docker.com<br />
Main PID: 3392 (docker)<br />
CGroup: /system.slice/docker.service<br />
├─3392 /usr/bin/docker daemon -H fd://<br />
└─3411 docker-containerd -l /var/run/docker/libcontainerd/docker-containerd.sock --runtime docker-runc --start-timeout 2m<br />
</pre><br />
<br />
* Make sure the docker service automatically starts after a machine reboot:<br />
$ sudo systemctl enable docker<br />
<br />
* Execute docker without <code>`sudo`</code>:<br />
$ sudo usermod -aG docker $(whoami)<br />
#~OR~<br />
$ sudo usermod -aG docker $USER<br />
Log out and log back in to use docker without <code>`sudo`</code>.<br />
<br />
* Check version of Docker installed:<br />
<pre><br />
$ docker version<br />
Client:<br />
Version: 17.05.0-ce<br />
API version: 1.29<br />
Go version: go1.7.5<br />
Git commit: 89658be<br />
Built: Thu May 4 22:10:54 2017<br />
OS/Arch: linux/amd64<br />
<br />
Server:<br />
Version: 17.05.0-ce<br />
API version: 1.29 (minimum version 1.12)<br />
Go version: go1.7.5<br />
Git commit: 89658be<br />
Built: Thu May 4 22:10:54 2017<br />
OS/Arch: linux/amd64<br />
Experimental: false<br />
</pre><br />
<br />
* Check that docker has been successfully installed and configured:<br />
$ docker run hello-world<br />
<pre><br />
...<br />
This message shows that your installation appears to be working correctly.<br />
...<br />
</pre><br />
<br />
As the above message shows, you now have a successful install of Docker on your machine and are ready to start building images and creating containers.<br />
<br />
==Miscellaneous==<br />
<br />
* Get the hostname of the host the Docker Engine is running on:<br />
$ docker info -f '<nowiki>{{ .Name }}</nowiki>'<br />
<br />
* Get the number of stopped containers:<br />
$ docker info --format '<nowiki>{{json .}}</nowiki>' | jq '.ContainersStopped'<br />
3<br />
<br />
* Get the number of images in the local registry:<br />
$ docker info --format '<nowiki>{{json .}}</nowiki>' | jq '.Images'<br />
92<br />
<br />
* Verify the Docker service is running:<br />
<pre><br />
$ curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock http://localhost/_ping<br />
OK<br />
</pre><br />
<br />
* Show docker disk usage<br />
<pre><br />
$ docker system df<br />
TYPE TOTAL ACTIVE SIZE RECLAIMABLE<br />
Images 84 11 25.01GB 20.44GB (81%)<br />
Containers 20 0 768.1MB 768.1MB (100%)<br />
Local Volumes 16 2 2.693GB 2.628GB (97%)<br />
Build Cache 0 0 0B 0B<br />
</pre><br />
<br />
* Get ''just'' the version of Docker installed:<br />
<pre><br />
$ docker version --format '{{.Server.Version}}'<br />
20.10.7<br />
$ docker version --format '{{.Server.Version}}' 2>/dev/null || docker -v | awk '{gsub(/,/, "", $3); print $3}'<br />
20.10.7<br />
</pre><br />
<br />
==Install your own Docker private registry==<br />
''Note: I will use CentOS 7 for this install and assume you already have docker and docker-compose installed (see above).''<br />
<br />
For this install, I will assume you have a domain name registered somewhere. I will use <code>docker.example.com</code> as my example domain. Replace anywhere you see that below with your actual domain name.<br />
<br />
* Install dependencies:<br />
$ yum install -y nginx # used for the registry endpoint<br />
$ yum install -y httpd-tools # for the htpasswd utility<br />
<br />
* Setup docker registry directory structure:<br />
$ mkdir -p /opt/docker-registry/{data,nginx{/conf.d,/certs},log}<br />
$ cd /opt/docker-registry<br />
<br />
* Create a docker-compose file:<br />
$ vim docker-compose.yml # and add the following:<br />
<br />
<pre><br />
nginx:<br />
image: "nginx:1.9"<br />
ports:<br />
- 5043:443<br />
links:<br />
- registry:registry<br />
volumes:<br />
- ./log/nginx/:/var/log/nginx:rw<br />
- ./nginx/conf.d:/etc/nginx/conf.d:ro<br />
- ./nginx/certs:/etc/nginx/certs:ro<br />
registry:<br />
image: registry:2<br />
ports:<br />
- 127.0.0.1:5000:5000<br />
environment:<br />
REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data<br />
volumes:<br />
- ./data:/data<br />
</pre><br />
<br />
* Create an Nginx configuration file:<br />
$ vim /opt/docker-registry/nginx/conf.d/registry.conf # and add the following:<br />
<br />
<pre><br />
upstream docker-registry {<br />
server registry:5000;<br />
}<br />
<br />
server {<br />
listen 443;<br />
server_name docker.example.com;<br />
<br />
# SSL<br />
ssl on;<br />
ssl_certificate /etc/nginx/certs/docker.example.com.crt;<br />
ssl_certificate_key /etc/nginx/certs/docker.example.com.key;<br />
<br />
# disable any limits to avoid HTTP 413 for large image uploads<br />
client_max_body_size 0;<br />
<br />
# required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)<br />
chunked_transfer_encoding on;<br />
<br />
location /v2/ {<br />
# Do not allow connections from docker 1.5 and earlier<br />
# docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents<br />
if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {<br />
return 404;<br />
}<br />
<br />
proxy_pass http://docker-registry;<br />
proxy_set_header Host $http_host; # required for docker client's sake<br />
proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP<br />
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;<br />
proxy_set_header X-Forwarded-Proto $scheme;<br />
proxy_read_timeout 900;<br />
<br />
add_header 'Docker-Distribution-Api-Version:' 'registry/2.0' always;<br />
<br />
# To add basic authentication to v2 use auth_basic setting plus add_header<br />
auth_basic "Restricted access to Docker Registry";<br />
auth_basic_user_file /etc/nginx/conf.d/registry.htpasswd;<br />
}<br />
}<br />
</pre><br />
<br />
$ cd /opt/docker-registry/nginx/conf.d<br />
$ htpasswd -c registry.htpasswd <username> # replace <username> with your actual username<br />
$ htpasswd registry.htpasswd <username2> # [optional] add a 2nd user<br />
<br />
* Setup your own certificate signing authority (for use with SSL):<br />
<br />
$ cd /opt/docker-registry/nginx/certs<br />
<br />
* Generate a new root key:<br />
<br />
$ openssl genrsa -out docker-registry-CA.key 2048<br />
<br />
* Generate a root certificate (enter anything you like at the prompts):<br />
<br />
$ openssl req -x509 -new -nodes -key docker-registry-CA.key -days 3650 -out docker-registry-CA.crt<br />
<br />
Then generate a key for your server (this is the file referenced by <code>ssl_certificate_key</code> in the Nginx configuration above):<br />
<br />
$ openssl genrsa -out docker.example.com.key 2048<br />
<br />
Now we have to make a certificate signing request (CSR). After you type the following command, OpenSSL will prompt you to answer a few questions. Enter anything you like for the first few, however, when OpenSSL prompts you to enter the "Common Name", make sure to enter the domain or IP of your server.<br />
<br />
$ openssl req -new -key docker.example.com.key -out docker.example.com.csr<br />
<br />
* Sign the certificate request:<br />
<br />
$ openssl x509 -req -in docker.example.com.csr -CA docker-registry-CA.crt -CAkey docker-registry-CA.key -CAcreateserial -out docker.example.com.crt -days 3650<br />
<br />
* Force any clients that will use the certificate authority we created above to accept that it is a "legitimate" certificate. Run the following commands on the Docker registry server and on any hosts that will be communicating with the Docker registry server:<br />
<br />
$ sudo cp /opt/docker-registry/nginx/certs/docker-registry-CA.crt /etc/pki/ca-trust/source/anchors/<br />
$ sudo update-ca-trust extract<br />
(On CentOS/RHEL the anchor directory is <code>/etc/pki/ca-trust/source/anchors/</code>; on Debian/Ubuntu hosts, copy the certificate to <code>/usr/local/share/ca-certificates/</code> and run <code>update-ca-certificates</code> instead, as done on the client machine below.)<br />
<br />
* Restart the Docker daemon in order for it to pick up the changes to the certificate store:<br />
<br />
$ sudo systemctl restart docker.service<br />
<br />
* Bring up the associated Docker containers:<br />
$ docker-compose up -d<br />
<br />
* Your Docker registry directory structure should look like the following:<br />
<pre><br />
$ cd /opt/docker-registry && tree .<br />
.<br />
├── data<br />
├── docker-compose.yml<br />
├── log<br />
│ └── nginx<br />
│ ├── access.log<br />
│ └── error.log<br />
└── nginx<br />
├── certs<br />
│ ├── docker-registry-CA.crt<br />
│ ├── docker-registry-CA.key<br />
│ ├── docker-registry-CA.srl<br />
│ ├── docker.example.com.crt<br />
│ ├── docker.example.com.csr<br />
│ └── docker.example.com.key<br />
└── conf.d<br />
├── registry.conf<br />
└── registry.htpasswd<br />
</pre><br />
<br />
* To access the private Docker registry from a client machine (any machine, really), first add the SSL certificate you created earlier to the client machine:<br />
<br />
$ cat /opt/docker-registry/nginx/certs/docker-registry-CA.crt # copy contents<br />
# On client machine:<br />
$ sudo vim /usr/local/share/ca-certificates/docker-registry-CA.crt # paste contents<br />
$ sudo update-ca-certificates # You should see "1 added" in the output<br />
<br />
* Restart Docker on the client machine to make sure it reloads the system's CA certificates:<br />
<br />
$ sudo service docker restart<br />
<br />
* Test that you can reach your private Docker registry:<br />
$ curl -k <nowiki>https://USERNAME:PASSWORD@docker.example.com:5043/v2/</nowiki><br />
{} # <- proper output<br />
<br />
* Now, test that you can login with Docker:<br />
$ docker login <nowiki>https://docker.example.com:5043</nowiki><br />
<br />
If that returns with "Login Succeeded", your private Docker registry is up and running!<br />
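<br />
* As a final smoke test, push a small image to the new registry and pull it back (a sketch using the <code>alpine</code> image):<br />
$ docker pull alpine<br />
$ docker tag alpine docker.example.com:5043/alpine:test<br />
$ docker push docker.example.com:5043/alpine:test<br />
$ docker pull docker.example.com:5043/alpine:test<br />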
<br />
'''This section is incomplete. It will be updated when I have time.'''<br />
<br />
==Docker environment variables==<br />
''Note: See [https://docs.docker.com/engine/reference/commandline/cli/#environment-variables here] for the most up-to-date list of environment variables.''<br />
<br />
The following list of environment variables are supported by the docker command line:<br />
<br />
;<code>DOCKER_API_VERSION</code> : The API version to use (e.g., 1.19)<br />
;<code>DOCKER_CONFIG</code> : The location of your client configuration files.<br />
;<code>DOCKER_CERT_PATH</code> : The location of your authentication keys.<br />
;<code>DOCKER_DRIVER</code> : The graph driver to use.<br />
;<code>DOCKER_HOST</code> : Daemon socket to connect to.<br />
;<code>DOCKER_NOWARN_KERNEL_VERSION</code> : Prevent warnings that your Linux kernel is unsuitable for Docker.<br />
;<code>DOCKER_RAMDISK</code> : If set, this will disable "pivot_root".<br />
;<code>DOCKER_TLS_VERIFY</code> : When set, Docker uses TLS and verifies the remote.<br />
;<code>DOCKER_CONTENT_TRUST</code> : When set, Docker uses Notary to sign and verify images. Equates to <code>--disable-content-trust=false</code> for build, create, pull, push, run.<br />
;<code>DOCKER_CONTENT_TRUST_SERVER</code> : The URL of the Notary server to use. This defaults to the same URL as the registry.<br />
;<code>DOCKER_TMPDIR</code> : Location for temporary Docker files.<br />
<br />
Because Docker is developed using "Go", one can also use any environment variables used by the "Go" runtime. In particular, the following might be useful:<br />
<br />
;<code>HTTP_PROXY</code><br />
;<code>HTTPS_PROXY</code><br />
;<code>NO_PROXY</code><br />
<br />
* Example usage:<br />
$ export DOCKER_API_VERSION=1.19<br />
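<br />
* Example proxy usage (<code>proxy.example.com:3128</code> is a placeholder for your proxy). Note that these variables affect the docker ''client''; to send the daemon's own traffic (e.g., image pulls) through a proxy, set the same variables in the daemon's environment (e.g., via a systemd drop-in):<br />
$ export HTTPS_PROXY=http://proxy.example.com:3128<br />
$ export NO_PROXY=localhost,127.0.0.1<br />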
<br />
==References==<br />
<references/><br />
<br />
==External links==<br />
* [https://www.docker.com/ Official website]<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:DevOps]]<br />
[[Category:Linux Command Line Tools]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Category:Travel_Log&diff=8249Category:Travel Log2022-11-27T05:20:38Z<p>Christoph: /* Flights */</p>
<hr />
<div>This category will be my, as yet, unorganised '''Travel Log''' to many places around the world. (Note: The following is very much an ''incomplete'' travel log.)<br />
<br />
== Auto ==<br />
<br />
===Berlin trip (2006)===<br />
* Monaco &rarr; Milano &rarr; Ljubljana &rarr; Rotterdam &rarr; Berlin &rarr; Copenhagen &rarr; Monaco: April 2006<br />
: [http://triptracker.net/trip/1165/ TripTracker]<br />
: 1-Apr-2006 (14h20): Monaco &rarr; Milano<br />
: 2-Apr-2006 (23h30): Milano &rarr; Ljubljana<br />
: 3-Apr-2006 &ndash; 5-Apr-2006: Slovenia (Ljubljana, Novo Mesto, Kranj, Postojna, Jesenice, etc.)<br />
: 5-Apr-2006 (12h30): |&larr; Austria (Villach)<br />
: 5-Apr-2006 (15h15): |&larr; Germany<br />
: 5-Apr-2006 (19h15): Stuttgart<br />
: 5-Apr-2006 (20h20): Karlsruhe<br />
: 5-Apr-2006 (23h30): Köln<br />
: 5-Apr-2006 (00h10): |&larr; The Netherlands<br />
: 5-Apr-2006 (02h00): Rotterdam<br />
: 7-Apr-2006 (12h00): |&rarr; Rotterdam<br />
: 7-Apr-2006 (14h45): |&larr; Germany<br />
: 7-Apr-2006 (17h00): Hannover<br />
: 7-Apr-2006 (18h30): Magdeburg<br />
: 7-Apr-2006 (20h00): Berlin<br />
: 8-Apr-2006 (15h30): |&rarr; Berlin<br />
: 8-Apr-2006 (18h00): Rostock<br />
: 8-Apr-2006 (19h30): Ferry (|&rarr; Germany from Rostock Harb.)<br />
: 8-Apr-2006 (21h15): Ferry (|&larr; Denmark at Gedser)<br />
: 8-Apr-2006 (23h20): København<br />
: 9-Apr-2006 (06h30): |&rarr; København<br />
: 9-Apr-2006 (09h00): Ferry (|&rarr; Denmark from Gedser)<br />
: 9-Apr-2006 (11h00): Ferry (|&larr; Germany at Rostock Harb.)<br />
: 9-Apr-2006 (13h30): |&larr; Berlin<br />
: 9-Apr-2006 (14h00): |&rarr; Berlin<br />
: 9-Apr-2006 (15h50): Dresden<br />
:10-Apr-2006 (00h45): |&larr; Slovenia<br />
:10-Apr-2006 (01h40): Ljubljana<br />
:10-Apr-2006 (02h40): Postojna<br />
:10-Apr-2006 (13h15): |&larr; Italy<br />
:10-Apr-2006 (15h00): Padova<br />
:10-Apr-2006 (15h40): Verona<br />
:10-Apr-2006 (18h50): Genova<br />
:10-Apr-2006 (20h35): |&larr; France<br />
:10-Apr-2006 (20h45): |&larr; Monaco<br />
<br />
===Canada trip (2001)===<br />
''Note: The total trip covered 11,893 km (7,390 miles).''<br />
*Corvallis, OR &rarr; Boston, MA &rarr; Quebec &rarr; Ontario &rarr; Manitoba &rarr; Saskatchewan &rarr; Alberta &rarr; British Columbia &rarr; Corvallis, OR<br />
** 01-Sep-2001 (??h??): |&rarr; Corvallis, OR<br />
** 06-Sep-2001 (15h45): |&larr; Massachusetts<br />
** 13-Sep-2001 (13h15): |&rarr; Westborough, MA<br />
** 13-Sep-2001 (17h46): Augusta, ME<br />
** 13-Sep-2001 (18h15): |&larr; CANADA (into Quebec)<br />
** 14-Sep-2001 (02h06): Grande Allee Est., Quebec<br />
** 14-Sep-2001 (15h01): Cap-Madeleine, PQ<br />
** 15-Sep-2001 (17h44): Thunder Bay, ON<br />
** 14-Sep-2001 (17h45): |&larr; Ontario<br />
** 14-Sep-2001 (20h03): Cobden, ON<br />
** 15-Sep-2001 (12h02): Sudbury, ON<br />
** 15-Sep-2001 (10h25): Wawa, ON<br />
** 15-Sep-2001 (22h01): Kenora, ON<br />
** 15-Sep-2001 (10h37): |&larr; Manitoba<br />
** 16-Sep-2001 (10h53): Brandon, MB<br />
** 16-Sep-2001 (12h50): |&larr; Saskatchewan<br />
** 16-Sep-2001 (16h09): Herbert, SK<br />
** 16-Sep-2001 (18h06): |&larr; Alberta<br />
** 16-Sep-2001 (23h00): |&larr; British Columbia<br />
** 17-Sep-2001 (00h30): |&larr; USA (into Idaho)<br />
** 17-Sep-2001 (03h36): Coeur d'Alene, ID<br />
** 17-Sep-2001 (05h30): |&larr; Oregon<br />
<br />
===Ireland trip (1999-2000)===<br />
* 26-Dec-1999 (??h??): Dublin, Ireland<br />
* 26-Dec-1999 (16h13): Lord Edward St., Dublin<br />
* 27-Dec-1999 (??h??): Kinlay House, Christchurch, 2-12 Lord Edward St., Dublin, Ireland<br />
* 2?-Dec-1999 (??h??): Kilkenny<br />
* 28-Dec-1999 (12h27): Patrick St., Cork<br />
* 28-Dec-1999 (17h12): Mallow, Co. Cork<br />
* 29-Dec-1999 (??h??): Co. Kerry<br />
* ??-Dec-1999 (??h??): Saratoga House (Bed & Breakfast), Muckross Road, Killarney, Ireland<br />
* 29-Dec-1999 (15h09): Chapel St., Limerick<br />
* 29-Dec-1999 (15h18): Eimear<br />
* 30-Dec-1999 (??h??): Ballybofey<br />
* 30-Dec-1999 (15h51): Greysteel<br />
* 30-Dec-1999 (??h??): O'Connell St., Sligo<br />
* 30-Dec-1999 (??h??): Petra, Galway<br />
* 30-Dec-1999 (??h??): Sligo<br />
* 30-Dec-1999 (??h??): The Linen House Backpackers Hostel, 18-20 Kent Street, Belfast, Ireland<br />
* 01-Jan-2000 (14h46): Arthur Sq., Belfast<br />
* 02-Jan-2000 (06h34): Dublin Airport<br />
<br />
===Miscellaneous (Europe)===<br />
* Budapest, Hungary &rarr; Dubrovnik, Croatia: June/July 2018 (round-trip)<br />
* ''The Cliffs of Møn'', DK: Oct-2005<br />
* Budapest, Hungary &rarr; Vienna, Austria &rarr; Salzburg, Austria &rarr; Chiemsee, Germany: Oct-1996 (round-trip)<br />
* Zagreb, Croatia &rarr; Ljubljana, Slovenia &rarr; Graz, Austria &rarr; Budapest, Hungary: Sep-1996<br />
* Zagreb, Croatia &rarr; Ljubljana, Slovenia: Sep-1996 (round-trip)<br />
* Budapest, Hungary &rarr; Zagreb, Croatia: Sep-1996<br />
* Budapest, Hungary &rarr; Vienna, Austria &rarr; Salzburg, Austria &rarr; Berchtesgaden, Germany &rarr; Innsbruck, Austria &rarr; Liechtenstein &rarr; Switzerland: Aug-1996 (round-trip)<br />
* Warsaw, Poland &rarr; Budapest, Hungary: September 1994<br />
* Budapest, Hungary &rarr; Slovakia (11-Nov-1993) &rarr; Warsaw, Poland: November 1993<br />
* Vienna, Austria &rarr; Budapest, Hungary: 28-Sep-1993<br />
<br />
===Miscellaneous (South America)===<br />
* Cuenca, Ecuador &rarr; Riobamba, Ecuador &rarr; Ambato, Ecuador &rarr; Quito, Ecuador: 1993 (round-trip)<br />
* Quito, Ecuador &#187; Ipiales, Colombia: 1993 (round-trip)<br />
* Guayaquil, Ecuador &rarr; Santo Domingo de Los Colorados, Ecuador &rarr; Quito, Ecuador: 1993<br />
* Guayaquil, Ecuador &rarr; Salinas, Ecuador: 1993 (round-trip)<br />
* Tumbes, Peru &rarr; Guayaquil, Ecuador: 21-Dec-1992<br />
<br />
===Miscellaneous (North America)===<br />
* Seattle, WA &#187; Winthrop, WA &#187; Leavenworth, WA &#187; Issaquah, WA &#187; Seattle, WA: June 2022<br />
* Seattle, WA &#187; Winthrop, WA &#187; Tiger, WA &#187; Spokane, WA &#187; Seattle, WA: May 2022 (1,200 km/744 mi)<br />
* Seattle, WA &#187; Portland, OR &#187; Grants Pass, OR &#187; Crescent City, CA &#187; Redwood National Forest &#187; Newport, OR &#187; Astoria, OR &#187; Elma, WA &#187; Seattle, WA: November 2021 (1,881 km/1,169 mi)<br />
* Seattle, WA &#187; Mt Saint Helens &#187; Mt Adams &#187; Stonehenge Memorial &#187; Multnomah Falls &#187; Seattle, WA: September 2021 (914 km/568 mi)<br />
* Seattle, WA &#187; Walla Walla, WA &#187; Joseph, OR &#187; Lewiston, ID &#187; Grand Coulee, WA &#187; Seattle, WA: June 2021 (1,421 km/883 mi)<br />
* Seattle, WA &#187; Pendleton, OR &#187; Craters of the Moon National Monument & Preserve &#187; Idaho Springs, ID &#187; Jackson, WY &#187; Grand Teton National Park &#187; Yellowstone National Park &#187; Missoula, MT &#187; Coeur d'Alene, ID &#187; Seattle, WA: September 2020 (2,746 km/1,706 mi)<br />
* Seattle, WA &#187; Coeur d'Alene, ID &#187; Missoula, MT &#187; Glacier National Park, MT &#187; Seattle, WA: July 2019 (1,984 km/1,233 mi)<br />
* Seattle, WA &#187; Corvallis, OR: November 2018 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2017 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2016 (round-trip)<br />
* Seattle, WA &#187; Corvallis, OR: November 2015 (round-trip)<br />
* Texas &#187; Oklahoma &#187; Kansas &#187; Nebraska &#187; South Dakota &#187; Wyoming &#187; Montana &#187; Idaho &#187; Seattle, WA: September 2015 (4,000 km/4,290 mi)<br />
* Seattle, WA &#187; Oregon &#187; Idaho &#187; Utah &#187; Wyoming &#187; Colorado &#187; Kansas &#187; Oklahoma &#187; Texas: 11-16 May 2013<br />
* Seattle, WA &#187; Port Angeles, WA &#187; Hurricane Ridge, WA: 28-Dec-2012 (round-trip)<br />
* Seattle, WA &#187; Portland, OR: 4-Dec-2012 (round-trip)<br />
* Chicago, IL &#187; Milwaukee, WI &#187; Minneapolis, MN &#187; Fargo, ND &#187; Billings, MT &#187; Coeur d'Alene, ID &#187; Seattle, WA: 25-26 June 2012 (3,357 km/2,086 mi)<br />
* St. Louis, MO &#187; Chicago, IL: 31-Dec-2011<br />
* Chicago, IL &#187; St. Louis, MO: 5-Jul-2011<br />
* Milwaukee, WI &#187; Chicago, IL: 30-Jun-2011<br />
* Pittsburgh, PA &#187; New York City, NY: April 2005 (round-trip)<br />
* Pittsburgh, PA &#187; Bethlehem, PA &#187; Westborough, MA &#187; New York City, NY: December 2004 (round-trip)<br />
* Pittsburgh, PA &#187; Boston, MA: November 2004 (round-trip)<br />
* Corvallis, OR &#187; Salt Lake City, UT &#187; Houston, TX &#187; Atlanta, GA &#187; Pittsburgh, PA: September 2004<br />
* Corvallis, OR &#187; Boston, MA: 2001, 2002 (round-trip)<br />
* Corvallis, OR &#187; Vancouver, BC, Canada (round-trip)<br />
* Corvallis, OR &#187; Tijuana, Mexico: 7-Sep-1999 (round-trip)<br />
* Los Angeles, CA &#187; Corvallis, OR: January 1998<br />
* Houston, TX &#187; Milwaukee, WI &#187; Menominee, MI: May 1995 (round-trip)<br />
<br />
== Bus / Train / Ferry ==<br />
===Spain trip (2006)===<br />
* Monaco &#187; Cannes &#187; Marseille &#187; Montpellier St-Ro &#187; Barcelona; April 2006 (round-trip)<br />
** 24-Apr-06 18h35: |&rarr; Nice, France [SNCF train]<br />
** 24-Apr-06 19h00: Antibes, FR<br />
** 24-Apr-06 19h07: Cannes, FR<br />
** 24-Apr-06 19h30: B. sur-Mer, FR<br />
** 24-Apr-06 19h39: San Raphael-Valescure, FR<br />
** 24-Apr-06 20h14: Les Arcs-Drag., FR<br />
** 24-Apr-06 20h56: Toulon, FR<br />
** 24-Apr-06 21h35: Marseille, FR<br />
** 25-Apr-06 15h05: |&rarr; Marseille, FR<br />
** 25-Apr-06 16h16: Nîmes, FR<br />
** 25-Apr-06 17h21: Montpellier St-Ro, FR<br />
** 25-Apr-06 18h42: Béziers, FR<br />
** 25-Apr-06 19h35: Perpignan, FR<br />
** 25-Apr-06 20h15: Portbou, Spain (ES) [''border'']<br />
** 25-Apr-06 22h30: Barcelona, ES<br />
** 27-Apr-06 19h24: |&rarr; Barcelona, ES [Renfe train]<br />
** 27-Apr-06 22h05: Cerbere, FR [''border'']<br />
** 28-Apr-06 08h37: Nice, FR<br />
** 28-Apr-06 10h00: Monaco<br />
<br />
===Miscellaneous (Europe)===<br />
* Tallinn, Estonia &rarr; Helsinki, Finland: January 2020 (round-trip)<br />
* Lisbon, Portugal &rarr; Porto, Portugal: Nov-2016 (round-trip)<br />
* København, DK &#187; Berlin, D: 09-Apr-2006 [+Ferry]<br />
* Berlin, D &#187; København, DK: 08-Apr-2006 (15h15) [+Ferry]<br />
* Ljubljana, Slovenia &#187; Villach HBF, Austria: 18-Aug-1997<br />
* Stockholm C &#187; Oslo S: 15-Aug-1997 (SJ train)<br />
* Salzburg, Austria &#187; Ljubljana, Slovenia: 25-Aug-1997 (&#214;sterreichische Bundesbahnen train (&#214;BB))<br />
* Haslev, DK &#187; Næstved, DK: 24-Aug-1997 (DSB train)<br />
* København &#187; Stockholm C: 14-Aug-1997 (DSB train)<br />
* Oslo S &#187; Bergen: 16-Aug-1997<br />
* Næstved, DK &#187; Rødby Færge, DK: 24-Aug-1997<br />
* Salzburg HBF &#187; Villach HBF (&uuml;ber Schwarzach-St. veit Bad Gastein): 25-Aug-1997 (&#214;BB train)<br />
* Oslo S &#187; Trondheim: 18-Aug-1997<br />
* Grensen (Scandinavia): 16-Aug-1997<br />
* Abisko Turiststation - STF: 20-Aug-1997<br />
* Abisko Turiststation - STF: 21-Aug-1997<br />
* Germany: 24-Aug-1997 (DB train)<br />
* Stockholm S:T Eriksgatan: 15-Aug-1997<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: Jun-1997 (round-trip)<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: Mar-1997 (round-trip)<br />
* Ljubljana, Slovenia &rarr; Budapest, Hungary: (28-Nov-1997/30-Nov-1997) (round-trip)<br />
* Budapest, Hungary &rarr; Ljubljana, Slovenia: 8-Nov-1996<br />
* Budapest, Hungary &rarr; Slovakia: 18-Aug-1995 (round-trip)<br />
* Budapest, Hungary &rarr; Vienna, Austria: 9-Feb-1995 (round-trip)<br />
* Moscow, Russia &rarr; Warsaw, Poland: Sep-1994<br />
* Moscow, Russia &rarr; Brest, Belarus: Aug-1994 (round-trip)<br />
* Moscow, Russia &rarr; Minsk, Belarus: Jul-1994 (round-trip)<br />
* Warsaw, Poland &#187; Moscow, Russia: Jun-1994<br />
* Warsaw, Poland &rarr; Vilnius, Lithuania &rarr; Riga, Latvia: (12-Jan-1994/??-Jan-1994) (round-trip)<br />
<br />
===Miscellaneous (South America)===<br />
* Arequipa, Peru &rarr; Lima, Peru: 1992<br />
* Arequipa, Peru &rarr; Iquique, Chile: (17-Jul-1992/20-Jul-1992) (round-trip)<br />
* Lima, Peru &rarr; Arequipa, Peru: 1992<br />
* Lima, Peru &rarr; La Paz, Bolivia: (19-May-1991/6-Jun-1991) (round-trip)<br />
* Lima, Peru &rarr; Quito, Ecuador: (29-Nov-1990/11-Dec-1990) (round-trip)<br />
* Lima, Peru &rarr; Quito, Ecuador: (6-Jul-1990/20-Jul-1990) (round-trip)<br />
<br />
==Flights==<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2022 [RT]<br />
* Seattle, WA (SEA) ✈ Milwaukee, WI (MKE): August 2022 [RT]<br />
* Kiev, Ukraine (KBP) ✈ Frankfurt, Germany (FRA) ✈ Seattle, WA (SEA): December 2021<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD) ✈ Frankfurt, Germany (FRA) ✈ Kiev, Ukraine (KBP): December 2021<br />
* Seattle, WA (SEA) ✈ Houston, TX (IAH): November 2021 [RT]<br />
* Memphis, TN (MEM) ✈ Atlanta, GA (ATL) ✈ Seattle, WA (SEA): June 2021<br />
* Seattle, WA (SEA) ✈ Salt Lake City, UT (SLC) ✈ Memphis, TN (MEM): June 2021<br />
* Seattle, WA (SEA) ✈ Milwaukee, WI (MKE): May 2021 [RT]<br />
* Tallinn, Estonia (TLL) ✈ Stockholm, Sweden (ARN) ✈ Newark, New Jersey (EWR) ✈ Seattle, WA (SEA): January 2020<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD) ✈ København, DK (CPH) ✈ Helsinki, Finland (HEL) ✈ Tallinn, Estonia (TLL): December 2019<br />
* Seattle, WA (SEA) ✈ San Francisco, CA (SFO): October 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Miami, FL (MIA): September 2019 [RT]<br />
* Seattle, WA (SEA) ✈ San Francisco, CA (SFO): September 2019 [RT]<br />
* Seattle, WA (SEA) ✈ San Jose, CA (SJC): August 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Denver, CO (DEN): May 2019 [RT]<br />
* Seattle, WA (SEA) ✈ Charlotte, NC (CLT): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Santa Ana, CA (SNA): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Chicago, IL (ORD): October 2018 [RT]<br />
* Seattle, WA (SEA) ✈ San Jose, CA (SJC): September 2018 [RT]<br />
* Budapest, Hungary (BUD) ✈ Brussels, Belgium (BRU) ✈ Newark, New Jersey (EWR) ✈ Seattle, WA (SEA): July 2018<br />
* Seattle, WA (SEA) ✈ Toronto, Canada (YYZ) ✈ Budapest, Hungary (BUD): June 2018<br />
* Seattle, WA (SEA) ✈ Reno, NV (RNO): May 2018 [RT]<br />
* Seattle, WA (SEA) ✈ Reykjavík, Iceland (RKV): December 2017 [RT]<br />
* Seattle, WA (SEA) ✈ Kona, Hawaii (KOA): September 2017 [RT]<br />
* Seattle, WA (SEA) ✈ Salt Lake City, UT (SLC): August 2017 [RT]<br />
* Amsterdam, NL (AMS) ✈ Seattle, WA (SEA): November 2016<br />
* Lisbon, Portugal ✈ Amsterdam, NL (AMS): November 2016<br />
* Paris, FR (CDG) ✈ Lisbon, Portugal: November 2016<br />
* Seattle, WA (SEA) ✈ Paris, FR (CDG): November 2016<br />
* Seattle, WA (SEA) ✈ Portland, OR (PDX): November 2016 [RT]<br />
* Seattle, WA (SEA) ✈ Las Vegas, NV (LAS): June 2016 [RT]<br />
* Houston, TX (IAH) ✈ Seattle, WA (SEA): September 2015 [RT]<br />
* Houston, TX (IAH) ✈ San Francisco, CA (SFO): August 2015 [RT]<br />
* Houston, TX (IAH) ✈ Madison, WI (MSN): March 2015 [RT]<br />
* Houston, TX (IAH) ✈ Amsterdam, NL (AMS): March 2015 [RT]<br />
* Seattle, WA (SEA) ✈ Milwaukee (MKE): June 2011<br />
* Seattle, WA (SEA) ✈ Phoenix, AZ (PHX) ✈ Chicago, IL (ORD): October 2010 [RT]<br />
* Seattle, WA (SEA) ✈ Los Angeles, CA (LAX): December 2007 [RT]<br />
* København, DK (CPH) ✈ Seattle, WA (SEA): June 2006<br />
* Heathrow, UK ✈ København, DK (CPH): June 2006<br />
* Nice, FR ✈ Heathrow, UK: June 2006<br />
* København, DK (CPH) ✈ Nice, FR (NCE): February 2006<br />
* Washington Dulles ✈ København, DK: August 2005<br />
* Pittsburgh, PA (PIT) ✈ Washington Dulles: August 2005<br />
* Portland, OR (PDX) ✈ Pittsburgh, PA (PIT): Summer 2004 [RT]<br />
* Eugene, OR ✈ Houston, TX (IAH): February 2002 [RT]<br />
* Portland, OR (PDX) ✈ Boston, MA: December 2002 [RT]<br />
* Seattle, WA (SEA) ✈ Portland, OR (PDX): January 2000<br />
* Amsterdam, NL (AMS) ✈ Seattle, WA (SEA): January 2000<br />
* Dublin, Ireland ✈ Amsterdam, NL (AMS): January 2000<br />
* Amsterdam (AMS) ✈ Dublin, Ireland: December 1999<br />
* Seattle, WA (SEA) ✈ Amsterdam, NL (AMS): December 1999<br />
* Portland, OR (PDX) ✈ Seattle, WA (SEA): December 1999<br />
* Chicago (ORD) ✈ Los Angeles (LAX): December 1997<br />
* Green Bay, WI (GRB) ✈ Chicago (ORD): December 1997<br />
* Chicago (ORD) ✈ Green Bay, WI (GRB): December 1997<br />
* Rome, Italy (FCO) ✈ Chicago, IL (ORD): December 1997<br />
* Trieste, Italy (TRS) ✈ Rome, Italy (FCO): December 1997<br />
* Houston, TX (IAH) ✈ Budapest, Hungary (BUD): July 1996<br />
* Narita, Japan ✈ Los Angeles (LAX) ✈ Houston, TX: June 1996<br />
* Narita, Japan ✈ Los Angeles (LAX) ✈ Houston, TX: March 1996 [RT]<br />
* Narita, Japan ✈ Taipei, Taiwan: December 1995 [RT]<br />
* Los Angeles, CA (LAX) ✈ Narita, Japan: October 1995<br />
* Houston, TX (IAH) ✈ Los Angeles (LAX): October 1995<br />
* Budapest, Hungary (BUD) ✈ Houston, TX (IAH): September 1995<br />
* Budapest, Hungary (BUD) ✈ Houston, TX (IAH): May 1995 [RT]<br />
* Paris, FR (CDG) ✈ Vienna, Austria: September 1993<br />
* Quito, Ecuador ✈ Caracas, Venezuela (CCS) ✈ Paris, France: 1993<br />
* Lima, Peru ✈ Tumbes, Peru: December 1992<br />
* Boston, MA ✈ Miami, FL ✈ Lima, Peru: <br />
* Amsterdam, NL (AMS) ✈ Chicago, IL (ORD): <br />
* Boston, MA ✈ Amsterdam, NL (AMS):<br />
<br />
== Individual Places ==<br />
=== Ireland ===<br />
* Dublin<br />
** '''Dublin''' (Baile &Aacute;tha Cliath)<br />
* Kildare<br />
** Naas<br />
* Laois<br />
* Carlow<br />
** Carlow (Ceatharlach)<br />
** Royal Oak<br />
* Kilkenny<br />
** '''Kilkenny''' (Cill Chainnigh)<br />
** Callan<br />
* Tipperary<br />
** Glenbower<br />
** Clonmel (Cluain Meala)<br />
** Cahir<br />
** Burncourt<br />
* Cork<br />
** Fermoy<br />
** '''Cork''' (Corcaigh)<br />
** Fota<br />
** Cobh (An C&oacute;bh)<br />
** '''Blarney'''<br />
** Macroom<br />
** Ballyvourney<br />
* Kerry<br />
** ''Derrynasaggart Mts''<br />
** Poulgorm Br<br />
** '''Killarney''' (Cill Airne)<br />
** Farranfore<br />
* Limerick<br />
** Abbeyfeale<br />
** ''Mullaghareirk Mts''<br />
** Newcastle West<br />
** Croagh<br />
** '''Limerick''' (Luimneach)<br />
* Clare<br />
** Bunratty<br />
** Ennis (Inis)<br />
** Ennistymon<br />
** Liscannor<br />
** ''Cliffs of Moher''<br />
** Doolin<br />
** Lisdoonvarna<br />
** Ballyvaughan<br />
** Bealaclugga<br />
** Burren<br />
* Galway<br />
** Kinvarra<br />
** Ballinderreen<br />
** Oranmore<br />
** '''Galway''' (Gaillimh)<br />
** Claregalway<br />
** Tuam<br />
* Mayo<br />
** Claremorris<br />
** Cloonfallagh<br />
** Charlestown<br />
* Sligo<br />
** Curry<br />
** Tubbercurry<br />
** Collooney<br />
** '''Sligo''' (Sligeach)<br />
** ''Dartry Mts''<br />
* Leitrim<br />
* Donegal<br />
** Bundoran<br />
** Ballyshannon<br />
** Donegal (D&uacute;n na nGall)<br />
** Ballybofey<br />
** Clady<br />
* Tyrone<br />
** '''Strabane''' (Northern Ireland)<br />
* Londonderry<br />
** Derry (Londonderry)<br />
** Eglinton<br />
** Ballykelly<br />
** Limavady<br />
** Coleraine<br />
* Antrim<br />
** Derrykelghan<br />
** Moss-side<br />
** Ballycastle<br />
** ''Antrim Hills''<br />
** Ballintoy<br />
** ''Carrick-a-Rede Rope Bridge''<br />
** ''Giant's Causeway''<br />
** Craignamaddy<br />
** Ballymoney<br />
** Ballymena<br />
** Antrim<br />
** ''Lough Neagh'' (lake)<br />
** Dunadry<br />
** Newtownabbey<br />
** '''Belfast'''<br />
* Down<br />
** Lisburn<br />
** Banbridge<br />
* Armagh<br />
** Newry<br />
* Louth<br />
** Dundalk (D&uacute;n Dealgan)<br />
** Dunleen<br />
** Drogheda (Droichead &Aacute;tha)<br />
* Meath<br />
** Julianstown<br />
* Dublin<br />
** Balbriggan<br />
** Swords<br />
<br />
[[Category:World Travels]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Category:Books&diff=8248Category:Books2022-11-23T04:50:05Z<p>Christoph: /* Titles (completed) */</p>
<hr />
<div>My love of books runs deep. I try to read for at least an hour every day (books unrelated to my studies). This category will contain a list of the books I have read or [[Summer Reading List|am reading]].<br />
<br />
==Titles (completed)==<br />
''Note: This is a list of books I have read in their entirety. It is nowhere near complete, and it is in no particular order.''<br />
<br />
#'''''From Dawn to Decadence: 1500 to the Present: 500 Years of Western Cultural Life''''' &mdash; by Jacques Barzun<br />
#'''''The Invention of Science: The Scientific Revolution from 1500 to 1750''''' &mdash; by David Wootton<br />
#'''''Predictably Irrational: The Hidden Forces That Shape Our Decisions''''' &mdash; by Dan Ariely (2008)<br />
#'''''The Tyranny of Experts: Economists, Dictators, and the Forgotten Rights of the Poor''''' &mdash; by William Easterly<br />
#'''''The Origins of Political Order: From Prehuman Times to the French Revolution''''' &mdash; by Francis Fukuyama<br />
#'''''Political Order and Political Decay: From the Industrial Revolution to the Globalization of Democracy''''' &mdash; by Francis Fukuyama<br />
#'''''Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World''''' &mdash; by Bruce Schneier<br />
#'''''Superintelligence: Paths, Dangers, Strategies''''' &mdash; by Nick Bostrom<br />
#'''''Smashing Physics''''' &mdash; by Jon Butterworth<br />
#'''''The History of the Ancient World: From the Earliest Accounts to the Fall of Rome''''' &mdash; by Susan Wise Bauer<br />
#'''''The History of the Medieval World: From the Conversion of Constantine to the First Crusade''''' &mdash; by Susan Wise Bauer<br />
#'''''The History of the Renaissance World: From the Rediscovery of Aristotle to the Conquest of Constantinople''''' &mdash; by Susan Wise Bauer<br />
#'''''The Well Educated Mind: A Guide to the Classical Education You Never Had''''' &mdash; by Susan Wise Bauer<br />
#'''''The Story of Western Science: From the Writings of Aristotle to the Big Bang Theory''''' &mdash; by Susan Wise Bauer (2015)<br />
#'''''Countdown to Zero Day''''' &mdash; by Kim Zetter<br />
#'''''The Revenge of Geography''''' &mdash; by Robert D. Kaplan<br />
#'''''The Master of Disguise''''' &mdash; by Antonio J. Mendez<br />
#'''''To Explain the World: The Discovery of Modern Science''''' &mdash; by Steven Weinberg (2015)<br />
#'''''The Fall of the Roman Empire''''' &mdash; by Peter Heather<br />
#'''''The Shadow Factory''''' &mdash; by James Bamford<br />
#'''''Operation Shakespeare''''' &mdash; by John Shiffman<br />
#'''''No Place to Hide''''' &mdash; by Glenn Greenwald<br />
#'''''Neanderthal Man: In Search of Lost Genomes''''' &mdash; by Svante Pääbo (2014)<br />
#'''''Constantine the Emperor''''' &mdash; by David Potter<br />
#'''''A Troublesome Inheritance''''' &mdash; by Nicholas Wade<br />
#'''''The Selfish Gene''''' &mdash; by Richard Dawkins<br />
#'''''The 4-Hour Workweek: Escape 9-5, Live Anywhere, and Join the New Rich''''' &mdash; by [http://www.fourhourworkweek.com/blog/about/ Timothy Ferriss] (2007)<br />
#'''''Hackers: Heroes of the Computer Revolution''''' &mdash; by Steven Levy<br />
#'''''Wealth, Poverty, and Politics: An International Perspective''''' &mdash; Thomas Sowell<br />
#'''''The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win''''' &mdash; by Gene Kim, Kevin Behr, George Spafford<br />
#'''''Paper: Paging Through History''''' &mdash; by Mark Kurlansky<br />
#'''''Salt: A World History''''' &mdash; by Mark Kurlansky<br />
#'''''Guns, Germs, and Steel: The Fates of Human Societies''''' &mdash; by Jared Diamond (1997)<br />
#'''''Collapse: How Societies Choose to Fail or Succeed''''' &mdash; by Jared Diamond (2005)<br />
#'''''The Better Angels of Our Nature: Why Violence Has Declined''''' &mdash; by Steven Pinker<br />
#'''''How to Win Friends & Influence People''''' &mdash; by Dale Carnegie (1936)<br />
#'''''[[The True Believer: Thoughts on the Nature of Mass Movements]]''''' &mdash; Eric Hoffer (1951)<br />
#'''''An Economic History of the World since 1400''''' &mdash; by Professor Donald J. Harreld<br />
#'''''The End of the Cold War 1985-1991''''' &mdash; by Robert Service<br />
#'''''Iron Kingdom: The Rise and Downfall of Prussia, 1600-1947''''' &mdash; by Christopher Clark<br />
#'''''[https://www.goodreads.com/book/show/12158480-why-nations-fail Why Nations Fail: The Origins of Power, Prosperity, and Poverty]''''' &mdash; by Daron Acemoğlu and James A. Robinson (2012)<br />
#'''''The Six Wives of Henry VIII''''' &mdash; by Alison Weir (1991)<br />
#'''''The Demon-Haunted World: Science as a Candle in the Dark''''' &mdash; by Carl Sagan (1996)<br />
#'''''Dark Territory: The Secret History of Cyber War''''' &mdash; by Fred Kaplan (2016)<br />
#'''''A Brief History of Britain 1066-1485''''' &mdash; by Nicholas Vincent (2012)<br />
#'''''The History of Science: 1700-1900''''' &mdash; by Professor Frederick Gregory (2003)<br />
#'''''Heart of Europe: A History of the Holy Roman Empire''''' &mdash; by Peter H. Wilson (2016)<br />
#'''''[[The Story of Civilization]] - Volume 2: The Life of Greece''''' &mdash; by Will Durant (1939)<br />
#'''''The Story of Civilization - Volume 3: Caesar and Christ''''' &mdash; by Will Durant (1944)<br />
#'''''The Story of Civilization - Volume 4: The Age of Faith''''' &mdash; by Will Durant (1950)<br />
#'''''Red Sparrow''''' &mdash; by Jason Matthews (2013)<br />
#'''''Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time''''' &mdash; by Dava Sobel (1995)<br />
#'''''The Medici: Power, Money, and Ambition in the Italian Renaissance''''' &mdash; by Paul Strathern (2016)<br />
#'''''The Venetians: A New History: From Marco Polo to Casanova''''' &mdash; by Paul Strathern (2013)<br />
#'''''The Rise of Athens: The Story of the World's Greatest Civilization''''' &mdash; by Anthony Everitt (2016)<br />
#'''''Red Mars''''' &mdash; by Kim Stanley Robinson (1993)<br />
#'''''The Clockwork Universe: Isaac Newton, The Royal Society, and the Birth of the Modern World''''' &mdash; by Edward Dolnick (2011)<br />
#'''''The Skeptics' Guide to the Universe: How to Know What's Really Real in a World Increasingly Full of Fake''''' &mdash; by Steven Novella (2018)<br />
#'''''New Thinking: From Einstein to Artificial Intelligence, the Science and Technology That Transformed Our World''''' &mdash; by Dagogo Altraide (2019)<br />
#'''''Flashpoints: The Emerging Crisis in Europe''''' &mdash; by George Friedman (2015)<br />
#'''''The War on Science: Who's Waging It, Why It Matters, What We Can Do About It''''' &mdash; by Shawn Lawrence Otto (2016)<br />
#'''''Permanent Record''''' &mdash; by Edward Snowden (2019)<br />
#'''''Mythos: The Greek Myths Reimagined''''' &mdash; by Stephen Fry (2019)<br />
#'''''Heroes: The Greek Myths Reimagined''''' &mdash; by Stephen Fry (2020)<br />
#'''''Troy: The Greek Myths Reimagined''''' &mdash; by Stephen Fry (2021)<br />
#'''''I Contain Multitudes: The Microbes Within Us and a Grander View of Life''''' &mdash; by Ed Yong (2016)<br />
#'''''How to Read a Book''''' &mdash; by Mortimer J. Adler and Charles Van Doren (1940)<br />
#'''''The Order: A Novel''''' &mdash; by Daniel Silva (2020)<br />
#'''''How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need''''' &mdash; by Bill Gates (2020)<br />
#'''''The Horse, the Wheel, and Language: How Bronze-Age Riders from the Eurasian Steppes Shaped the Modern World''''' &mdash; by David W. Anthony (2007)<br />
#'''''The Map of Knowledge: A Thousand-Year History of How Classical Ideas Were Lost and Found''''' &mdash; by Violet Moller (2019)<br />
#'''''Sapiens: A Brief History of Humankind''''' &mdash; by Yuval Noah Harari (2015)<br />
#'''''The Ascent of Money: A Financial History of the World''''' &mdash; by Niall Ferguson (2008)<br />
#'''''Civilization: The West and the Rest''''' &mdash; by Niall Ferguson (2011)<br />
#'''''Empire: How Britain Made the Modern World''''' &mdash; by Niall Ferguson (2017)<br />
#'''''The Square and the Tower: Networks and Power, from the Freemasons to Facebook''''' &mdash; by Niall Ferguson (2018)<br />
#'''''The House of Rothschild, Volume 1: Money's Prophets: 1798-1848''''' &mdash; by Niall Ferguson (2019)<br />
#'''''Doom: The Politics of Catastrophe''''' &mdash; by Niall Ferguson (2021)<br />
#'''''The Accidental Superpower: The Next Generation of American Preeminence and the Coming Global Disorder''''' &mdash; by Peter Zeihan (2014)<br />
#'''''The Strange Death of Europe: Immigration, Identity, Islam''''' &mdash; by Douglas Murray (2017)<br />
#'''''The War on the West''''' &mdash; by Douglas Murray (2022)<br />
#'''''12 Rules for Life: An Antidote to Chaos''''' &mdash; by Jordan B. Peterson (2018)<br />
#'''''The Historian''''' &mdash; by Elizabeth Kostova (2009)<br />
#'''''The Battle of Bretton Woods: John Maynard Keynes, Harry Dexter White, and the Making of a New World Order''''' &mdash; by Benn Steil (2013)<br />
<br />
==Titles (textbooks)==<br />
''Note: These are some of the textbooks I not only read in their entirety whilst in university, but also studied thoroughly. This is very much an incomplete list.''<br />
<br />
#'''''X-ray Structure Determination''''' &mdash; by Stout and Jensen<br />
#'''''Inferring Phylogenies''''' &mdash; by Joseph Felsenstein, Sinauer Associates, Inc. (2003)<br />
#'''''A Biologist's Guide to Analysis of DNA Microarray Data'''''<br />
#'''''Molecular Cell Biology''''' &mdash; by Scott MP, Matsudaira P, Lodish H, Darnell J, Zipursky L, Kaiser CA, Berk A, and Krieger M. W. H. Freeman, 5th Edition (2003)<br />
#'''''Guide to Analysis of DNA Microarray Data''''' &mdash; by Knudsen S, 2nd Edition (2004)<br />
#'''''General Chemistry''''' &mdash; by Darrell D. Ebbing and Steven D. Gammon, Houghton Mifflin Company, Boston, 6th Edition (1999)<br />
#'''''Organic Chemistry''''' &mdash; by Paula Yurkanis Bruice, Prentice Hall, New Jersey, 3rd Edition (2001)<br />
#'''''Principles and Techniques for an Integrated Chemistry Laboratory''''' &mdash; by David A. Aikens, ''et al.'', Waveland Press, Inc., Prospect Heights (1984)<br />
#'''''Physical Chemistry''''' &mdash; by Peter Atkins and Julio de Paula, W.H. Freeman and Company, New York, 7th Edition (2002)<br />
#'''''Biochemistry''''' &mdash; by Christopher K. Mathews, K. E. van Holde, and Kevin G. Ahern, Addison Wesley Longman, San Francisco, 3rd Edition (2000)<br />
#'''''Biology''''' &mdash; by Neil A. Campbell, The Benjamin/Cummings Publishing Company, Inc., Redwood City, 5th Edition (1999)<br />
#'''''Essential Cell Biology''''' &mdash; by Bruce Alberts, ''et al.'', Garland Publishing, Inc. New York (1998)<br />
#'''''Genetics: From Genes to Genomes''''' &mdash; by Leland H. Hartwell, ''et al.'', McGraw-Hill Companies, Inc. Boston (2000)<br />
#'''''Evolution: An Introduction''''' &mdash; by Stephen C. Stearns and Rolf F. Hoekstra, Oxford University Press, Oxford (2000)<br />
#'''''Physics for Scientists and Engineers''''' &mdash; by Raymond A. Serway and Robert J. Beichner, Saunders College Publishing, Philadelphia, 5th Edition (2000)<br />
#'''''Physical Biochemistry''''' &mdash; by Kensal E. van Holde, W. Curtis Johnson, and P. Shing Ho, Prentice Hall, New Jersey (1998)<br />
#'''''Object-Oriented Software Development Using Java''''' &mdash; by Xiaoping Jia, Addison-Wesley, 2nd Edition<br />
#'''''Calculus''''' &mdash; by James Stewart<br />
#'''''Calculus: Early Transcendentals''''' &mdash; by James Stewart<br />
#'''''Single Variable Calculus: Early Transcendentals''''' &mdash; by James Stewart<br />
<br />
==Titles (uncategorized)==<br />
''Note: These are some of my favourite books that I have read. I have read others, but these stood out to me. This does not mean, in any way, that I necessarily agree with everything these books have to say; they just interested me.''<br />
#'''''The History of the Decline and Fall of the Roman Empire''''' &mdash; by Edward Gibbon (1776-1788) [http://www.gutenberg.org/browse/authors/g#a375][http://en.wikipedia.org/wiki/Outline_of_The_History_of_the_Decline_and_Fall_of_the_Roman_Empire]<br />
#'''''The House of Intellect''''' &mdash; by Jacques Barzun<br />
#'''''[http://librivox.org/thus-spake-zarathustra-by-friedrich-nietzsche/ Also sprach Zarathustra]''''' ("Thus Spoke Zarathustra") &mdash; by Friedrich Nietzsche (1883-5)<br />
#'''''Jenseits von Gut und Böse''''' ("Beyond Good and Evil") &mdash; by Friedrich Nietzsche (1886)<br />
#'''''Zur Genealogie der Moral''''' ("On the Genealogy of Morals") &mdash; by Friedrich Nietzsche (1887)<br />
#'''''Götzen-Dämmerung''''' ("Twilight of the Idols") &mdash; by Friedrich Nietzsche (1888)<br />
#'''''[http://librivox.org/the-antichrist-by-nietzsche/ Der Antichrist]''''' ("The Antichrist") &mdash; by Friedrich Nietzsche (1888)<br />
#'''''Ecce Homo''''' &mdash; by Friedrich Nietzsche (1888)<br />
#'''''Vom Nutzen und Nachtheil der Historie für das Leben '''''("On the Use and Abuse of History for Life") &mdash; by Friedrich Nietzsche (1874)<br />
#'''''Die Traumdeutung''''' ("The Interpretation of Dreams") &mdash; by Sigmund Freud (1899)<br />
#'''''Das Ich und das Es''''' ("The Ego and the Id") &mdash; by Sigmund Freud (1923)<br />
#'''''Die Zukunft einer Illusion''''' ("The Future of an Illusion") &mdash; by Sigmund Freud (1927) <br />
#'''''Das Unbehagen in der Kultur''''' ("Civilization and Its Discontents") &mdash; by Sigmund Freud (1929)<br />
#'''''[[:wikipedia:A History of the English-Speaking Peoples|A History of the English-Speaking Peoples]]''''' &mdash; by Winston Churchill (1956–58)<br />
#'''''The Notebooks of Don Rigoberto''''' &mdash; by Mario Vargas Llosa<br />
#'''''Die Waffen nieder!''''' ("Lay Down Your Arms!") &mdash; Baroness Bertha von Suttner (1889)<br />
#'''''Europe's Optical Illusion''''' (also: "The Great Illusion") &mdash; Sir Norman Angell (1909)<br />
#'''''Night''''' &mdash; by Elie Wiesel (1960)<br />
#'''''The End of Faith: Religion, Terror, and the Future of Reason''''' &mdash; by Sam Harris<br />
#'''''The Lexus and the Olive Tree: Understanding Globalization''''' &mdash; by Thomas L. Friedman<br />
#'''''The World Is Flat: A Brief History of the Twenty-first Century''''' &mdash; Thomas L. Friedman<br />
#'''''The Case For Goliath: How America Acts As The World's Government in the Twenty-first Century''''' &mdash; by Michael Mandelbaum<br />
#'''''Caesar's Commentaries: On the Gallic War And on the Civil War''''' &mdash; by Julius Caesar<br />
#'''''Cem Escovadas Antes de Ir para Cama''''' ("One Hundred Strokes of the Brush before Bed") &mdash; by Melissa Panarello<br />
#'''''Coryat's Crudities: Hastily gobled up in Five Moneth's Travels''''' &mdash; by Thomas Coryat (1611)<br />
#'''''Italian Hours''''' &mdash; by Henry James (1909)<br />
#'''''Italienische Reise''''' ("Italian Journey") &mdash; by Johann Wolfgang von Goethe (1816/1817).<br />
#'''''Diarios de motocicleta''''' ("The Motorcycle Diaries") &mdash; by Che Guevara (1951).<br />
#'''''The Prince of Tides''''' &mdash; by Pat Conroy (1986).<br />
#'''''Il Nome Della Rosa''''' ("The Name of the Rose") &mdash; by Umberto Eco (1980).<br />
#'''''Il Pendolo di Foucault''''' ("Foucault's Pendulum") &mdash; by Umberto Eco (1988).<br />
#'''''The Book of the Courtier''''' ("Il Cortegiano") &mdash; by Baldassare Castiglione (1528) [http://en.wikipedia.org/wiki/Sprezzatura].<br />
#'''''One Hundred Years of Solitude''''' &mdash; by Gabriel Garcia Marquez<br />
#'''''The Unbearable Lightness of Being: A Novel''''' &mdash; by Milan Kundera<br />
#'''''The Book of Laughter and Forgetting''''' &mdash; by Milan Kundera<br />
#'''''Masters of Rome''''' (series) &mdash; by Colleen McCullough<br />
#'''''The Wishing Game''''' &mdash; by Patrick Redmond<br />
#'''''The Measure Of All Things: The Seven-Year Odyssey and Hidden Error That Transformed the World''''' &mdash; by Ken Alder (2002)<br />
#'''''De la démocratie en Amérique''''' ("On Democracy in America") &mdash; by Alexis de Tocqueville (1835)<br />
#'''''The Anatomy of Revolution''''' &mdash; by Crane Brinton (1938)<br />
#'''''God and Gold: Britain, America, and the Making of the Modern World''''' &mdash; by Walter Russell Mead (2007)<br />
#'''''Black Mass: Apocalyptic Religion and the Death of Utopia''''' &mdash; by John Gray (2007)<br />
#'''''The Grand Chessboard: American Primacy and Its Geostrategic Imperatives''''' &mdash; by Zbigniew Brzezinski (1998)<br />
#'''''Kim''''' &mdash; by Rudyard Kipling (1901)<br />
#'''''The Lotus and the Wind''''' &mdash; by John Masters<br />
<br />
==Authors (uncategorized)==<br />
*[[wikipedia:Aldous Huxley|Aldous Huxley]] &mdash; [[Wikiquote:Aldous Huxley]]<br />
*[[wikipedia:Edgar Allan Poe|Edgar Allan Poe]] &mdash; [[Wikiquote:Edgar Allan Poe]]<br />
*[[wikipedia:Oscar Wilde|Oscar Wilde]] &mdash; [[Wikiquote:Oscar Wilde]]<br />
*[[wikipedia:George Orwell|George Orwell]] &mdash; [[Wikiquote:George Orwell]]<br />
*[[wikipedia:William Shakespeare|William Shakespeare]] &mdash; [[Wikiquote:William Shakespeare]]<br />
*[[wikipedia:Thomas Jefferson|Thomas Jefferson]] &mdash; [[Wikiquote:Thomas Jefferson]]<br />
*[[wikipedia:Mark Antony|Mark Antony]] &mdash; [[Wikiquote:Mark Antony]]<br />
*[[wikipedia:Jane Austen|Jane Austen]] &mdash; [[Wikiquote:Jane Austen]] ([http://en.wikipedia.org/wiki/Free_indirect_speech])<br />
*[[wikipedia:Albert Einstein|Albert Einstein]] &mdash; [[Wikiquote:Albert Einstein]]<br />
*[[Friedrich Nietzsche]] &mdash; [[Wikiquote:Friedrich Nietzsche]]<br />
*[[wikipedia:Sigmund Freud|Sigmund Freud]] &mdash; [[Wikiquote:Sigmund Freud]]<br />
*[[wikipedia:Plato|Plato]] &mdash; [[Wikiquote:Plato]]<br />
*[[wikipedia:Aristotle|Aristotle]] &mdash; [[Wikiquote:Aristotle]]<br />
*[[wikipedia:Baruch Spinoza|Baruch Spinoza]] (Benedictus de Spinoza; 1632–1677) &mdash; [[Wikiquote:Baruch Spinoza]]<br />
*[[wikipedia:Georg Wilhelm Friedrich Hegel|Georg Wilhelm Friedrich Hegel]] &mdash; [[Wikiquote:Georg Wilhelm Friedrich Hegel]]<br />
*[[wikipedia:Niccolò Machiavelli|Niccolò Machiavelli]] &mdash; [[Wikiquote:Niccolò Machiavelli]]<br />
*[[wikipedia:Immanuel Kant|Immanuel Kant]] &mdash; [[Wikiquote:Immanuel Kant]]<br />
*[[wikipedia:Lord Byron|Lord Byron]] (George Gordon Byron, 6th Baron Byron) &mdash; [[Wikiquote:Lord Byron]]<br />
*[[wikipedia:Mary Shelley|Mary Shelley]] &mdash; [[Wikiquote:Mary Shelley]]<br />
*[[wikipedia:Percy Bysshe Shelley|Percy Bysshe Shelley]] &mdash; [[Wikiquote:Percy Bysshe Shelley]]<br />
*[[wikipedia:Christopher Marlowe|Christopher Marlowe]] (1564–1593): English dramatist and poet. &mdash; [[Wikiquote:Christopher Marlowe]]<br />
*[[wikipedia:Francis Bacon|Francis Bacon]] &mdash; [[Wikiquote:Francis Bacon]]<br />
*[[wikipedia:Eric Hoffer|Eric Hoffer]] &mdash; [[Wikiquote:Eric Hoffer]]<br />
*[[wikipedia:Milton Friedman|Milton Friedman]] &mdash; [[Wikiquote:Milton Friedman]]<br />
*[[wikipedia:Roger Bacon|Roger Bacon]] (c. 1214-1294) &mdash; [[wikiquote:Roger Bacon]]<br />
*[[wikipedia:Charles Baudelaire|Charles Baudelaire]] (1821-1867) &mdash; [[wikiquote:Charles Baudelaire]]<br />
<br />
=== Authors (I have not read yet) ===<br />
* [[wikipedia:Simone De Beauvoir|Simone De Beauvoir]] (1908–1986): French existentialist, writer, and social essayist.<br />
* [[wikipedia:Jeremy Bentham|Jeremy Bentham]] (1748–1832): British jurist, eccentric, philosopher and social reformer, founder of utilitarianism. He had [[wikipedia:John Stuart Mill|John Stuart Mill]] as his disciple. (Quoted as saying "The spirit of dogmatic theology poisons anything it touches". ~ [http://www.positiveatheism.org/hist/quotes/quote-b0.htm].)<br />
* [[wikipedia:Albert Camus|Albert Camus]] (1913–1960): French philosopher and novelist, a luminary of existentialism.<br />
* [[wikipedia:Auguste Comte|Auguste Comte]] (1798–1857): French philosopher, considered the father of sociology. (Quoted as saying "The heavens declare the glory of Kepler and Newton". ~ [http://www.positiveatheism.org/hist/quotes/quote-c3.htm].)<br />
* [[wikipedia:André Comte-Sponville|André Comte-Sponville]] (1952–): French materialist philosopher.<br />
* [[wikipedia:Baron d'Holbach|Paul Henry Thiry, Baron d'Holbach]] (1723–1789): French homme de lettres, philosopher and encyclopedist, member of the philosophical movement of French materialism, attacked Christianity and religion as counter to the moral advancement of humanity.<br />
* [[wikipedia:Marquis de Condorcet|Marquis de Condorcet]] (1743–1794): French philosopher and mathematician of the Enlightenment.<br />
* [[wikipedia:Daniel Dennett|Daniel Dennett]] (1942–): American philosopher, leading figure in evolutionary biology and cognitive science, well-known for his book ''[[wikipedia:Darwin's Dangerous Idea|Darwin's Dangerous Idea]]''.<br />
* [[wikipedia:Denis Diderot|Denis Diderot]] (1713–1784): French philosopher, author, editor of the first encyclopedia. Known for the quote "Man will never be free until the last king is strangled with the entrails of the last priest".<br />
* [[wikipedia:Ludwig Andreas Feuerbach|Ludwig Andreas Feuerbach]] (1804–1872): German philosopher, postulated that God is merely a projection by humans of their own best qualities.<br />
* [[wikipedia:Paul Kurtz|Paul Kurtz]] (1926–2012): American philosopher, skeptic, founder of Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP) and the Council for Secular Humanism.<br />
* [[wikipedia:Karl Popper|Sir Karl Popper]] (1902–1994): Austrian-born British philosopher of science, who claimed that empirical falsifiability should be the criterion for distinguishing scientific theory from non-science.<br />
* [[wikipedia:Richard Rorty|Richard Rorty]] (1931–2007): American philosopher, whose ideas combine pragmatism with a [[wikipedia:Ludwig Wittgenstein|Wittgensteinian]] ontology that declares that meaning is a social-linguistic product of dialogue. He actually rejects the theist/atheist dichotomy and prefers to call himself "anti-clerical".<br />
* [[wikipedia:Bertrand Russell|Bertrand Russell, 3rd Earl Russell]], (1872–1970): British mathematician, philosopher, logician, political liberal, activist, popularizer of philosophy, and 1950 Nobel Laureate in Literature. On the issue of atheism/agnosticism, he wrote the essay "[[wikipedia:Why I Am Not a Christian|Why I Am Not a Christian]]".<br />
* [[wikipedia:Jean-Paul Sartre|Jean-Paul Sartre]] (1905–1980): French existentialist philosopher, dramatist, novelist and critic.<br />
* [[wikipedia:Peter Singer|Peter Singer]] (1946–): Australian philosopher and teacher, working on practical ethics from a utilitarian perspective, controversial for his opinions on abortion and euthanasia.<br />
* [[wikipedia:James Lovelock|James Lovelock]] (1919–2022) [[wikiquote:James Lovelock]]<br />
<br />
==External links==<br />
*[http://www.gutenberg.org/browse/scores/top Top 100 - Project Gutenberg]<br />
*[http://www.randomhouse.com/modernlibrary/100talkingpoints.html The Modern Library - 100 Best - Talking Points]<br />
*[http://www.randomhouse.com/modernlibrary/100bestnonfiction.html The Modern Library - 100 Best - Nonfiction]<br />
*[http://www.randomhouse.com/modernlibrary/100bestnovels.html The Modern Library - 100 Best - Novels]<br />
*[http://www.nytimes.com/pages/books/bestseller/ NY Times Best-Seller Lists]<br />
*[http://www.bookmooch.com/ BookMooch] &mdash; a free book trade and exchange community<br />
*[http://www.bookcrossing.com/ BookCrossing] &mdash; a free book club<br />
*[http://www.nndb.com/ Notable Names Database] (NNDB) &mdash; an online database of biographical details of notable people.<br />
*[http://wikisummaries.org/Main_Page WikiSummaries] &mdash; provides free book summaries<br />
*[http://www.fullbooks.com/ fullbooks.com]<br />
*[http://www.themodernword.com/eco/eco_writings.html Umberto Eco: His Own Writings]<br />
*[http://www.ulib.org/ UDL: Universal Digital Library] &mdash; has over 1.5 million books digitised.<br />
*[[wikipedia:List of historical novels]]<br />
<br />
{{stub}}</div>Christophhttp://wiki.christophchamp.com/index.php?title=Kubernetes&diff=8247Kubernetes2022-11-14T19:21:52Z<p>Christoph: /* Release history */</p>
<hr />
<div>'''Kubernetes''' (also known by its numeronym '''k8s''') is an open source container cluster manager. Kubernetes' primary goal is to provide a platform for automating deployment, scaling, and operations of application containers across a cluster of hosts. Kubernetes was released by Google in July 2015.<br />
<br />
* Get the latest stable release of k8s with:<br />
$ curl -sSL <nowiki>https://dl.k8s.io/release/stable.txt</nowiki><br />
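<br />
* For example, that value can be embedded in a download URL to fetch the matching <code>kubectl</code> binary (Linux amd64 shown; adjust the OS/architecture path segments as needed):<br />
 $ curl -LO "<nowiki>https://dl.k8s.io/release/$(curl -sSL https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl</nowiki>"<br />
 $ chmod +x kubectl && sudo mv kubectl /usr/local/bin/<br />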
<br />
==Release history==<br />
<br />
NOTE: There is no such thing as Kubernetes Long-Term-Support (LTS). There is a new "minor" release ''roughly'' every 3 months (note: changed to ''roughly'' every 4 months in 2020).<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="3" bgcolor="#EFEFEF" | '''Kubernetes release history'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Release<br />
!Date<br />
!Cadence (days)<br />
|- align="left"<br />
|1.0 || 2015-07-10 ||align="right"|<br />
|--bgcolor="#eeeeee"<br />
|1.1 || 2015-11-09 ||align="right"| 122<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.2.md 1.2] || 2016-03-16 ||align="right"| 128<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.3.md 1.3] || 2016-07-01 ||align="right"| 107<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.4.md 1.4] || 2016-09-26 ||align="right"| 87<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.5.md 1.5] || 2016-12-12 ||align="right"| 77<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.6.md 1.6] || 2017-03-28 ||align="right"| 106<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.7.md 1.7] || 2017-06-30 ||align="right"| 94<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.8.md 1.8] || 2017-09-28 ||align="right"| 90<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.9.md 1.9] || 2017-12-15 ||align="right"| 78<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.10.md 1.10] || 2018-03-26 ||align="right"| 101<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.11.md 1.11] || 2018-06-27 ||align="right"| 93<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.12.md 1.12] || 2018-09-27 ||align="right"| 92<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.13.md 1.13] || 2018-12-03 ||align="right"| 67<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.14.md 1.14] || 2019-03-25 ||align="right"| 112<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md 1.15] || 2019-06-17 ||align="right"| 84<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md 1.16] || 2019-09-18 ||align="right"| 93<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md 1.17] || 2019-12-09 ||align="right"| 82<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md 1.18] || 2020-03-25 ||align="right"| 107<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md 1.19] || 2020-08-26 ||align="right"| 154<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md 1.20] || 2020-12-08 ||align="right"| 104<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md 1.21] || 2021-04-08 ||align="right"| 121<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md 1.22] || 2021-08-04 ||align="right"| 118<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md 1.23] || 2021-12-07 ||align="right"| 125<br />
|- align="left"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md 1.24] || 2022-05-03 ||align="right"| 147<br />
|--bgcolor="#eeeeee"<br />
|[https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md 1.25] || 2022-08-23 ||align="right"| 112<br />
|}<br />
</div><br />
<br clear="all"/><br />
See: [https://gravitational.com/blog/kubernetes-release-cycle The full-time job of keeping up with Kubernetes]<br />
<br />
==Providers and installers==<br />
<br />
* Vanilla Kubernetes<br />
* AWS:<br />
** Managed: EKS<br />
** Kops<br />
** Kube-AWS<br />
** Kismatic<br />
** Kubicorn<br />
** Stack Point Cloud<br />
* Google:<br />
** Managed: GKE<br />
** [[Kubernetes/the-hard-way|Kubernetes the Hard Way]]<br />
** Stack Point Cloud<br />
** Typhoon<br />
* Azure AKS<br />
* Ubuntu UKS<br />
* VMware PKS<br />
* [[Rancher|Rancher RKE]]<br />
* CoreOS Tectonic<br />
<br />
==Design overview==<br />
Kubernetes is built through the definition of a set of components (building blocks or "primitives") which, when used collectively, provide a method for the deployment, maintenance, and scalability of container-based application clusters.<br />
<br />
These "primitives" are designed to be ''loosely coupled'' (i.e., where little to no knowledge of the other component definitions is needed to use) as well as easily extensible through an API. Both the internal components of Kubernetes as well as the extensions and containers make use of this API.<br />
<br />
==Components==<br />
The building blocks of Kubernetes are the following (note that these are also referred to as Kubernetes "Objects" or "API Primitives"):<br />
<br />
;Cluster : A cluster is a set of machines (physical or virtual) on which your applications are managed and run. All machines are managed as a cluster (or set of clusters, depending on the topology used).<br />
;Nodes (minions) : You can think of these as "container clients". These are the individual hosts (physical or virtual) that Docker is installed on and that host the various containers within your managed cluster.<br />
: Each node will run etcd (a key-value store, used by Kubernetes for exchanging messages and reporting on cluster status) as well as the Kubernetes Proxy.<br />
;Pods : A pod consists of one or more containers. Those containers are guaranteed (by the cluster controller) to be located on the same host machine (aka "co-located") in order to facilitate sharing of resources. For example, it makes sense to have database processes and data containers as close as possible. In fact, they really should be in the same pod.<br />
: Pods "work together", as in a multi-tiered application configuration. Each set of pods that define and implement a service (e.g., MySQL or Apache) are defined by the label selector (see below).<br />
: Pods are assigned unique IPs within each cluster. These allow an application to use ports without having to worry about conflicting port utilization.<br />
: Pods can contain definitions of disk volumes or shares, and then provide access from those to all the members (containers) within the pod.<br />
: Finally, pod management is done through the API or delegated to a controller.<br />
;Labels : Clients can attach key-value pairs to any object in the system (e.g., Pods or Nodes). These become the labels that identify them in the configuration and management of them. The key-value pairs can be used to filter, organize, and perform mass operations on a set of resources.<br />
;Selectors : Label Selectors represent queries that are made against those labels. They resolve to the corresponding matching objects. A Selector expression matches labels to filter certain resources. For example, you may want to search for all pods that belong to a certain service, or find all containers that have a specific tier Label value as "database". Labels and Selectors are inherently two sides of the same coin. You can use Labels to classify resources and use Selectors to find them and use them for certain actions.<br />
: These two items are the primary way that grouping is done in Kubernetes and determine which components that a given operation applies to when indicated.<br />
;Controllers : These are used in the management of your cluster. Controllers are the mechanism by which your desired configuration state is enforced.<br />
: Controllers manage a set of pods and, depending on the desired configuration state, may engage other controllers to handle replication and scaling (Replication Controller) of X number of containers and pods across the cluster. It is also responsible for replacing any container in a pod that fails (based on the desired state of the cluster).<br />
: Replication Controllers (RC) are a subset of Controllers and are an abstraction used to manage pod lifecycles. One of the key uses of RCs is to maintain a certain number of running Pods (e.g., for scaling or ensuring that at least one Pod is running at all times, etc.). It is considered a "best practice" to use RCs to define Pod lifecycles, rather than creating Pods directly.<br />
: Other controllers that can be engaged include a ''DaemonSet Controller'' (enforces a 1-to-1 ratio of pods to Worker Nodes) and a ''Job Controller'' (that runs pods to "completion", such as in batch jobs).<br />
: Each set of pods any controller manages is determined by the label selectors that are part of its definition.<br />
;Replica Sets: These define how many replicas of each Pod will be running. They also monitor and ensure the required number of Pods are running, replacing Pods that die. Replica Sets can act as replacements for Replication Controllers.<br />
;Services : A Service is an abstraction on top of Pods, which provides a single IP address and DNS name by which the Pods can be accessed. This load balancing configuration is much easier to manage and helps scale Pods seamlessly.<br />
: Kubernetes can then provide service discovery and handle routing with the static IP for each pod as well as load balancing (round-robin based) connections to that service among the pods that match the label selector indicated.<br />
: Although, by default, a service is only exposed inside a cluster, it can also be exposed outside the cluster, as needed. (A minimal example is sketched at the end of this list.)<br />
;Volumes : A Volume is a directory with data, which is accessible to a container. The Volume co-terminates with the Pod that encloses it.<br />
;Name : A name by which a resource is identified.<br />
;Namespace : A Namespace provides additional qualification to a resource name. This is especially helpful when multiple teams/projects are using the same cluster and there is a potential for name collision. You can think of a Namespace as a virtual wall between multiple clusters.<br />
;Annotations : An Annotation is a Label, but with much larger data capacity. Typically, this data is not readable by humans and is not easy to filter through. Annotation is useful only for storing data that may not be searched, but is required by the resource (e.g., storing strong keys, etc.).<br />
;Control Plane : The set of components (e.g., the API server, scheduler, controller manager, and etcd) that make global decisions about the cluster and enforce the desired state.<br />
;API : The interface through which both internal Kubernetes components and external clients inspect and manipulate cluster state.<br />
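<br />
As a minimal illustration of the ''Service'' abstraction mentioned above (the deployment name <code>my-app</code> and the ports are hypothetical), the following gives a set of Pods a single stable IP address and DNS name inside the cluster:<br />
 $ kubectl expose deployment my-app --port=80 --target-port=8080<br />
 $ kubectl get service my-app<br />
Adding <code>--type=NodePort</code> (or <code>--type=LoadBalancer</code>, where supported) to the <code>expose</code> command is one way to make the Service reachable from outside the cluster.<br />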
<br />
===Pods===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/ Pod]'' is the smallest and simplest Kubernetes object. It is the unit of deployment in Kubernetes, which represents a single instance of the application. A Pod is a logical collection of one or more containers, which:<br />
<br />
* are scheduled together on the same host;<br />
* share the same network namespace; and<br />
* mount the same external storage (Volumes).<br />
<br />
Pods are ephemeral in nature, and they do not have the capability to self-heal by themselves. That is why we use them with controllers, which can handle a Pod's replication, fault tolerance, self-healing, etc. Examples of controllers are ''Deployments'', ''ReplicaSets'', ''ReplicationControllers'', etc. We attach the Pod's specification to other objects using Pod Templates (see below).<br />
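<br />
A ''minimal'' Pod manifest, applied from stdin, might look like the following sketch (the name, labels, and image are arbitrary examples):<br />
<pre><br />
$ kubectl apply -f - <<'EOF'<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx-pod        # arbitrary example name<br />
  labels:<br />
    env: dev<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx:1.9.1<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />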
<br />
===Labels===<br />
Labels are key-value pairs that can be attached to any Kubernetes object (e.g. ''Pods''). Labels are used to organize and select a subset of objects, based on the requirements in place. Many objects can have the same label(s). Labels do not provide uniqueness to objects. <br />
<br />
===Label Selectors===<br />
With Label Selectors, we can select a subset of objects. Kubernetes supports two types of Selectors:<br />
<br />
;Equality-Based Selectors : Equality-Based Selectors allow filtering of objects based on label keys and values. With this type of Selector, we can use the <code>=</code>, <code>==</code>, or <code>!=</code> operators. For example, with <code>env==dev</code>, we are selecting the objects where the "<code>env</code>" label is set to "<code>dev</code>".<br />
;Set-Based Selectors : Set-Based Selectors allow filtering of objects based on a set of values. With this type of Selector, we can use the <code>in</code>, <code>notin</code>, and <code>exists</code> operators. For example, with <code>env in (dev,qa)</code>, we are selecting objects where the "<code>env</code>" label is set to "<code>dev</code>" or "<code>qa</code>".<br />
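<br />
For example, both Selector types can be exercised directly from the command line (assuming Pods labelled with an "<code>env</code>" key exist):<br />
 $ kubectl get pods -l env==dev           # equality-based<br />
 $ kubectl get pods -l 'env in (dev,qa)'  # set-based<br />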
<br />
===Replication Controllers===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/ ReplicationController]'' (rc) is a controller that is part of the Master Node's Controller Manager. It makes sure the specified number of replicas for a Pod is running at any given point in time. If there are more Pods than the desired count, the ReplicationController would kill the extra Pods, and, if there are fewer Pods, then the ReplicationController would create more Pods to match the desired count. Generally, we do not deploy a Pod independently, as it would not be able to restart itself if something goes wrong. We always use controllers like ReplicationController to create and manage Pods.<br />
<br />
===Replica Sets===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/ ReplicaSet]'' (rs) is the next-generation ReplicationController. ReplicaSets support both equality- and set-based Selectors, whereas ReplicationControllers only support equality-based Selectors. As of January 2018, this is the only difference.<br />
<br />
As an example, say you create a ReplicaSet with "desired replicas = 3" (so that "<code>current==desired</code>"). Any time "<code>current!=desired</code>" (e.g., one of the Pods dies), the ReplicaSet detects that the current state no longer matches the desired state and creates one more Pod, thus ensuring that the current state matches the desired state.<br />
<br />
ReplicaSets can be used independently, but they are mostly used by Deployments to orchestrate the Pod creation, deletion, and updates. A Deployment automatically creates the ReplicaSets, and we do not have to worry about managing them.<br />
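<br />
As a sketch (all names are placeholders), a minimal ReplicaSet definition using a set-based Selector might look as follows:<br />
<pre><br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: ReplicaSet<br />
metadata:<br />
  name: frontend-rs<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    matchExpressions:<br />
    - {key: env, operator: In, values: [dev, qa]}<br />
  template:<br />
    metadata:<br />
      labels:<br />
        env: dev<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
</pre><br />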
<br />
===Deployments===<br />
''[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Deployment]'' objects provide declarative updates to Pods and ReplicaSets. The DeploymentController is part of the Master Node's Controller Manager, and it makes sure that the current state always matches the desired state.<br />
<br />
As an example, let's say we have a Deployment which creates a "ReplicaSet A". ReplicaSet A then creates 3 Pods. In each Pod, one of the containers uses the <code>nginx:1.7.9</code> image.<br />
<br />
Now, in the Deployment, we change the Pod's template and we update the image for the Nginx container from <code>nginx:1.7.9</code> to <code>nginx:1.9.1</code>. As we have modified the Pod's template, a new "ReplicaSet B" gets created. This process is referred to as a "Deployment rollout". (A rollout is only triggered when we update the Pod's template for a Deployment. Operations like scaling the Deployment do not trigger a rollout.) Once ReplicaSet B is ready, the Deployment starts pointing to it.<br />
<br />
On top of ReplicaSets, Deployments provide features like Deployment recording, with which, if something goes wrong, we can roll back to a previously known state.<br />
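<br />
A sketch of the corresponding <code>kubectl</code> workflow (assuming a Deployment named <code>nginx-deployment</code> with a container named <code>nginx</code>):<br />
 $ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1  # triggers a rollout<br />
 $ kubectl rollout status deployment/nginx-deployment<br />
 $ kubectl rollout history deployment/nginx-deployment<br />
 $ kubectl rollout undo deployment/nginx-deployment  # roll back to the previous revision<br />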
<br />
===Namespaces===<br />
If we have numerous users whom we would like to organize into teams/projects, we can partition the Kubernetes cluster into sub-clusters using ''[https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ Namespaces]''. The names of the resources/objects created inside a Namespace are unique, but not across Namespaces.<br />
<br />
To list all the Namespaces, we can run the following command:<br />
$ kubectl get namespaces<br />
NAME STATUS AGE<br />
default Active 2h<br />
kube-public Active 2h<br />
kube-system Active 2h<br />
<br />
Generally, Kubernetes creates two default namespaces: <code>kube-system</code> and <code>default</code>. The <code>kube-system</code> namespace contains the objects created by the Kubernetes system. The <code>default</code> namespace contains the objects which do not belong to any other namespace. By default, we connect to the <code>default</code> Namespace. <code>kube-public</code> is a special namespace, which is readable by all users and used for special purposes, like bootstrapping a cluster. <br />
<br />
Using ''[https://kubernetes.io/docs/concepts/policy/resource-quotas/ Resource Quotas]'', we can divide the cluster resources within Namespaces.<br />
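<br />
For instance, a Namespace with a (hypothetical) quota limiting it to 10 Pods could be created as follows:<br />
<pre><br />
$ kubectl create namespace team-a<br />
$ kubectl create -f - <<EOF<br />
---<br />
apiVersion: v1<br />
kind: ResourceQuota<br />
metadata:<br />
  name: team-a-quota<br />
  namespace: team-a<br />
spec:<br />
  hard:<br />
    pods: "10"<br />
EOF<br />
</pre><br />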
<br />
===Component services===<br />
The component services running on a standard master/worker node(s) Kubernetes setup are as follows:<br />
* Kubernetes Master node(s)<br />
*; kube-apiserver : Exposes Kubernetes APIs<br />
*; kube-controller-manager : Runs controllers to handle nodes, endpoints, etc.<br />
*; kube-scheduler : Watches for new pods and assigns them nodes<br />
*; etcd : Distributed key-value store<br />
*; DNS : [optional] DNS for Kubernetes services<br />
* Worker node(s)<br />
*; kubelet : Manages pods on a node, volumes, secrets, creating new containers, health checks, etc.<br />
*; kube-proxy : Maintains network rules, port forwarding, etc.<br />
<br />
==Setup a Kubernetes cluster==<br />
<br />
<div style="margin: 10px; padding: 5px; border: 2px solid red;">'''IMPORTANT''': The following is how to set up Kubernetes 1.2, which is, as of January 2018, a very old version. I will update this article with how to set up k8s using a much newer version (v1.9) when I have time.<br />
</div><br />
<br />
In this section, I will show you how to set up a Kubernetes cluster with etcd and Docker. The cluster will consist of 1 master node and 3 worker nodes.<br />
<br />
===Setup VMs===<br />
<br />
For this demo, I will be creating 4 VMs via [[Vagrant]] (with VirtualBox).<br />
<br />
* Create Vagrant demo environment:<br />
$ mkdir $HOME/dev/kubernetes && cd $_<br />
<br />
* Create Vagrantfile with the following contents:<br />
<pre><br />
# -*- mode: ruby -*-<br />
# vi: set ft=ruby :<br />
<br />
require 'yaml'<br />
VAGRANTFILE_API_VERSION = "2"<br />
<br />
$common_script = <<COMMON_SCRIPT<br />
# Set verbose<br />
set -v<br />
# Set exit on error<br />
set -e<br />
echo -e "$(date) [INFO] Starting modified Vagrant..."<br />
sudo yum update -y<br />
# Timestamp provision<br />
date > /etc/vagrant_provisioned_at<br />
COMMON_SCRIPT<br />
<br />
unless defined? CONFIG<br />
  configuration_file = File.join(File.dirname(__FILE__), 'vagrant_config.yml')<br />
  CONFIG = YAML.load(File.open(configuration_file, File::RDONLY).read)<br />
end<br />
<br />
CONFIG['box'] = {} unless CONFIG.key?('box')<br />
<br />
def modifyvm_network(node)<br />
  node.vm.provider "virtualbox" do |vbox|<br />
    vbox.customize ["modifyvm", :id, "--nicpromisc1", "allow-all"]<br />
    #vbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]<br />
    vbox.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]<br />
  end<br />
end<br />
<br />
def modifyvm_resources(node, memory, cpus)<br />
  node.vm.provider "virtualbox" do |vbox|<br />
    vbox.customize ["modifyvm", :id, "--memory", memory]<br />
    vbox.customize ["modifyvm", :id, "--cpus", cpus]<br />
  end<br />
end<br />
<br />
## START: Actual Vagrant process<br />
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|<br />
<br />
  config.vm.box = CONFIG['box']['name']<br />
<br />
  # Uncomment the following line if you wish to be able to pass files from<br />
  # your local filesystem directly into the vagrant VM:<br />
  #config.vm.synced_folder "data", "/vagrant"<br />
<br />
  ## VM: k8s master #############################################################<br />
  config.vm.define "master" do |node|<br />
    node.vm.hostname = "k8s.master.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    #node.vm.network "forwarded_port", guest: 80, host: 8080<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['master']<br />
<br />
    # Uncomment the following if you wish to define CPU/memory:<br />
    #node.vm.provider "virtualbox" do |vbox|<br />
    #  vbox.customize ["modifyvm", :id, "--memory", "4096"]<br />
    #  vbox.customize ["modifyvm", :id, "--cpus", "2"]<br />
    #end<br />
    #modifyvm_resources(node, "4096", "2")<br />
  end<br />
  ## VM: k8s minion1 ############################################################<br />
  config.vm.define "minion1" do |node|<br />
    node.vm.hostname = "k8s.minion1.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion1']<br />
  end<br />
  ## VM: k8s minion2 ############################################################<br />
  config.vm.define "minion2" do |node|<br />
    node.vm.hostname = "k8s.minion2.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion2']<br />
  end<br />
  ## VM: k8s minion3 ############################################################<br />
  config.vm.define "minion3" do |node|<br />
    node.vm.hostname = "k8s.minion3.dev"<br />
    node.vm.provision "shell", inline: $common_script<br />
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion3']<br />
  end<br />
  ###############################################################################<br />
<br />
end<br />
</pre><br />
<br />
The above Vagrantfile uses the following configuration file:<br />
$ cat vagrant_config.yml<br />
<pre><br />
---<br />
box:<br />
  name: centos/7<br />
  storage_controller: 'SATA Controller'<br />
debug: false<br />
development: false<br />
network:<br />
  dns1: 8.8.8.8<br />
  dns2: 8.8.4.4<br />
  internal:<br />
    network: 192.168.200.0/24<br />
  external:<br />
    start: 192.168.100.100<br />
    end: 192.168.100.200<br />
    network: 192.168.100.0/24<br />
    bridge: wlan0<br />
    netmask: 255.255.255.0<br />
    broadcast: 192.168.100.255<br />
host_groups:<br />
  master: 192.168.200.100<br />
  minion1: 192.168.200.101<br />
  minion2: 192.168.200.102<br />
  minion3: 192.168.200.103<br />
</pre><br />
<br />
* In the Vagrant Kubernetes directory (i.e., <code>$HOME/dev/kubernetes</code>), run the following command:<br />
$ vagrant up<br />
<br />
===Setup hosts===<br />
''Note: Run the following commands/steps on all hosts (master and minions).''<br />
<br />
* Log into the k8s master host:<br />
$ vagrant ssh master<br />
<br />
* Add all of the Kubernetes cluster hosts to <code>/etc/hosts</code>:<br />
$ cat << EOF >> /etc/hosts<br />
192.168.200.100 k8s.master.dev<br />
192.168.200.101 k8s.minion1.dev<br />
192.168.200.102 k8s.minion2.dev<br />
192.168.200.103 k8s.minion3.dev<br />
EOF<br />
<br />
* Install, enable, and start NTP:<br />
$ yum install -y ntp<br />
$ systemctl enable ntpd && systemctl start ntpd<br />
$ timedatectl<br />
<br />
* Disable any [[iptables|firewall rules]] (for now; we will add the rules back later):<br />
$ systemctl stop firewalld && systemctl disable firewalld<br />
$ systemctl stop iptables<br />
<br />
* Disable [[SELinux]] (for now; we will turn it on again later):<br />
$ setenforce 0<br />
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/sysconfig/selinux<br />
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config<br />
$ sestatus<br />
<br />
* Add the Docker repo and update yum:<br />
$ cat << EOF > /etc/yum.repos.d/virt7-docker-common-release.repo<br />
[virt7-docker-common-release]<br />
 name=virt7-docker-common-release<br />
baseurl=<nowiki>http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/</nowiki><br />
gpgcheck=0<br />
EOF<br />
$ yum update<br />
<br />
* Install Docker, Kubernetes, and etcd:<br />
$ yum install -y --enablerepo=virt7-docker-common-release kubernetes docker etcd<br />
<br />
===Install and configure master controller===<br />
''Note: Run the following commands on only the master host.''<br />
<br />
* Edit <code>/etc/kubernetes/config</code> and add (or make changes to) the following lines:<br />
KUBE_MASTER="--master=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://k8s.master.dev:2379</nowiki>"<br />
<br />
* Edit <code>/etc/etcd/etcd.conf</code> and add (or make changes to) the following lines:<br />
[member]<br />
ETCD_LISTEN_CLIENT_URLS="<nowiki>http://0.0.0.0:2379</nowiki>"<br />
[cluster]<br />
ETCD_ADVERTISE_CLIENT_URLS="<nowiki>http://0.0.0.0:2379</nowiki>"<br />
<br />
* Edit <code>/etc/kubernetes/apiserver</code> and add (or make changes to) the following lines:<br />
<pre><br />
# The address on the local server to listen to.<br />
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"<br />
KUBE_API_ADDRESS="--address=0.0.0.0"<br />
<br />
# The port on the local server to listen on.<br />
KUBE_API_PORT="--port=8080"<br />
<br />
# Port minions listen on<br />
KUBELET_PORT="--kubelet-port=10250"<br />
<br />
# Comma separated list of nodes in the etcd cluster<br />
KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://127.0.0.1:2379</nowiki>"<br />
<br />
# Address range to use for services<br />
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"<br />
<br />
# default admission control policies<br />
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"<br />
<br />
# Add your own!<br />
KUBE_API_ARGS=""<br />
</pre><br />
<br />
* Enable and start the following etcd and Kubernetes services:<br />
<br />
$ for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do<br />
systemctl restart $SERVICE<br />
systemctl enable $SERVICE<br />
systemctl status $SERVICE <br />
done<br />
<br />
* Check on the status of the above services (the following command should report 4 running services):<br />
$ systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler | grep "(running)" | wc -l # => 4<br />
<br />
* Check on the status of the Kubernetes API server:<br />
$ kubectl cluster-info<br />
Kubernetes master is running at <nowiki>http://localhost:8080</nowiki><br />
$ curl <nowiki>http://localhost:8080/version</nowiki><br />
#~OR~<br />
$ curl <nowiki>http://k8s.master.dev:8080/version</nowiki><br />
<pre><br />
{<br />
  "major": "1",<br />
  "minor": "2",<br />
  "gitVersion": "v1.2.0",<br />
  "gitCommit": "ec7364b6e3b155e78086018aa644057edbe196e5",<br />
  "gitTreeState": "clean"<br />
}<br />
</pre><br />
<br />
* Get a list of Kubernetes API paths:<br />
$ curl <nowiki>http://k8s.master.dev:8080/paths</nowiki><br />
<pre><br />
{<br />
  "paths": [<br />
    "/api",<br />
    "/api/v1",<br />
    "/apis",<br />
    "/apis/autoscaling",<br />
    "/apis/autoscaling/v1",<br />
    "/apis/batch",<br />
    "/apis/batch/v1",<br />
    "/apis/extensions",<br />
    "/apis/extensions/v1beta1",<br />
    "/healthz",<br />
    "/healthz/ping",<br />
    "/logs/",<br />
    "/metrics",<br />
    "/resetMetrics",<br />
    "/swagger-ui/",<br />
    "/swaggerapi/",<br />
    "/ui/",<br />
    "/version"<br />
  ]<br />
}<br />
</pre><br />
<br />
* List all available paths (key-value stores) known to etcd:<br />
$ etcdctl ls / --recursive<br />
<br />
The master controller in a Kubernetes cluster must have the following services running to function as the master host in the cluster:<br />
* ntpd<br />
* etcd<br />
* kube-controller-manager<br />
* kube-apiserver<br />
* kube-scheduler<br />
<br />
Note: The Docker daemon should not be running on the master host.<br />
<br />
===Install and configure the minions===<br />
''Note: Run the following commands/steps on all minion hosts.''<br />
<br />
* Log into the k8s minion hosts:<br />
$ vagrant ssh minion1 # do the same for minion2 and minion3<br />
<br />
* Edit <code>/etc/kubernetes/config</code> and add (or make changes to) the following lines:<br />
KUBE_MASTER="--master=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
 KUBE_ETCD_SERVERS="--etcd-servers=<nowiki>http://k8s.master.dev:2379</nowiki>"<br />
<br />
* Edit <code>/etc/kubernetes/kubelet</code> and add (or make changes to) the following lines:<br />
<pre><br />
###<br />
# kubernetes kubelet (minion) config<br />
<br />
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)<br />
KUBELET_ADDRESS="--address=0.0.0.0"<br />
<br />
# The port for the info server to serve on<br />
KUBELET_PORT="--port=10250"<br />
<br />
# You may leave this blank to use the actual hostname<br />
KUBELET_HOSTNAME="--hostname-override=k8s.minion1.dev" # ***CHANGE TO CORRECT MINION HOSTNAME***<br />
<br />
# location of the api-server<br />
KUBELET_API_SERVER="--api-servers=<nowiki>http://k8s.master.dev:8080</nowiki>"<br />
<br />
# pod infrastructure container<br />
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"<br />
<br />
# Add your own!<br />
KUBELET_ARGS=""<br />
</pre><br />
<br />
* Enable and start the following services:<br />
$ for SERVICE in kube-proxy kubelet docker; do<br />
systemctl restart $SERVICE<br />
systemctl enable $SERVICE<br />
systemctl status $SERVICE<br />
done<br />
<br />
* Test that Docker is running and can start containers:<br />
$ docker info<br />
$ docker pull hello-world<br />
$ docker run hello-world<br />
<br />
Each minion in a Kubernetes cluster must have the following services running to function as a member of the cluster (i.e., a "Ready" node):<br />
* ntpd<br />
* kubelet<br />
* kube-proxy<br />
* docker<br />
<br />
===Kubectl: Exploring our environment===<br />
''Note: Run all of the following commands on the master host.''<br />
<br />
* Get a list of nodes with <code>kubectl</code>:<br />
$ kubectl get nodes<br />
<pre><br />
NAME STATUS AGE<br />
k8s.minion1.dev Ready 20m<br />
k8s.minion2.dev Ready 12m<br />
k8s.minion3.dev Ready 12m<br />
</pre><br />
<br />
* Describe nodes with <code>kubectl</code>:<br />
<br />
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'<br />
$ kubectl get nodes -o jsonpath='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' | tr ';' "\n"<br />
<pre><br />
k8s.minion1.dev:OutOfDisk=False<br />
Ready=True<br />
k8s.minion2.dev:OutOfDisk=False<br />
Ready=True<br />
k8s.minion3.dev:OutOfDisk=False<br />
Ready=True<br />
</pre><br />
<br />
* Get the man page for <code>kubectl</code>:<br />
$ man kubectl-get<br />
<br />
==Working with our Kubernetes cluster==<br />
<br />
''Note: The following section will be working from within the Kubernetes cluster we created above.''<br />
<br />
===Create and deploy pod definitions===<br />
<br />
* Turn off nodes 2 and 3 (matching the <code>NotReady</code> output below):<br />
 minion{2,3}$ systemctl stop kubelet kube-proxy<br />
<br />
master$ kubectl get nodes<br />
<pre><br />
NAME STATUS AGE<br />
k8s.minion1.dev Ready 1h<br />
k8s.minion2.dev NotReady 37m<br />
k8s.minion3.dev NotReady 39m<br />
</pre><br />
<br />
* Check for any k8s Pods (there should be none):<br />
master$ kubectl get pods<br />
<br />
* Create a builds directory for our Pods:<br />
master$ mkdir builds && cd $_<br />
<br />
* Create a Pod running Nginx inside a Docker container:<br />
<pre><br />
master$ kubectl create -f - <<EOF<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx:1.7.9<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
* Check on Pod creation status:<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx 0/1 ContainerCreating 0 2s<br />
</pre><br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx 1/1 Running 0 3m<br />
</pre><br />
<br />
minion1$ docker ps<br />
<pre><br />
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES<br />
a718c6c0355d nginx:1.7.9 "nginx -g 'daemon off" 3 minutes ago Up 3 minutes k8s_nginx.4580025_nginx_default_699e...<br />
</pre><br />
<br />
master$ kubectl describe pod nginx<br />
<br />
master$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1<br />
busybox$ wget -qO- 172.17.0.2<br />
master$ kubectl delete pod busybox<br />
master$ kubectl delete pod nginx<br />
<br />
* Port forwarding:<br />
master$ kubectl create -f nginx.yml # see above for YAML<br />
master$ kubectl port-forward nginx :80 &<br />
I1020 23:12:29.478742 23394 portforward.go:213] Forwarding from [::1]:40065 -> 80<br />
master$ curl -I localhost:40065<br />
<br />
===Tags, labels, and selectors===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-pod-label.yml<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nginx<br />
  labels:<br />
    app: nginx<br />
spec:<br />
  containers:<br />
  - name: nginx<br />
    image: nginx:1.7.9<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create -f nginx-pod-label.yml<br />
master$ kubectl get pods -l app=nginx<br />
master$ kubectl describe pods -l app=nginx<br />
<br />
* Add labels or overwrite existing ones:<br />
master$ kubectl label pods nginx new-label=mynginx<br />
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'<br />
 new-label=mynginx<br />
master$ kubectl label pods nginx new-label=foo<br />
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'<br />
new-label=foo<br />
<br />
===Deployments===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-dev.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-dev<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-dev<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-dev<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-prod.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-prod<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-prod<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-prod<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create --validate -f nginx-deployment-dev.yml<br />
master$ kubectl create --validate -f nginx-deployment-prod.yml<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deployment-dev-104434401-jiiic 1/1 Running 0 5m<br />
nginx-deployment-prod-3051195443-hj9b1 1/1 Running 0 12m<br />
</pre><br />
<br />
master$ kubectl describe deployments -l app=nginx-deployment-dev<br />
<pre><br />
Name: nginx-deployment-dev<br />
Namespace: default<br />
CreationTimestamp: Thu, 20 Oct 2016 23:48:46 +0000<br />
Labels: app=nginx-deployment-dev<br />
Selector: app=nginx-deployment-dev<br />
Replicas: 1 updated | 1 total | 1 available | 0 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 1 max unavailable, 1 max surge<br />
OldReplicaSets: <none><br />
NewReplicaSet: nginx-deployment-dev-2568522567 (1/1 replicas created)<br />
...<br />
</pre><br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment-prod 1 1 1 1 44s<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-deployment-dev-update.yml<br />
---<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment-dev<br />
spec:<br />
  replicas: 1<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx-deployment-dev<br />
    spec:<br />
      containers:<br />
      - name: nginx-deployment-dev<br />
        image: nginx:1.8 # ***CHANGED***<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
master$ kubectl apply -f nginx-deployment-dev-update.yml<br />
master$ kubectl get pods -l app=nginx-deployment-dev<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deployment-dev-104434401-jiiic 0/1 ContainerCreating 0 27s<br />
</pre><br />
master$ kubectl get pods -l app=nginx-deployment-dev<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deployment-dev-104434401-jiiic 1/1 Running 0 6m<br />
</pre><br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment nginx-deployment-dev<br />
master$ kubectl delete deployment nginx-deployment-prod<br />
<br />
===Multi-Pod (container) replication controller===<br />
<br />
* Start the other two nodes (the ones we previously stopped):<br />
minion2$ systemctl start kubelet kube-proxy<br />
minion3$ systemctl start kubelet kube-proxy<br />
master$ kubectl get nodes<br />
<pre><br />
NAME STATUS AGE<br />
k8s.minion1.dev Ready 2h<br />
k8s.minion2.dev Ready 2h<br />
k8s.minion3.dev Ready 2h<br />
</pre><br />
<br />
<pre><br />
master$ cat << EOF > nginx-multi-node.yml<br />
---<br />
apiVersion: v1<br />
kind: ReplicationController<br />
metadata:<br />
  name: nginx-www<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    app: nginx<br />
  template:<br />
    metadata:<br />
      name: nginx<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
master$ kubectl create -f nginx-multi-node.yml<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-2evxu 0/1 ContainerCreating 0 10s<br />
nginx-www-416ct 0/1 ContainerCreating 0 10s<br />
nginx-www-ax41w 0/1 ContainerCreating 0 10s<br />
</pre><br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-2evxu 1/1 Running 0 1m<br />
nginx-www-416ct 1/1 Running 0 1m<br />
nginx-www-ax41w 1/1 Running 0 1m<br />
</pre><br />
<br />
master$ kubectl describe pods | awk '/^Node/{print $2}'<br />
<pre><br />
k8s.minion2.dev/192.168.200.102<br />
k8s.minion1.dev/192.168.200.101<br />
k8s.minion3.dev/192.168.200.103<br />
</pre><br />
<br />
minion1$ docker ps # 1 nginx container running<br />
minion2$ docker ps # 1 nginx container running<br />
minion3$ docker ps # 1 nginx container running<br />
minion3$ docker ps --format "<nowiki>{{.Image}}</nowiki>"<br />
<pre><br />
nginx<br />
gcr.io/google_containers/pause:2.0<br />
</pre><br />
<br />
master$ kubectl describe replicationcontroller<br />
<pre><br />
Name: nginx-www<br />
Namespace: default<br />
Image(s): nginx<br />
Selector: app=nginx<br />
Labels: app=nginx<br />
Replicas: 3 current / 3 desired<br />
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed<br />
...<br />
</pre><br />
<br />
* Attempt to delete one of the three pods:<br />
<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-2evxu 1/1 Running 0 11m<br />
nginx-www-416ct 1/1 Running 0 11m<br />
nginx-www-ax41w 1/1 Running 0 11m<br />
</pre><br />
master$ kubectl delete pod nginx-www-2evxu<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-3cck4 1/1 Running 0 12s<br />
nginx-www-416ct 1/1 Running 0 11m<br />
nginx-www-ax41w 1/1 Running 0 11m<br />
</pre><br />
<br />
A new pod (<code>nginx-www-3cck4</code>) automatically started up. This is because the expected state, as defined in our YAML file, is for there to be 3 pods running at all times. Thus, if one or more of the pods goes down, a new pod (or pods) will automatically start up to bring the cluster back to the expected state.<br />
<br />
* To force-delete all pods:<br />
master$ kubectl delete replicationcontroller nginx-www<br />
master$ kubectl get pods # nothing<br />
<br />
===Create and deploy service definitions===<br />
<br />
<pre><br />
master$ cat << EOF > nginx-service.yml<br />
---<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: nginx-service<br />
spec:<br />
  ports:<br />
  - port: 8000<br />
    targetPort: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: nginx<br />
EOF<br />
</pre><br />
<br />
master$ kubectl get services<br />
<pre><br />
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes 10.254.0.1 <none> 443/TCP 3h<br />
</pre><br />
master$ kubectl create -f nginx-service.yml<br />
<br />
master$ kubectl get services<br />
<pre><br />
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes 10.254.0.1 <none> 443/TCP 3h<br />
nginx-service 10.254.110.127 <none> 8000/TCP 10s<br />
</pre><br />
<br />
master$ kubectl run busybox --generator=run-pod/v1 --image=busybox --restart=Never --tty -i<br />
busybox$ wget -qO- 10.254.110.127:8000 # works<br />
<br />
* Cleanup<br />
master$ kubectl delete pod busybox<br />
master$ kubectl delete service nginx-service<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
nginx-www-jh2e9 1/1 Running 0 13m<br />
nginx-www-jir2g 1/1 Running 0 13m<br />
nginx-www-w91uw 1/1 Running 0 13m<br />
</pre><br />
master$ kubectl delete replicationcontroller nginx-www<br />
master$ kubectl get pods # nothing<br />
<br />
===Creating temporary Pods at the CLI===<br />
<br />
* Make sure we have no Pods running:<br />
master$ kubectl get pods<br />
<br />
* Create temporary deployment pod:<br />
master$ kubectl run mysample --image=foobar/apache<br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
mysample-1424711890-fhtxb 0/1 ContainerCreating 0 1s<br />
</pre><br />
master$ kubectl get deployment <br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
mysample 1 1 1 0 7s<br />
</pre><br />
<br />
* Create a temporary deployment pod (where we know it will fail):<br />
master$ kubectl run myexample --image=christophchamp/ubuntu_sysadmin<br />
master$ kubectl -o wide get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myexample-3534121234-mpr35 0/1 CrashLoopBackOff 12 39m k8s.minion3.dev<br />
mysample-2812764540-74c5h 1/1 Running 0 41m k8s.minion2.dev<br />
</pre><br />
<br />
* Check on why the "myexample" pod is in status "CrashLoopBackOff":<br />
master$ kubectl describe pods/myexample-3534121234-mpr35<br />
master$ kubectl describe deployments/mysample<br />
master$ kubectl describe pods/mysample-2812764540-74c5h | awk '/^Node/{print $2}'<br />
k8s.minion2.dev/192.168.200.102<br />
<br />
master$ kubectl delete deployment mysample<br />
<br />
* Run multiple replicas of the same pod:<br />
master$ kubectl run myreplicas --image=latest123/apache --replicas=2 --labels=app=myapache,version=1.0.0<br />
master$ kubectl describe deployment myreplicas <br />
<pre><br />
Name: myreplicas<br />
Namespace: default<br />
CreationTimestamp: Fri, 21 Oct 2016 19:10:30 +0000<br />
Labels: app=myapache,version=1.0.0<br />
Selector: app=myapache,version=1.0.0<br />
Replicas: 2 updated | 2 total | 1 available | 1 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 1 max unavailable, 1 max surge<br />
OldReplicaSets: <none><br />
NewReplicaSet: myreplicas-2209834598 (2/2 replicas created)<br />
...<br />
</pre><br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myreplicas-2209834598-5iyer 1/1 Running 0 1m k8s.minion1.dev<br />
myreplicas-2209834598-cslst 1/1 Running 0 1m k8s.minion2.dev<br />
</pre><br />
<br />
master$ kubectl describe pods -l version=1.0.0<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment myreplicas<br />
<br />
===Interacting with Pod containers===<br />
<br />
* Create example Apache pod definition file:<br />
<pre><br />
master$ cat << EOF > apache.yml<br />
---<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: apache<br />
spec:<br />
  containers:<br />
  - name: apache<br />
    image: latest123/apache<br />
    ports:<br />
    - containerPort: 80<br />
EOF<br />
</pre><br />
master$ kubectl create -f apache.yml<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
apache 1/1 Running 0 12m k8s.minion3.dev<br />
</pre><br />
<br />
* Test pod and make some basic configuration changes:<br />
master$ kubectl exec apache date<br />
 master$ kubectl exec apache -i -t -- cat /var/www/html/index.html # default apache HTML<br />
master$ kubectl exec apache -i -t -- /bin/bash<br />
container$ export TERM=xterm<br />
container$ echo "xtof test" > /var/www/html/index.html<br />
minion3$ curl 172.17.0.2<br />
xtof test<br />
container$ exit<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
apache 1/1 Running 0 12m k8s.minion3.dev<br />
</pre><br />
Pod/container is still running even after we exited (as expected).<br />
<br />
* Cleanup:<br />
master$ kubectl delete pod apache<br />
<br />
===Logs===<br />
<br />
* Start our example Apache pod to use for checking Kubernetes logging features:<br />
master$ kubectl create -f apache.yml <br />
master$ kubectl get pods<br />
<pre><br />
NAME READY STATUS RESTARTS AGE<br />
apache 1/1 Running 0 9s<br />
</pre><br />
master$ kubectl logs apache<br />
<pre><br />
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message<br />
</pre><br />
master$ kubectl logs --tail=10 apache<br />
master$ kubectl logs --since=24h apache # or 10s, 2m, etc.<br />
master$ kubectl logs -f apache # follow the logs<br />
 master$ kubectl logs -f -c apache apache # where -c specifies the container name<br />
<br />
* Cleanup:<br />
master$ kubectl delete pod apache<br />
<br />
===Autoscaling and scaling Pods===<br />
<br />
master$ kubectl run myautoscale --image=latest123/apache --port=80 --labels=app=myautoscale<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 47s k8s.minion3.dev<br />
</pre><br />
<br />
* Create an autoscale definition:<br />
master$ kubectl autoscale deployment myautoscale --min=2 --max=6 --cpu-percent=80<br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myautoscale 2 2 2 2 4m<br />
</pre><br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 3m k8s.minion3.dev<br />
myautoscale-3243017378-r2f3d 1/1 Running 0 4s k8s.minion2.dev<br />
</pre><br />
<br />
* Scale up an already autoscaled deployment:<br />
master$ kubectl scale --current-replicas=2 --replicas=4 deployment/myautoscale<br />
<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myautoscale 4 4 4 4 8m<br />
</pre><br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myautoscale-3243017378-2rxhp 1/1 Running 0 8s k8s.minion1.dev<br />
myautoscale-3243017378-kq4z7 1/1 Running 0 7m k8s.minion3.dev<br />
myautoscale-3243017378-ozxs8 1/1 Running 0 8s k8s.minion3.dev<br />
myautoscale-3243017378-r2f3d 1/1 Running 0 4m k8s.minion2.dev<br />
</pre><br />
<br />
* Scale down:<br />
master$ kubectl scale --current-replicas=4 --replicas=2 deployment/myautoscale<br />
<br />
Note: You cannot scale down past the original minimum number of pods/containers specified in the original autoscale deployment (i.e., min=2 in our example).<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployment myautoscale<br />
<br />
===Failure and recovery===<br />
<br />
master$ kubectl run myrecovery --image=latest123/apache --port=80 --replicas=2 --labels=app=myrecovery<br />
master$ kubectl get deployments<br />
<pre><br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
myrecovery 2 2 2 2 6s<br />
</pre><br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-5xu8f 1/1 Running 0 12s k8s.minion1.dev<br />
myrecovery-563119102-zw6wp 1/1 Running 0 12s k8s.minion2.dev<br />
</pre><br />
<br />
* Now stop Kubernetes- and Docker-related services on one of the minions/nodes (so we have a total of 2 nodes online):<br />
minion1$ systemctl stop docker kubelet kube-proxy<br />
<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-qyi04 1/1 Running 0 7m k8s.minion3.dev<br />
myrecovery-563119102-zw6wp 1/1 Running 0 14m k8s.minion2.dev<br />
</pre><br />
The Pod switched from minion1 to minion3.<br />
<br />
* Now stop Kubernetes- and Docker-related services on one of the remaining online minions/nodes (so we have a total of 1 node online):<br />
minion2$ systemctl stop docker kubelet kube-proxy<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-b5tim 1/1 Running 0 2m k8s.minion3.dev<br />
myrecovery-563119102-qyi04 1/1 Running 0 17m k8s.minion3.dev<br />
</pre><br />
Both Pods are now running on minion3, the only available node.<br />
<br />
* Start up Kubernetes- and Docker-related services again on minion1 and delete one of the Pods:<br />
minion1$ systemctl start docker kubelet kube-proxy<br />
master$ kubectl delete pod myrecovery-563119102-b5tim<br />
master$ kubectl get pods -o wide<br />
<pre><br />
NAME READY STATUS RESTARTS AGE NODE<br />
myrecovery-563119102-8unzg 1/1 Running 0 1m k8s.minion1.dev<br />
myrecovery-563119102-qyi04 1/1 Running 0 20m k8s.minion3.dev<br />
</pre><br />
Pods are now running on separate nodes.<br />
<br />
* Cleanup:<br />
master$ kubectl delete deployments/myrecovery<br />
<br />
==Minikube==<br />
[https://github.com/kubernetes/minikube Minikube] is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.<br />
<br />
* Install Minikube:<br />
$ curl -Lo minikube <nowiki>https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64</nowiki> \<br />
&& chmod +x minikube && sudo mv minikube /usr/local/bin/<br />
<br />
* Install kubectl<br />
$ curl -Lo kubectl <nowiki>https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl</nowiki> \<br />
&& chmod +x kubectl && sudo mv kubectl /usr/local/bin/<br />
<br />
* Test install<br />
$ minikube start<br />
#~OR~<br />
$ minikube start --memory 4096 # give it 4GB of RAM<br />
$ minikube status<br />
$ minikube dashboard<br />
$ kubectl config view<br />
$ kubectl cluster-info<br />
<br />
NOTE: If you have an old version of minikube installed, you should probably do the following before upgrading to a much newer version:<br />
$ minikube delete --all --purge<br />
<br />
Get the details on the CLI options for kubectl [https://kubernetes.io/docs/reference/kubectl/overview/ here].<br />
<br />
Using the <code>kubectl proxy</code> command, kubectl will authenticate with the API Server on the Master Node and make the dashboard available on <nowiki>http://localhost:8001/ui</nowiki>:<br />
<br />
$ kubectl proxy<br />
Starting to serve on 127.0.0.1:8001<br />
<br />
After running the above command, we can access the dashboard at <code><nowiki>http://127.0.0.1:8001/ui</nowiki></code>.<br />
<br />
Once the kubectl proxy is configured, we can send requests to localhost on the proxy port:<br />
<br />
$ curl <nowiki>http://localhost:8001/</nowiki><br />
$ curl <nowiki>http://localhost:8001/version</nowiki><br />
<pre><br />
{<br />
  "major": "1",<br />
  "minor": "8",<br />
  "gitVersion": "v1.8.0",<br />
  "gitCommit": "0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4",<br />
  "gitTreeState": "clean",<br />
  "buildDate": "2017-11-29T22:43:34Z",<br />
  "goVersion": "go1.9.1",<br />
  "compiler": "gc",<br />
  "platform": "linux/amd64"<br />
}<br />
</pre><br />
<br />
Without kubectl proxy configured, we can get the Bearer Token using kubectl, and then send it with the API request. A Bearer Token is an access token which is generated by the authentication server (the API server on the Master Node) and given back to the client. Using that token, the client can connect back to the Kubernetes API server without providing further authentication details, and then, access resources.<br />
<br />
* Get the k8s token:<br />
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | awk '/^default/{print $1}') | awk '/^token/{print $2}')<br />
<br />
* Get the k8s API server endpoint:<br />
$ APISERVER=$(kubectl config view | awk '/https/{print $2}')<br />
<br />
* Access the API Server:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}<br />
<br />
===Using Minikube as a local Docker registry===<br />
<br />
Sometimes it is useful to have a local Docker registry for Kubernetes to pull images from. As the Minikube [https://github.com/kubernetes/minikube/blob/0c616a6b42b28a1aab8397f5a9061f8ebbd9f3d9/README.md#reusing-the-docker-daemon README] describes, you can reuse the Docker daemon running within Minikube with <code>eval $(minikube docker-env)</code> to build and pull images from.<br />
<br />
To use an image without uploading it to an external registry (e.g., Docker Hub), you can follow these steps:<br />
* Set the environment variables with <code>eval $(minikube docker-env)</code><br />
* Build the image with the Docker daemon of Minikube (e.g., <code>docker build -t my-image .</code>)<br />
* Set the image in the pod spec like the build tag (e.g., <code>my-image</code>)<br />
* Set the <code>imagePullPolicy</code> to <code>Never</code>, otherwise Kubernetes will try to download the image.<br />
<br />
Important note: You have to run <code>eval $(minikube docker-env)</code> on each terminal you want to use since it only sets the environment variables for the current shell session.<br />
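<br />
A minimal sketch of that workflow (the image name <code>my-image</code> is a placeholder):<br />
<pre><br />
$ eval $(minikube docker-env)  # point the docker CLI at Minikube's daemon (per shell session)<br />
$ docker build -t my-image .<br />
</pre><br />
and then, in the pod spec:<br />
<pre><br />
    containers:<br />
    - name: my-app<br />
      image: my-image         # the build tag, with no registry prefix<br />
      imagePullPolicy: Never  # never try to pull the image from an external registry<br />
</pre><br />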
<br />
===Working with our Minikube-based Kubernetes cluster===<br />
<br />
;Kubernetes Object Model<br />
<br />
Kubernetes has a very rich object model, with which it represents different persistent entities in the Kubernetes cluster. Those entities describe:<br />
<br />
* What containerized applications we are running and on which node<br />
* Application resource consumption<br />
* Different policies attached to applications, like restart/upgrade policies, fault tolerance, etc.<br />
<br />
With each object, we declare our intent or desired state using the '''spec''' field. The Kubernetes system manages the '''status''' field for objects, in which it records the actual state of the object. At any given point in time, the Kubernetes Control Plane tries to match the object's actual state to the object's desired state.<br />
<br />
Examples of Kubernetes objects are Pods, Deployments, ReplicaSets, etc.<br />
<br />
To create an object, we need to provide the '''spec''' field to the Kubernetes API Server. The '''spec''' field describes the desired state, along with some basic information, like the name. The API request to create the object must have the '''spec''' field, as well as other details, in a JSON format. Most often, we provide an object's definition in a YAML file, which kubectl converts into a JSON payload and sends to the API Server.<br />
<br />
Below is an example of a ''Deployment'' object:<br />
<pre><br />
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
  name: nginx-deployment<br />
  labels:<br />
    app: nginx<br />
spec:<br />
  replicas: 3<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />
<br />
With the '''apiVersion''' field in the example above, we mention the API endpoint on the API Server which we want to connect to. Note that you can see what API version to use with the following call to the API server:<br />
$ curl -k -H "Authorization: Bearer ${TOKEN}" ${APISERVER}/apis/apps<br />
Use the '''preferredVersion''' for most cases.<br />
<br />
With the '''kind''' field, we mention the object type &mdash; in our case, we have '''Deployment'''. With the '''metadata''' field, we attach the basic information to objects, like the name. Notice that in the above we have two '''spec''' fields ('''spec''' and '''spec.template.spec'''). With '''spec''', we define the desired state of the deployment. In our example, we want to make sure that, at any point in time, at least 3 ''Pods'' are running, which are created using the Pod template defined in '''spec.template'''. In '''spec.template.spec''', we define the desired state of the Pod (here, our Pod would be created using nginx:1.7.9).<br />
<br />
Once the object is created, the Kubernetes system attaches the '''status''' field to the object.<br />
<br />
;Connecting users to Pods<br />
<br />
To access the application, a user/client needs to connect to the Pods. As Pods are ephemeral in nature, resources like IP addresses allocated to it cannot be static. Pods could die abruptly or be rescheduled based on existing requirements.<br />
<br />
As an example, consider a scenario in which a user/client is connecting to a Pod using its IP address. Unexpectedly, the Pod to which the user/client is connected dies and a new Pod is created by the controller. The new Pod will have a new IP address, which will not be known automatically to the user/client of the earlier Pod. To overcome this situation, Kubernetes provides a higher-level abstraction called ''[https://kubernetes.io/docs/concepts/services-networking/service/ Service]'', which logically groups Pods and defines a policy to access them. This grouping is achieved via Labels and Selectors (see above).<br />
<br />
So, for our example, we would use Selectors (e.g., "<code>app==frontend</code>" and "<code>app==db</code>") to group our Pods into two logical groups. We can assign a name to the logical grouping, referred to as a "service name". In our example, we have created two Services, <code>frontend-svc</code> and <code>db-svc</code>, and they have the "<code>app==frontend</code>" and the "<code>app==db</code>" Selectors, respectively.<br />
<br />
The following is an example of a Service object:<br />
<pre><br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: frontend-svc<br />
spec:<br />
  selector:<br />
    app: frontend<br />
  ports:<br />
  - protocol: TCP<br />
    port: 80<br />
    targetPort: 5000<br />
</pre><br />
<br />
in which we are creating a <code>frontend-svc</code> Service by selecting all the Pods that have the Label "<code>app</code>" equal to "<code>frontend</code>". By default, each Service also gets an IP address, which is routable only inside the cluster. In our case, we have 172.17.0.4 and 172.17.0.5 IP addresses for our <code>frontend-svc</code> and <code>db-svc</code> Services, respectively. The IP address attached to each Service is also known as the ClusterIP for that Service.<br />
<br />
 +------------------------------------+<br />
 | select: app==frontend              |         container (app:frontend; 10.0.1.3)<br />
 | service=frontend-svc (172.17.0.4)  |------>  container (app:frontend; 10.0.1.4)<br />
 +------------------------------------+         container (app:frontend; 10.0.1.5)<br />
               ^<br />
              /<br />
             /<br />
    user/client<br />
             \<br />
              \<br />
               v<br />
 +------------------------------------+<br />
 | select: app==db                    |------>  container (app:db; 10.0.1.10)<br />
 | service=db-svc (172.17.0.5)        |<br />
 +------------------------------------+<br />
<br />
The user/client now connects to a Service via ''its'' IP address, which forwards the traffic to one of the Pods attached to it. A Service does the load balancing while selecting the Pods for forwarding the data/traffic.<br />
<br />
While forwarding the traffic from the Service, we can select the target port on the Pod. In our example, for <code>frontend-svc</code>, we will receive requests from the user/client on port 80. We will then forward these requests to one of the attached Pods on port 5000. If the target port is not defined explicitly, then traffic will be forwarded to Pods on the port on which the Service receives traffic.<br />
<br />
A tuple of a Pod's IP address and the <code>targetPort</code> is referred to as a ''Service Endpoint''. In our case, <code>frontend-svc</code> has 3 Endpoints: <code>10.0.1.3:5000</code>, <code>10.0.1.4:5000</code>, and <code>10.0.1.5:5000</code>.<br />
<br />
===kube-proxy===<br />
All of the Worker Nodes run a daemon called kube-proxy, which watches the API Server on the Master Node for the addition and removal of Services and endpoints. For each new Service, on each node, kube-proxy configures the IPtables rules to capture the traffic for its ClusterIP and forwards it to one of the endpoints. When the Service is removed, kube-proxy removes the IPtables rules on all nodes as well.<br />
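<br />
For example, on a Worker Node where kube-proxy runs in its default iptables mode, the generated rules can be inspected with something like:<br />
 $ sudo iptables -t nat -L KUBE-SERVICES -n | head<br />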
<br />
===Service discovery===<br />
As Services are the primary mode of communication in Kubernetes, we need a way to discover them at runtime. Kubernetes supports two methods of discovering a Service:<br />
<br />
;Environment Variables : As soon as the Pod starts on any Worker Node, the kubelet daemon running on that node adds a set of environment variables in the Pod for all active Services. For example, if we have an active Service called <code>redis-master</code>, which exposes port 6379, and its ClusterIP is 172.17.0.6, then, on a newly created Pod, we can see the following environment variables:<br />
<br />
REDIS_MASTER_SERVICE_HOST=172.17.0.6<br />
REDIS_MASTER_SERVICE_PORT=6379<br />
REDIS_MASTER_PORT=tcp://172.17.0.6:6379<br />
REDIS_MASTER_PORT_6379_TCP=tcp://172.17.0.6:6379<br />
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp<br />
REDIS_MASTER_PORT_6379_TCP_PORT=6379<br />
REDIS_MASTER_PORT_6379_TCP_ADDR=172.17.0.6<br />
<br />
With this solution, we need to be careful while ordering our Services, as the Pods will not have the environment variables set for Services which are created after the Pods are created.<br />
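<br />
To verify, one could exec into a Pod created ''after'' the Service and inspect its environment (the Pod name is a placeholder):<br />
 $ kubectl exec mypod -- env | grep REDIS_MASTER<br />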
<br />
;DNS : Kubernetes has an add-on for DNS, which creates a DNS record for each Service in the format <code>my-svc.my-namespace.svc.cluster.local</code>. Services within the same Namespace can reach other Services with just their name. For example, if we add a Service <code>redis-master</code> in the <code>my-ns</code> Namespace, then all the Pods in the same Namespace can reach the redis Service just by using its name, <code>redis-master</code>. Pods from other Namespaces can reach the Service by adding the respective Namespace as a suffix, like <code>redis-master.my-ns</code>.<br />
: This is the most common and highly recommended solution. For example, in the diagram in the previous section, we saw that an internal DNS is configured, which maps our Services <code>frontend-svc</code> and <code>db-svc</code> to 172.17.0.4 and 172.17.0.5, respectively.<br />
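: For example, assuming a <code>redis-master</code> Service exists in the current Namespace, a throwaway busybox Pod can resolve it by name:<br />
 $ kubectl run dnstest --image=busybox --restart=Never -ti -- nslookup redis-master<br />
 $ kubectl delete pod dnstest<br />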
<br />
===Service Type===<br />
While defining a Service, we can also choose its access scope. We can decide whether the Service:<br />
<br />
* is only accessible within the cluster;<br />
* is accessible from within the cluster and the external world; or<br />
* maps to an external entity which resides outside the cluster.<br />
<br />
Access scope is decided by ''ServiceType'', which can be mentioned when creating the Service.<br />
<br />
;ClusterIP : (the default ''ServiceType''). A Service gets its Virtual IP address using the ClusterIP. That IP address is used for communicating with the Service and is accessible only within the cluster. <br />
<br />
;NodePort : With this ''ServiceType'', in addition to creating a ClusterIP, a port from the range '''30000-32767''' is mapped to the respective service from all the Worker Nodes. For example, if the mapped NodePort is 32233 for the service <code>frontend-svc</code>, then, if we connect to any Worker Node on port 32233, the node would redirect all the traffic to the assigned ClusterIP (172.17.0.4).<br />
: By default, while exposing a NodePort, a random port is automatically selected by the Kubernetes Master from the port range '''30000-32767'''. If we do not want a dynamically assigned NodePort, we can specify a port number from that range while creating the Service.<br />
: The NodePort ServiceType is useful when we want to make our services accessible from the external world. The end-user connects to the Worker Nodes on the specified port, which forwards the traffic to the applications running inside the cluster. To access the application from the external world, administrators can configure a reverse proxy outside the Kubernetes cluster and map the specific endpoint to the respective port on the Worker Nodes.<br />
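: As a sketch, the relevant part of a Service definition pinning the NodePort explicitly (using the values from the example above) would be:<br />
<pre><br />
spec:<br />
  type: NodePort<br />
  selector:<br />
    app: frontend<br />
  ports:<br />
  - port: 80<br />
    targetPort: 5000<br />
    nodePort: 32233<br />
</pre><br />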
<br />
;LoadBalancer: With this ''ServiceType'', we have the following:<br />
:* NodePort and ClusterIP Services are automatically created, and the external load balancer will route to them;<br />
:* The Services are exposed at a static port on each Worker Node; and<br />
:* The Service is exposed externally using the underlying Cloud provider's load balancer feature.<br />
: The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of Load Balancers and has the respective support in Kubernetes, as is the case with the Google Cloud Platform and AWS.<br />
<br />
;ExternalIP : A Service can be mapped to an ExternalIP address if it can route to one or more of the Worker Nodes. Traffic that is ingressed into the cluster with the ExternalIP (as destination IP) on the Service port, gets routed to one of the Service endpoints. (Note that ExternalIPs are not managed by Kubernetes. The cluster administrator(s) must have configured the routing to map the ExternalIP address to one of the nodes.)<br />
<br />
;ExternalName : a special ''ServiceType'', which has no Selectors and does not define any endpoints. When accessed within the cluster, it returns a CNAME record of an externally configured service.<br />
: The primary use case of this ServiceType is to make externally configured services like <code>my-database.example.com</code> available inside the cluster, using just the name, like <code>my-database</code>, to other services inside the same Namespace.<br />
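: A minimal ExternalName Service for the example just given:<br />
<pre><br />
---<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: my-database<br />
spec:<br />
  type: ExternalName<br />
  externalName: my-database.example.com<br />
</pre><br />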
<br />
===Deploying an application===<br />
<br />
<pre><br />
$ kubectl create -f - <<EOF<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
  name: webserver<br />
spec:<br />
  replicas: 3<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: webserver<br />
    spec:<br />
      containers:<br />
      - name: webserver<br />
        image: nginx:alpine<br />
        ports:<br />
        - containerPort: 80<br />
EOF<br />
</pre><br />
<br />
<pre><br />
$ kubectl create -f - <<EOF<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
  name: web-service<br />
  labels:<br />
    run: web-service<br />
spec:<br />
  type: NodePort<br />
  ports:<br />
  - port: 80<br />
    protocol: TCP<br />
  selector:<br />
    app: webserver<br />
EOF<br />
</pre><br />
<br />
$ kubectl get service<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h<br />
web-service NodePort 10.104.107.132 <none> 80:32610/TCP 7m<br />
<br />
Note that "<code>32610</code>" port.<br />
<br />
* Get the IP address of your Minikube k8s cluster<br />
$ minikube ip<br />
192.168.99.100<br />
#~OR~<br />
$ minikube service web-service --url<br />
<nowiki>http://192.168.99.100:32610</nowiki><br />
<br />
* Now, check that your web service is serving up a default Nginx website:<br />
$ curl -I <nowiki>http://192.168.99.100:32610</nowiki><br />
HTTP/1.1 200 OK<br />
Server: nginx/1.13.8<br />
Date: Thu, 11 Jan 2018 00:27:51 GMT<br />
Content-Type: text/html<br />
Content-Length: 612<br />
Last-Modified: Wed, 10 Jan 2018 04:10:03 GMT<br />
Connection: keep-alive<br />
ETag: "5a55921b-264"<br />
Accept-Ranges: bytes<br />
<br />
Looks good!<br />
<br />
Finally, destroy the webserver deployment:<br />
$ kubectl delete deployments webserver<br />
<br />
===Using Ingress with Minikube===<br />
<br />
* First check that the Ingress add-on is enabled:<br />
$ minikube addons list | grep ingress<br />
- ingress: disabled<br />
<br />
If it is not, enable it with:<br />
$ minikube addons enable ingress<br />
$ minikube addons list | grep ingress<br />
- ingress: enabled<br />
<br />
* Create an Echo Server Deployment:<br />
<pre><br />
$ cat << EOF >deploy-echoserver.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
labels:<br />
run: echoserver<br />
name: echoserver<br />
namespace: default<br />
spec:<br />
replicas: 1<br />
selector:<br />
matchLabels:<br />
run: echoserver<br />
template:<br />
metadata:<br />
labels:<br />
run: echoserver<br />
spec:<br />
containers:<br />
- image: gcr.io/google_containers/echoserver:1.4<br />
imagePullPolicy: IfNotPresent<br />
name: echoserver<br />
ports:<br />
- containerPort: 8080<br />
protocol: TCP<br />
dnsPolicy: ClusterFirst<br />
restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-echoserver.yml<br />
<br />
* Create the Cheddar cheese Deployment:<br />
<pre><br />
$ cat << EOF >deploy-cheddar-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
labels:<br />
run: cheddar-cheese<br />
name: cheddar-cheese<br />
namespace: default<br />
spec:<br />
replicas: 1<br />
selector:<br />
matchLabels:<br />
run: cheddar-cheese<br />
template:<br />
metadata:<br />
labels:<br />
run: cheddar-cheese<br />
spec:<br />
containers:<br />
- image: errm/cheese:cheddar<br />
imagePullPolicy: IfNotPresent<br />
name: cheddar-cheese<br />
ports:<br />
- containerPort: 80<br />
protocol: TCP<br />
dnsPolicy: ClusterFirst<br />
restartPolicy: Always<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f deploy-cheddar-cheese.yml<br />
<br />
* Create the Stilton cheese Deployment:<br />
<pre><br />
$ cat << EOF >deploy-stilton-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
labels:<br />
run: stilton-cheese<br />
name: stilton-cheese<br />
namespace: default<br />
spec:<br />
replicas: 1<br />
selector:<br />
matchLabels:<br />
run: stilton-cheese<br />
template:<br />
metadata:<br />
labels:<br />
run: stilton-cheese<br />
spec:<br />
containers:<br />
- image: errm/cheese:stilton<br />
imagePullPolicy: IfNotPresent<br />
name: stilton-cheese<br />
ports:<br />
- containerPort: 80<br />
protocol: TCP<br />
dnsPolicy: ClusterFirst<br />
restartPolicy: Always<br />
EOF<br />
</pre><br />
 $ kubectl create --validate -f deploy-stilton-cheese.yml<br />
<br />
* Create the Echo Server Service:<br />
<pre><br />
$ cat << EOF >svc-echoserver.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
labels:<br />
run: echoserver<br />
name: echoserver<br />
namespace: default<br />
spec:<br />
externalTrafficPolicy: Cluster<br />
ports:<br />
- nodePort: 31116<br />
port: 8080<br />
protocol: TCP<br />
targetPort: 8080<br />
selector:<br />
run: echoserver<br />
sessionAffinity: None<br />
type: NodePort<br />
status:<br />
loadBalancer: {}<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-echoserver.yml<br />
<br />
* Create the Cheddar cheese Service:<br />
<pre><br />
$ cat << EOF >svc-cheddar-cheese.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
labels:<br />
run: cheddar-cheese<br />
name: cheddar-cheese<br />
namespace: default<br />
spec:<br />
externalTrafficPolicy: Cluster<br />
ports:<br />
- nodePort: 32467<br />
port: 80<br />
protocol: TCP<br />
targetPort: 80<br />
selector:<br />
run: cheddar-cheese<br />
sessionAffinity: None<br />
type: NodePort<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-cheddar-cheese.yml<br />
<br />
* Create the Stilton cheese Service:<br />
<pre><br />
$ cat << EOF >svc-stilton-cheese.yml<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
labels:<br />
run: stilton-cheese<br />
name: stilton-cheese<br />
namespace: default<br />
spec:<br />
externalTrafficPolicy: Cluster<br />
ports:<br />
- nodePort: 30197<br />
port: 80<br />
protocol: TCP<br />
targetPort: 80<br />
selector:<br />
run: stilton-cheese<br />
sessionAffinity: None<br />
type: NodePort<br />
status:<br />
loadBalancer: {}<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f svc-stilton-cheese.yml<br />
<br />
* Create the Ingress for the above Services:<br />
<pre><br />
$ cat << EOF >ingress-cheese.yml<br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
name: ingress-cheese<br />
annotations:<br />
nginx.ingress.kubernetes.io/rewrite-target: /<br />
spec:<br />
backend:<br />
serviceName: default-http-backend<br />
servicePort: 80<br />
rules:<br />
- host: myminikube.info<br />
http:<br />
paths:<br />
- path: /<br />
backend:<br />
serviceName: echoserver<br />
servicePort: 8080<br />
- host: cheeses.all<br />
http:<br />
paths:<br />
- path: /stilton<br />
backend:<br />
serviceName: stilton-cheese<br />
servicePort: 80<br />
- path: /cheddar<br />
backend:<br />
serviceName: cheddar-cheese<br />
servicePort: 80<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f ingress-cheese.yml<br />
<br />
* Check that everything is up:<br />
<pre><br />
$ kubectl get all<br />
NAME READY STATUS RESTARTS AGE<br />
pod/cheddar-cheese-d6d6587c7-4bgcz 1/1 Running 0 12m<br />
pod/echoserver-55f97d5bff-pdv65 1/1 Running 0 12m<br />
pod/stilton-cheese-6d64cbc79-g7h4w 1/1 Running 0 12m<br />
<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
service/cheddar-cheese NodePort 10.109.238.92 <none> 80:32467/TCP 12m<br />
service/echoserver NodePort 10.98.60.194 <none> 8080:31116/TCP 12m<br />
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h<br />
service/stilton-cheese NodePort 10.108.175.207 <none> 80:30197/TCP 12m<br />
<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
deployment.apps/cheddar-cheese 1 1 1 1 12m<br />
deployment.apps/echoserver 1 1 1 1 12m<br />
deployment.apps/stilton-cheese 1 1 1 1 12m<br />
<br />
NAME DESIRED CURRENT READY AGE<br />
replicaset.apps/cheddar-cheese-d6d6587c7 1 1 1 12m<br />
replicaset.apps/echoserver-55f97d5bff 1 1 1 12m<br />
replicaset.apps/stilton-cheese-6d64cbc79 1 1 1 12m<br />
<br />
$ kubectl get ing<br />
NAME HOSTS ADDRESS PORTS AGE<br />
ingress-cheese myminikube.info,cheeses.all 10.0.2.15 80 12m<br />
</pre><br />
<br />
* Add your host aliases:<br />
$ echo "$(minikube ip) myminikube.info cheeses.all" | sudo tee -a /etc/hosts<br />
<br />
* Now, either using your browser or [[curl]], check that you can reach all of the endpoints defined in the Ingress:<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null cheeses.all/cheddar/ # Should return '200'<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null cheeses.all/stilton/ # Should return '200'<br />
$ curl -sI -w "%{http_code}\n" -o /dev/null myminikube.info # Should return '200'<br />
<br />
* You can also see the Nginx logs for the above requests with:<br />
$ kubectl --namespace kube-system logs \<br />
--selector app.kubernetes.io/name=nginx-ingress-controller<br />
<br />
* You can also view the Nginx configuration file (and the settings created by the above Ingress) with:<br />
$ NGINX_POD=$(kubectl --namespace kube-system get pods \<br />
--selector app.kubernetes.io/name=nginx-ingress-controller \<br />
--output jsonpath='{.items[0].metadata.name}')<br />
$ kubectl --namespace kube-system exec -it ${NGINX_POD} -- cat /etc/nginx/nginx.conf<br />
<br />
* Get the version of the Nginx Ingress controller installed:<br />
<pre><br />
$ kubectl --namespace kube-system exec -it ${NGINX_POD} -- /nginx-ingress-controller --version<br />
-------------------------------------------------------------------------------<br />
NGINX Ingress controller<br />
Release: 0.19.0<br />
Build: git-05025d6<br />
Repository: https://github.com/kubernetes/ingress-nginx.git<br />
-------------------------------------------------------------------------------<br />
</pre><br />
<br />
==Kubectl==<br />
<br />
<code>kubectl</code> controls the Kubernetes cluster manager.<br />
<br />
* View your current configuration:<br />
$ kubectl config view<br />
<br />
* Switch between clusters:<br />
$ kubectl config use-context <context_name><br />
<br />
* Remove a cluster:<br />
$ kubectl config unset contexts.<context_name><br />
$ kubectl config unset users.<user_name><br />
$ kubectl config unset clusters.<cluster_name><br />
<br />
* Sort Pods by age:<br />
 $ kubectl get po --sort-by=.status.startTime<br />
$ kubectl get pods --all-namespaces --sort-by=.metadata.creationTimestamp<br />
<br />
* Back up all primitives deployed in a given k8s cluster:<br />
<pre><br />
$ kubectl api-resources --verbs=list --namespaced -o name \<br />
| xargs -n1 -I{} bash -c "kubectl get {} --all-namespaces -oyaml && echo ---" \<br />
> k8s_backup.yaml<br />
</pre><br />
<br />
===kubectl explain===<br />
<br />
;List the fields for supported resources.<br />
<br />
* Get the documentation of a resource (aka "kind") and its fields:<br />
<pre><br />
$ kubectl explain deployment<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
DESCRIPTION:<br />
Deployment enables declarative updates for Pods and ReplicaSets.<br />
<br />
FIELDS:<br />
apiVersion <string><br />
APIVersion defines the versioned schema of this representation of an<br />
object. Servers should convert recognized schemas to the latest internal<br />
value, and may reject unrecognized values. More info:<br />
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources<br />
<br />
kind <string><br />
Kind is a string value representing the REST resource this object<br />
represents. Servers may infer this from the endpoint the client submits<br />
requests to. Cannot be updated. In CamelCase. More info:<br />
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds<br />
<br />
metadata <Object><br />
Standard object metadata.<br />
<br />
spec <Object><br />
Specification of the desired behavior of the Deployment.<br />
<br />
status <Object><br />
Most recently observed status of the Deployment<br />
</pre><br />
<br />
* Get a list of all the resource types and their latest supported version:<br />
<pre><br />
$ for kind in $(kubectl api-resources | tail +2 | awk '{print $1}'); do<br />
kubectl explain ${kind};<br />
done | grep -E "^KIND:|^VERSION:"<br />
<br />
KIND: Binding<br />
VERSION: v1<br />
KIND: ComponentStatus<br />
VERSION: v1<br />
KIND: ConfigMap<br />
VERSION: v1<br />
...<br />
</pre><br />
<br />
* Get a list of ''all'' allowable fields for a given primitive:<br />
<pre><br />
$ kubectl explain deployment --recursive | head<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
DESCRIPTION:<br />
Deployment enables declarative updates for Pods and ReplicaSets.<br />
<br />
FIELDS:<br />
apiVersion <string><br />
kind <string><br />
metadata <Object><br />
</pre><br />
<br />
* Get documentation ("man page"-style) for a given field in a given primitive:<br />
<pre><br />
$ kubectl explain deployment.status.availableReplicas<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
<br />
FIELD: availableReplicas <integer><br />
<br />
DESCRIPTION:<br />
Total number of available pods (ready for at least minReadySeconds)<br />
targeted by this deployment.<br />
</pre><br />
<br />
===Merge kubeconfig files===<br />
<br />
* Reference which kubeconfig files you wish to merge:<br />
$ export KUBECONFIG=$HOME/.kube/dev.yaml:$HOME/.kube/prod.yaml<br />
<br />
* Flatten them:<br />
$ kubectl config view --flatten >> $HOME/.kube/config<br />
<br />
* Unset:<br />
$ unset KUBECONFIG<br />
<br />
Merge complete.<br />
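* To verify, list the contexts now available from the merged kubeconfig:<br />
 $ kubectl config get-contexts<br />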
<br />
==Namespaces==<br />
<br />
See: [https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ Namespaces] in the official documentation.<br />
<br />
; Create a Namespace<br />
<br />
<pre><br />
apiVersion: v1<br />
kind: Namespace<br />
metadata:<br />
name: dev<br />
</pre><br />
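The same Namespace can also be created imperatively, and then listed:<br />
 $ kubectl create namespace dev<br />
 $ kubectl get namespaces<br />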
<br />
==Pods==<br />
<br />
; Create a Pod that has an Init Container<br />
<br />
In this example, I will create a Pod that has one application Container and one Init Container. The init container runs to completion before the application container starts.<br />
<br />
<pre><br />
$ cat << EOF >init-demo.yml<br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: init-demo<br />
labels:<br />
app: demo<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx<br />
ports:<br />
- containerPort: 80<br />
volumeMounts:<br />
- name: workdir<br />
mountPath: /usr/share/nginx/html<br />
# These containers are run during pod initialization<br />
initContainers:<br />
- name: install<br />
image: busybox<br />
command:<br />
- wget<br />
- "-O"<br />
- "/work-dir/index.html"<br />
- https://example.com<br />
volumeMounts:<br />
- name: workdir<br />
mountPath: "/work-dir"<br />
dnsPolicy: Default<br />
volumes:<br />
- name: workdir<br />
emptyDir: {}<br />
EOF<br />
</pre><br />
<br />
The above Pod YAML will first create the init container using the busybox image, which downloads the HTML of the example.com website and saves it to a file (<code>index.html</code>) on the Pod Volume called "workdir". After the init container completes, the Nginx container starts and serves that <code>index.html</code> on port 80 (via the volume mount, the file appears at <code>/usr/share/nginx/html/index.html</code> inside the Nginx container).<br />
<br />
* Now, create this Pod:<br />
$ kubectl create --validate -f init-demo.yml<br />
<br />
* Create a Service:<br />
<pre><br />
$ cat << EOF >example.yml<br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
name: example<br />
spec:<br />
ports:<br />
- port: 8000<br />
targetPort: 80<br />
protocol: TCP<br />
selector:<br />
app: demo<br />
EOF<br />
</pre><br />
 $ kubectl create --validate -f example.yml<br />
<br />
* Check that the Service serves the page the init container downloaded from <nowiki>https://example.com</nowiki>:<br />
 $ curl -sI $(kubectl get svc/example -o jsonpath='{.spec.clusterIP}'):8000 | grep ^HTTP<br />
HTTP/1.1 200 OK<br />
<br />
==Deployments==<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Deployment]'' controller provides declarative updates for Pods and ReplicaSets.<br />
<br />
You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.<br />
<br />
; Creating a Deployment<br />
<br />
The following is an example of a Deployment. It creates a ReplicaSet to bring up three [https://hub.docker.com/_/nginx/ Nginx] Pods:<br />
<pre><br />
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
name: nginx-deployment<br />
labels:<br />
app: nginx<br />
spec:<br />
replicas: 3<br />
selector:<br />
matchLabels:<br />
app: nginx<br />
template:<br />
metadata:<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
</pre><br />
<br />
* Check the syntax of the Deployment (YAML):<br />
$ kubectl create -f nginx-deployment.yml --dry-run<br />
deployment.apps/nginx-deployment created (dry run)<br />
<br />
* Create the Deployment:<br />
$ kubectl create --record -f nginx-deployment.yml <br />
deployment "nginx-deployment" created<br />
Note: By appending <code>--record</code> to the above command, we are telling the API to record the current command in the annotations of the created or updated resource. This is useful for future review, such as investigating which commands were executed in each Deployment revision.<br />
<br />
* Get information about our Deployment:<br />
$ kubectl get deployments<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment 3 3 3 3 24s<br />
<br />
$ kubectl describe deployment/nginx-deployment<br />
<pre><br />
Name: nginx-deployment<br />
Namespace: default<br />
CreationTimestamp: Tue, 30 Jan 2018 23:28:43 +0000<br />
Labels: app=nginx<br />
Annotations: deployment.kubernetes.io/revision=1<br />
kubernetes.io/change-cause=kubectl create --record=true --filename=nginx-deployment.yml<br />
Selector: app=nginx<br />
Replicas: 3 desired | 3 updated | 3 total | 0 available | 3 unavailable<br />
StrategyType: RollingUpdate<br />
MinReadySeconds: 0<br />
RollingUpdateStrategy: 25% max unavailable, 25% max surge<br />
Pod Template:<br />
Labels: app=nginx<br />
Containers:<br />
nginx:<br />
Image: nginx:1.7.9<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Conditions:<br />
Type Status Reason<br />
---- ------ ------<br />
Available False MinimumReplicasUnavailable<br />
Progressing True ReplicaSetUpdated<br />
OldReplicaSets: <none><br />
NewReplicaSet: nginx-deployment-6c54bd5869 (3/3 replicas created)<br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal ScalingReplicaSet 28s deployment-controller Scaled up replica set nginx-deployment-6c54bd5869 to 3<br />
</pre><br />
<br />
* Get information about the ReplicaSet created by the above Deployment:<br />
$ kubectl get rs<br />
NAME DESIRED CURRENT READY AGE<br />
nginx-deployment-6c54bd5869 3 3 3 3m<br />
<br />
$ kubectl describe rs/nginx-deployment-6c54bd5869<br />
<pre><br />
Name: nginx-deployment-6c54bd5869<br />
Namespace: default<br />
Selector: app=nginx,pod-template-hash=2710681425<br />
Labels: app=nginx<br />
pod-template-hash=2710681425<br />
Annotations: deployment.kubernetes.io/desired-replicas=3<br />
deployment.kubernetes.io/max-replicas=4<br />
deployment.kubernetes.io/revision=1<br />
kubernetes.io/change-cause=kubectl create --record=true --filename=nginx-deployment.yml<br />
Controlled By: Deployment/nginx-deployment<br />
Replicas: 3 current / 3 desired<br />
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed<br />
Pod Template:<br />
Labels: app=nginx<br />
pod-template-hash=2710681425<br />
Containers:<br />
nginx:<br />
Image: nginx:1.7.9<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-k9mh4<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-pphjt<br />
Normal SuccessfulCreate 4m replicaset-controller Created pod: nginx-deployment-6c54bd5869-n4fj5<br />
</pre><br />
<br />
* Get information about the Pods created by this Deployment:<br />
$ kubectl get pods --show-labels -l app=nginx -o wide<br />
NAME READY STATUS RESTARTS AGE IP NODE LABELS<br />
nginx-deployment-6c54bd5869-k9mh4 1/1 Running 0 5m 10.244.1.5 k8s.worker1.local app=nginx,pod-template-hash=2710681425<br />
nginx-deployment-6c54bd5869-n4fj5 1/1 Running 0 5m 10.244.1.6 k8s.worker2.local app=nginx,pod-template-hash=2710681425<br />
nginx-deployment-6c54bd5869-pphjt 1/1 Running 0 5m 10.244.1.7 k8s.worker3.local app=nginx,pod-template-hash=2710681425<br />
<br />
;Updating a Deployment<br />
<br />
Note: A Deployment's rollout is triggered if, and only if, the Deployment's pod template (that is, <code>.spec.template</code>) is changed (for example, if the labels or container images of the template are updated). Other updates, such as scaling the Deployment, do not trigger a rollout.<br />
<br />
Suppose that we want to update the Nginx Pods in the above Deployment to use the <code>nginx:1.9.1</code> image instead of the <code>nginx:1.7.9</code> image.<br />
<br />
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
deployment "nginx-deployment" image updated<br />
<br />
Alternatively, we can edit the Deployment and change <code>.spec.template.spec.containers[0].image</code> from <code>nginx:1.7.9</code> to <code>nginx:1.9.1</code>:<br />
<br />
$ kubectl edit deployment/nginx-deployment<br />
deployment "nginx-deployment" edited<br />
<br />
* Check on the rollout status:<br />
<pre><br />
$ kubectl rollout status deployment/nginx-deployment<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...<br />
Waiting for rollout to finish: 1 old replicas are pending termination...<br />
Waiting for rollout to finish: 1 old replicas are pending termination...<br />
deployment "nginx-deployment" successfully rolled out<br />
</pre><br />
<br />
* Get information about the updated Deployment:<br />
$ kubectl get deploy<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deployment 3 3 3 3 18m<br />
<br />
$ kubectl get rs<br />
NAME DESIRED CURRENT READY AGE<br />
nginx-deployment-5964dfd755 3 3 3 1m # <- new ReplicaSet using nginx:1.9.1<br />
nginx-deployment-6c54bd5869 0 0 0 17m # <- old ReplicaSet using nginx:1.7.9<br />
<br />
$ kubectl rollout history deployment/nginx-deployment<br />
deployments "nginx-deployment"<br />
REVISION CHANGE-CAUSE<br />
1 kubectl create --record=true --filename=nginx-deployment.yml<br />
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
<br />
$ kubectl rollout history deployment/nginx-deployment --revision=2<br />
<br />
deployments "nginx-deployment" with revision #2<br />
Pod Template:<br />
Labels: app=nginx<br />
pod-template-hash=1520898311<br />
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1<br />
Containers:<br />
nginx:<br />
Image: nginx:1.9.1<br />
Port: 80/TCP<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
<br />
; Rolling back to a previous revision<br />
<br />
Undo the current rollout and roll back to the previous revision:<br />
$ kubectl rollout undo deployment/nginx-deployment<br />
deployment "nginx-deployment" rolled back<br />
<br />
Alternatively, you can roll back to a specific revision by specifying it with <code>--to-revision</code>:<br />
$ kubectl rollout undo deployment/nginx-deployment --to-revision=1<br />
deployment "nginx-deployment" rolled back<br />
<br />
==Volume management==<br />
On-disk files in a container are ephemeral, which presents some problems for non-trivial applications when running in containers. First, when a container crashes, kubelet will restart it, but the files will be lost (i.e., the container starts with a clean state). Second, when running containers together in a Pod it is often necessary to share files between those containers. The Kubernetes ''[https://kubernetes.io/docs/concepts/storage/volumes/ Volumes]'' abstraction solves both of these problems. A Volume is essentially a directory backed by a storage medium. The storage medium and its content are determined by the Volume Type.<br />
<br />
In Kubernetes, a Volume is attached to a Pod and shared among the containers of that Pod. The Volume has the same life span as the Pod, and it outlives the containers of the Pod &mdash; this allows data to be preserved across container restarts.<br />
<br />
Kubernetes resolves the problem of persistent storage with the Persistent Volume subsystem, which provides APIs for users and administrators to manage and consume storage. To manage the Volume, it uses the PersistentVolume (PV) API resource type, and to consume it, it uses the PersistentVolumeClaim (PVC) API resource type.<br />
<br />
; [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes PersistentVolume] (PV) : a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.<br />
<br />
; [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims PersistentVolumeClaim] (PVC) : a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Persistent Volume Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).<br />
<br />
A Persistent Volume is network-attached storage in the cluster, which is provisioned by the administrator.<br />
<br />
Persistent Volumes can be provisioned statically by the administrator, or dynamically, based on the StorageClass resource. A StorageClass contains pre-defined provisioners and parameters to create a Persistent Volume.<br />
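As a sketch, a StorageClass using the (in-tree) AWS EBS provisioner might look like the following (the name and parameters are illustrative):<br />
<pre><br />
apiVersion: storage.k8s.io/v1<br />
kind: StorageClass<br />
metadata:<br />
  name: fast<br />
provisioner: kubernetes.io/aws-ebs<br />
parameters:<br />
  type: gp2<br />
</pre><br />
A PVC can then request dynamic provisioning from it by setting <code>storageClassName: fast</code> in its spec.<br />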
<br />
A PersistentVolumeClaim (PVC) is a request for storage by a user. Users request Persistent Volume resources based on size, access modes, etc. Once a suitable Persistent Volume is found, it is bound to a Persistent Volume Claim. After a successful bind, the Persistent Volume Claim resource can be used in a Pod. Once users finish their work, the attached Persistent Volumes can be released. The underlying Persistent Volumes can then be reclaimed and recycled for future usage. See [https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims Persistent Volumes] for details.<br />
<br />
;Access Modes<br />
* Each of the following access modes ''must'' be supported by the storage resource provider (e.g., NFS, AWS EBS, etc.) if they are to be used.<br />
* ReadWriteOnce (RWO) &mdash; volume can be mounted as read/write by one node only.<br />
* ReadOnlyMany (ROX) &mdash; volume can be mounted read-only by many nodes.<br />
* ReadWriteMany (RWX) &mdash; volume can be mounted read/write by many nodes.<br />
A volume can only be mounted using one access mode at a time, regardless of the modes that are supported.<br />
<br />
; Example #1 - Using Host Volumes<br />
As an example of how to use volumes, we can modify our previous "webserver" Deployment (see above) to look like the following:<br />
<br />
$ cat webserver.yml<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
name: webserver<br />
spec:<br />
replicas: 3<br />
template:<br />
metadata:<br />
labels:<br />
app: webserver<br />
spec:<br />
containers:<br />
- name: webserver<br />
image: nginx:alpine<br />
ports:<br />
- containerPort: 80<br />
volumeMounts:<br />
- name: hostvol<br />
mountPath: /usr/share/nginx/html<br />
volumes:<br />
- name: hostvol<br />
hostPath:<br />
path: /home/docker/vol<br />
</pre><br />
<br />
And use the same Service:<br />
$ cat webserver-svc.yml<br />
<pre><br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
name: web-service<br />
labels:<br />
run: web-service<br />
spec:<br />
type: NodePort<br />
ports:<br />
- port: 80<br />
protocol: TCP<br />
selector:<br />
app: webserver<br />
</pre><br />
<br />
Then create the deployment and service:<br />
$ kubectl create -f webserver.yml<br />
$ kubectl create -f webserver-svc.yml<br />
<br />
Then, SSH into the Minikube VM and run the following commands:<br />
$ minikube ssh<br />
minikube> mkdir -p /home/docker/vol<br />
minikube> echo "Christoph testing" > /home/docker/vol/index.html<br />
minikube> exit<br />
<br />
Get the webserver IP and port:<br />
$ minikube ip<br />
192.168.99.100<br />
$ kubectl get svc/web-service -o json | jq '.spec.ports[].nodePort'<br />
32610<br />
# OR<br />
$ minikube service web-service --url<br />
<nowiki>http://192.168.99.100:32610</nowiki><br />
<br />
$ curl <nowiki>http://192.168.99.100:32610</nowiki><br />
Christoph testing<br />
<br />
; Example #2 - Using NFS<br />
<br />
* First, set up a host for your NFS server (e.g., install it with <code>`sudo apt-get install -y nfs-kernel-server`</code>).<br />
* On your NFS server, do the following:<br />
$ mkdir -p /var/nfs/general<br />
$ cat << EOF >>/etc/exports<br />
/var/nfs/general 10.100.1.2(rw,sync,no_subtree_check) 10.100.1.3(rw,sync,no_subtree_check) 10.100.1.4(rw,sync,no_subtree_check)<br />
EOF<br />
where the <code>10.x</code> IPs are the private IPs of your k8s nodes (both Master and Worker nodes).<br />
* Make sure to install <code>nfs-common</code> on each of the k8s nodes that will be connecting to the NFS server.<br />
<br />
Now, on the k8s Master node, create a Persistent Volume (PV) and Persistent Volume Claim (PVC):<br />
<br />
* Create a Persistent Volume (PV):<br />
$ cat << EOF >pv.yml<br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
name: mypv<br />
spec:<br />
capacity:<br />
storage: 1Gi<br />
volumeMode: Filesystem<br />
accessModes:<br />
- ReadWriteMany<br />
persistentVolumeReclaimPolicy: Recycle<br />
nfs:<br />
path: /var/nfs/general<br />
server: 10.100.1.10 # NFS Server's private IP<br />
readOnly: false<br />
EOF<br />
$ kubectl create --validate -f pv.yml<br />
$ kubectl get pv<br />
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE<br />
mypv 1Gi RWX Recycle Available<br />
* Create a Persistent Volume Claim (PVC):<br />
$ cat << EOF >pvc.yml<br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
name: nfs-pvc<br />
spec:<br />
accessModes:<br />
- ReadWriteMany<br />
resources:<br />
requests:<br />
storage: 1Gi<br />
EOF<br />
$ kubectl create --validate -f pvc.yml<br />
$ kubectl get pvc<br />
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE<br />
nfs-pvc Bound mypv 1Gi RWX<br />
$ kubectl get pv<br />
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE<br />
mypv 1Gi RWX Recycle Bound default/nfs-pvc 11m<br />
<br />
* Create a Pod:<br />
$ cat << EOF >nfs-pod.yml <br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: nfs-pod<br />
labels:<br />
name: nfs-pod<br />
spec:<br />
containers:<br />
- name: nfs-ctn<br />
image: busybox<br />
command:<br />
- sleep<br />
- "3600"<br />
volumeMounts:<br />
- name: nfsvol<br />
mountPath: /tmp<br />
restartPolicy: Always<br />
securityContext:<br />
fsGroup: 65534<br />
runAsUser: 65534<br />
volumes:<br />
- name: nfsvol<br />
persistentVolumeClaim:<br />
claimName: nfs-pvc<br />
EOF<br />
$ kubectl create --validate -f nfs-pod.yml<br />
$ kubectl get pods -o wide<br />
NAME READY STATUS RESTARTS AGE IP NODE<br />
busybox 1/1 Running 9 2d 10.244.2.22 k8s.worker01.local<br />
<br />
* Get a shell from the <code>nfs-pod</code> Pod:<br />
$ kubectl exec -it nfs-pod -- sh<br />
/ $ df -h<br />
Filesystem Size Used Available Use% Mounted on<br />
172.31.119.58:/var/nfs/general<br />
19.3G 1.8G 17.5G 9% /tmp<br />
...<br />
/ $ touch /tmp/this-is-from-the-pod<br />
<br />
* On the NFS server:<br />
$ ls -l /var/nfs/general/<br />
total 0<br />
-rw-r--r-- 1 nobody nogroup 0 Jan 18 23:32 this-is-from-the-pod<br />
<br />
It works!<br />
<br />
==ConfigMaps and Secrets==<br />
While deploying an application, we may need to pass runtime parameters such as configuration details, passwords, etc. For example, let's assume we need to deploy ten different applications for our customers, and, for each customer, we just need to change the name of the company in the UI. Instead of creating ten different Docker images, we can just use the template image and pass each customer's name as a runtime parameter. In such cases, we can use the ConfigMap API resource. Similarly, when we want to pass sensitive information, we can use the Secret API resource. Think ''Secrets'' (for confidential data) and ''ConfigMaps'' (for non-confidential data).<br />
<br />
[https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/ ConfigMaps] allow you to decouple configuration artifacts from image content to keep containerized applications portable. Using ConfigMaps, we can pass configuration details as key-value pairs, which can be later consumed by Pods or any other system components, such as controllers. We can create ConfigMaps in two ways:<br />
<br />
* From literal values; and<br />
* From files.<br />
<br />
;ConfigMaps<br />
<br />
* Create a ConfigMap:<br />
$ kubectl create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2<br />
configmap "my-config" created<br />
$ kubectl get configmaps my-config -o yaml<br />
<pre><br />
apiVersion: v1<br />
data:<br />
key1: value1<br />
key2: value2<br />
kind: ConfigMap<br />
metadata:<br />
creationTimestamp: 2018-01-11T23:57:44Z<br />
name: my-config<br />
namespace: default<br />
resourceVersion: "117110"<br />
selfLink: /api/v1/namespaces/default/configmaps/my-config<br />
uid: 37a43e39-f72b-11e7-8370-08002721601f<br />
</pre><br />
$ kubectl describe configmap/my-config<br />
<pre><br />
Name: my-config<br />
Namespace: default<br />
Labels: <none><br />
Annotations: <none><br />
<br />
Data<br />
====<br />
key2:<br />
----<br />
value2<br />
key1:<br />
----<br />
value1<br />
Events: <none><br />
</pre><br />
<br />
; Create a ConfigMap from a configuration file<br />
<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: ConfigMap<br />
metadata:<br />
name: customer1<br />
data:<br />
TEXT1: Customer1_Company<br />
TEXT2: Welcomes You<br />
COMPANY: Customer1 Company Technology, LLC.<br />
EOF<br />
</pre><br />
<br />
We can get the values of the given keys as environment variables inside a Pod. In the following example, while creating the Deployment, we assign values to environment variables from the <code>customer1</code> ConfigMap:<br />
<pre><br />
....<br />
containers:<br />
- name: my-app<br />
image: foobar<br />
env:<br />
- name: MONGODB_HOST<br />
value: mongodb<br />
- name: TEXT1<br />
valueFrom:<br />
configMapKeyRef:<br />
name: customer1<br />
key: TEXT1<br />
- name: TEXT2<br />
valueFrom:<br />
configMapKeyRef:<br />
name: customer1<br />
key: TEXT2<br />
- name: COMPANY<br />
valueFrom:<br />
configMapKeyRef:<br />
name: customer1<br />
key: COMPANY<br />
....<br />
</pre><br />
With the above, we will get the <code>TEXT1</code> environment variable set to <code>Customer1_Company</code>, the <code>TEXT2</code> environment variable set to <code>Welcomes You</code>, and so on.<br />
<br />
We can also mount a ConfigMap as a Volume inside a Pod. For each key, we will see a file in the mount path, and the content of that file becomes the respective key's value. For details, see [https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#adding-configmap-data-to-a-volume here].<br />
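A minimal sketch of such a mount, reusing the <code>customer1</code> ConfigMap from above (the Pod name and mount path are illustrative):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: config-demo<br />
spec:<br />
  containers:<br />
  - name: app<br />
    image: busybox<br />
    command: ["sleep", "3600"]<br />
    volumeMounts:<br />
    - name: config-vol<br />
      mountPath: /etc/config<br />
  volumes:<br />
  - name: config-vol<br />
    configMap:<br />
      name: customer1<br />
</pre><br />
Inside the container, <code>/etc/config/TEXT1</code> would then contain <code>Customer1_Company</code>, and so on for the other keys.<br />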
<br />
You can also use a ConfigMap to configure your cluster to use, for example, 8.8.8.8 and 8.8.4.4 as its upstream DNS servers:<br />
<pre><br />
kind: ConfigMap<br />
apiVersion: v1<br />
metadata:<br />
name: kube-dns<br />
namespace: kube-system<br />
data:<br />
upstreamNameservers: |<br />
["8.8.8.8", "8.8.4.4"]<br />
</pre><br />
<br />
; Secrets<br />
<br />
Objects of type [https://kubernetes.io/docs/concepts/configuration/secret/ Secret] are intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a Secret is safer and more flexible than putting it verbatim in a pod definition or in a docker image.<br />
<br />
As an example, assume that we have a Wordpress blog application, in which our <code>wordpress</code> frontend connects to the [[MySQL]] database backend using a password. While creating the Deployment for <code>wordpress</code>, we can put the MySQL password in the Deployment's YAML file, but the password would not be protected. The password would be available to anyone who has access to the configuration file.<br />
<br />
In situations such as the one we just mentioned, the Secret object can help. With Secrets, we can share sensitive information like passwords, tokens, or keys in the form of key-value pairs, similar to ConfigMaps; thus, we can control how the information in a Secret is used, reducing the risk of accidental exposure. In Deployments or other system components, the Secret object is ''referenced'', without exposing its content.<br />
<br />
It is important to keep in mind that the Secret data is stored as plain text inside etcd. Administrators must limit access to the API Server and etcd.<br />
<br />
To create a Secret using the <code>`kubectl create secret`</code> command, we first need to create a file containing the password and then pass it as an argument.<br />
<br />
* Create a file with your MySQL password:<br />
$ echo mysqlpasswd | tr -d '\n' > password.txt<br />
<br />
* Create the ''Secret'':<br />
$ kubectl create secret generic mysql-passwd --from-file=password.txt<br />
$ kubectl describe secret/mysql-passwd<br />
<pre><br />
Name: mysql-passwd<br />
Namespace: default<br />
Labels: <none><br />
Annotations: <none><br />
<br />
Type: Opaque<br />
<br />
Data<br />
====<br />
password.txt: 11 bytes<br />
</pre><br />
<br />
We can also create a Secret manually, using a YAML configuration file. With Secrets, each data value must be encoded using base64. If we want to have a configuration file for our Secret, we must first get the base64 encoding of our password:<br />
<br />
$ cat password.txt | base64<br />
bXlzcWxwYXNzd2Q=<br />
<br />
and then use it in the configuration file:<br />
<pre><br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
name: mysql-passwd<br />
type: Opaque<br />
data:<br />
password: bXlzcWxwYXNzd2Q=<br />
</pre><br />
Note that base64 encoding does not do any encryption and anyone can easily decode it:<br />
<br />
$ echo "bXlzcWxwYXNzd2Q=" | base64 -d # => mysqlpasswd<br />
<br />
Therefore, make sure you do not commit a Secret's configuration file to source control.<br />
<br />
We can get Secrets to be used by containers in a Pod by mounting them as data volumes, or by exposing them as environment variables.<br />
<br />
We can reference a Secret and assign the value of its key as an environment variable (<code>WORDPRESS_DB_PASSWORD</code>):<br />
<pre><br />
.....<br />
spec:<br />
containers:<br />
- image: wordpress:4.7.3-apache<br />
name: wordpress<br />
env:<br />
- name: WORDPRESS_DB_HOST<br />
value: wordpress-mysql<br />
- name: WORDPRESS_DB_PASSWORD<br />
valueFrom:<br />
secretKeyRef:<br />
          name: mysql-passwd<br />
key: password.txt<br />
.....<br />
</pre><br />
<br />
Or, we can also mount a Secret as a Volume inside a Pod. A file would be created for each key mentioned in the Secret, whose content would be the respective value. See [https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod here] for details.<br />
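A sketch of such a mount for the <code>mysql-passwd</code> Secret created above (the volume name and mount path are illustrative):<br />
<pre><br />
.....<br />
spec:<br />
  containers:<br />
  - image: wordpress:4.7.3-apache<br />
    name: wordpress<br />
    volumeMounts:<br />
    - name: mysql-passwd-vol<br />
      mountPath: /etc/secrets<br />
      readOnly: true<br />
  volumes:<br />
  - name: mysql-passwd-vol<br />
    secret:<br />
      secretName: mysql-passwd<br />
.....<br />
</pre><br />
The container would then find the password at <code>/etc/secrets/password.txt</code> (one file per key in the Secret).<br />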
<br />
==Ingress==<br />
Among the ServiceTypes mentioned earlier, NodePort and LoadBalancer are the most often used. For the LoadBalancer ServiceType, we need support from the underlying infrastructure. Even with such support, we may not want to use it for every Service, as LoadBalancer resources are limited and can increase costs significantly. Managing the NodePort ServiceType can also be tricky at times, as we need to keep updating our proxy settings and keep track of the assigned ports. In this section, we will explore the Ingress API object, which is another method we can use to access our applications from the external world.<br />
<br />
An ''[https://kubernetes.io/docs/concepts/services-networking/ingress/ Ingress]'' is a collection of rules that allow inbound connections to reach the cluster Services. With Services, routing rules are attached to a given Service. They exist for as long as the Service exists. If we can somehow decouple the routing rules from the application, we can then update our application without worrying about its external access. This can be done using the Ingress resource. Ingress can provide load balancing, SSL/TLS termination, and name-based virtual hosting and/or routing.<br />
<br />
To allow the inbound connection to reach the cluster Services, Ingress configures a Layer 7 HTTP load balancer for Services and provides the following:<br />
<br />
* TLS (Transport Layer Security)<br />
* Name-based virtual hosting <br />
* Path-based routing<br />
* Custom rules.<br />
<br />
With Ingress, users do not connect directly to a Service. Users reach the Ingress endpoint, and, from there, the request is forwarded to the respective Service. You can see an example Ingress definition below:<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
name: web-ingress<br />
spec:<br />
rules:<br />
- host: blue.example.com<br />
http:<br />
paths:<br />
- backend: <br />
serviceName: blue-service<br />
servicePort: 80<br />
- host: green.example.com<br />
http:<br />
paths:<br />
- backend:<br />
serviceName: green-service<br />
servicePort: 80<br />
</pre><br />
<br />
According to the example just provided, user requests to both <code>blue.example.com</code> and <code>green.example.com</code> would go to the same Ingress endpoint and, from there, would be forwarded to <code>blue-service</code> and <code>green-service</code>, respectively. This is an example of a Name-Based Virtual Hosting Ingress rule.<br />
<br />
We can also have Fan Out Ingress rules, in which we send requests like <code>example.com/blue</code> and <code>example.com/green</code>, which would be forwarded to <code>blue-service</code> and <code>green-service</code>, respectively.<br />
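A sketch of such a fan-out rule, in the same format as the example above:<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Ingress<br />
metadata:<br />
  name: fanout-ingress<br />
spec:<br />
  rules:<br />
  - host: example.com<br />
    http:<br />
      paths:<br />
      - path: /blue<br />
        backend:<br />
          serviceName: blue-service<br />
          servicePort: 80<br />
      - path: /green<br />
        backend:<br />
          serviceName: green-service<br />
          servicePort: 80<br />
</pre><br />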
<br />
To secure an Ingress, you must create a ''Secret''. The TLS secret must contain keys named <code>tls.crt</code> and <code>tls.key</code>, which contain the certificate and private key to use for TLS.<br />
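For example (a sketch; the Secret name and certificate file names are illustrative), such a Secret can be created from existing certificate files:<br />
 $ kubectl create secret tls example-tls --cert=tls.crt --key=tls.key<br />
and then referenced from the Ingress spec:<br />
<pre><br />
spec:<br />
  tls:<br />
  - hosts:<br />
    - blue.example.com<br />
    secretName: example-tls<br />
  rules:<br />
  .....<br />
</pre><br />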
<br />
The Ingress resource does not do any request forwarding by itself. All of the magic is done using the ''Ingress Controller''.<br />
<br />
; Ingress Controller<br />
<br />
An Ingress Controller is an application which watches the Master Node's API Server for changes in the Ingress resources and updates the Layer 7 load balancer accordingly. Kubernetes has different Ingress Controllers, and, if needed, we can also build our own. GCE L7 Load Balancer and Nginx Ingress Controller are examples of Ingress Controllers.<br />
<br />
Minikube v0.14.0 and above ships with the Nginx Ingress Controller as an add-on. It can be easily enabled by running the following command:<br />
<br />
$ minikube addons enable ingress<br />
<br />
Once the Ingress Controller is deployed, we can create an Ingress resource using the <code>kubectl create</code> command. For example, if we create an <code>example-ingress.yml</code> file with the content above, then we can use the following command to create an Ingress resource:<br />
<br />
$ kubectl create -f example-ingress.yml<br />
<br />
With the Ingress resource we just created, we should now be able to access the blue-service or green-service services using the blue.example.com and green.example.com URLs. As our current setup is on Minikube, we will need to update the hosts file on our workstation so that those URLs resolve to the Minikube IP:<br />
<br />
$ cat /etc/hosts<br />
127.0.0.1 localhost<br />
::1 localhost<br />
192.168.99.100 blue.example.com green.example.com <br />
<br />
Once this is done, we can now open blue.example.com and green.example.com in a browser and access the application.<br />
<br />
==Labels and Selectors==<br />
''[https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ Labels]'' are key-value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key-value labels defined. Each key must be unique for a given object.<br />
<pre><br />
"labels": {<br />
"key1" : "value1",<br />
"key2" : "value2"<br />
}<br />
</pre><br />
<br />
;Syntax and character set<br />
<br />
Labels are key-value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (<code>/</code>). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (<code>[a-z0-9A-Z]</code>) with dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (<code>.</code>), not longer than 253 characters in total, followed by a slash (<code>/</code>). If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (e.g. kube-scheduler, kube-controller-manager, kube-apiserver, kubectl, or other third-party automation) which add labels to end-user objects must specify a prefix. The <code>kubernetes.io/</code> prefix is reserved for Kubernetes core components.<br />
<br />
Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (<code>[a-z0-9A-Z]</code>) with dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between.<br />
<br />
;Label selectors<br />
<br />
Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).<br />
<br />
Via a label selector, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.<br />
<br />
The API currently supports two types of selectors: equality-based and set-based. A label selector can be made of multiple requirements which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical AND (<code>&&</code>) operator.<br />
<br />
An empty label selector (that is, one with zero requirements) selects every object in the collection.<br />
<br />
A null label selector (which is only possible for optional selector fields) selects no objects.<br />
<br />
Note: the label selectors of two controllers must not overlap within a namespace, otherwise they will fight with each other.<br />
Note that labels are not restricted to pods. You can apply them to all sorts of objects, such as nodes or services.<br />
<br />
;Examples<br />
<br />
* Label a given node:<br />
$ kubectl label node k8s.worker1.local network=gigabit<br />
<br />
* With ''Equality-based'', one may write:<br />
$ kubectl get pods -l environment=production,tier=frontend<br />
<br />
* Using ''set-based'' requirements:<br />
$ kubectl get pods -l 'environment in (production),tier in (frontend)'<br />
<br />
* Implement the OR operator on values:<br />
$ kubectl get pods -l 'environment in (production, qa)'<br />
<br />
* Restricting negative matching via exists operator:<br />
$ kubectl get pods -l 'environment,environment notin (frontend)'<br />
<br />
* Show the current labels on your pods:<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d <none><br />
nfs-pod 1/1 Running 16 6d name=nfs-pod<br />
<br />
* Add a label to an already running/existing pod:<br />
$ kubectl label pods busybox owner=christoph<br />
pod "busybox" labeled<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d owner=christoph<br />
nfs-pod 1/1 Running 16 6d name=nfs-pod<br />
<br />
* Select a pod by its label:<br />
$ kubectl get pods --selector owner=christoph<br />
#~OR~<br />
$ kubectl get pods -l owner=christoph<br />
NAME READY STATUS RESTARTS AGE<br />
busybox 1/1 Running 25 9d<br />
<br />
* Delete/remove a given label from a given pod:<br />
$ kubectl label pod busybox owner-<br />
pod "busybox" labeled<br />
$ kubectl get pods --show-labels<br />
NAME READY STATUS RESTARTS AGE LABELS<br />
busybox 1/1 Running 25 9d <none><br />
<br />
* Get all pods that belong to either the <code>production</code> ''or'' the <code>development</code> environment:<br />
$ kubectl get pods -l 'env in (production, development)'<br />
<br />
; Using Labels to select a Node on which to schedule a Pod:<br />
<br />
* Label a Node that uses SSDs as its primary HDD:<br />
$ kubectl label node k8s.worker1.local hdd=ssd<br />
<br />
<pre><br />
$ cat << EOF >busybox.yml<br />
kind: Pod<br />
apiVersion: v1<br />
metadata:<br />
name: busybox<br />
namespace: default<br />
spec:<br />
containers:<br />
- name: busybox<br />
image: busybox<br />
command:<br />
- sleep<br />
- "300"<br />
imagePullPolicy: IfNotPresent<br />
restartPolicy: Always<br />
nodeSelector: <br />
hdd: ssd<br />
EOF<br />
</pre><br />
<br />
==Annotations==<br />
With ''[https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ Annotations]'', we can attach arbitrary, non-identifying metadata to objects, in a key-value format:<br />
<br />
<pre><br />
"annotations": {<br />
"key1" : "value1",<br />
"key2" : "value2"<br />
}<br />
</pre><br />
The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels.<br />
<br />
In contrast to Labels, annotations are not used to identify and select objects. Annotations can be used to:<br />
<br />
* Store build/release IDs, which Git branch was used, etc.<br />
* Phone numbers of persons responsible or directory entries specifying where such information can be found<br />
* Pointers to logging, monitoring, analytics, audit repositories, debugging tools, etc.<br />
* Etc.<br />
<br />
For example, while creating a Deployment, we can add a description like the one below:<br />
<br />
<pre><br />
apiVersion: extensions/v1beta1<br />
kind: Deployment<br />
metadata:<br />
name: webserver<br />
annotations:<br />
description: Deployment based PoC dates 12 January 2018<br />
....<br />
....<br />
</pre><br />
<br />
We can look at annotations while describing an object:<br />
<br />
<pre><br />
$ kubectl describe deployment webserver<br />
Name: webserver<br />
Namespace: default<br />
CreationTimestamp: Fri, 12 Jan 2018 13:18:23 -0800<br />
Labels: app=webserver<br />
Annotations: deployment.kubernetes.io/revision=1<br />
description=Deployment based PoC dates 12 January 2018<br />
...<br />
...<br />
</pre><br />
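Annotations can also be added to (or removed from) a live object with <code>kubectl annotate</code>; for example (an illustrative key/value):<br />
 $ kubectl annotate deployment webserver owner=christoph<br />
 $ kubectl annotate deployment webserver owner-    # remove the annotation again<br />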
<br />
==Jobs and CronJobs==<br />
<br />
===Jobs===<br />
A ''[https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#what-is-a-job Job]'' creates one or more pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the Job itself is complete. Deleting a Job will clean up the pods it created.<br />
<br />
A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).<br />
<br />
A Job can also be used to run multiple Pods in parallel.<br />
<br />
; Example<br />
<br />
* Below is an example ''Job'' config. It computes π to 2000 places and prints it out. It takes around 10 seconds to complete.<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
name: pi<br />
spec:<br />
template:<br />
spec:<br />
containers:<br />
- name: pi<br />
image: perl<br />
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]<br />
restartPolicy: Never<br />
backoffLimit: 4<br />
</pre><br />
 $ kubectl create -f ./job-pi.yml<br />
job "pi" created<br />
$ kubectl describe jobs/pi<br />
<pre><br />
Name: pi<br />
Namespace: default<br />
Selector: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
Labels: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
job-name=pi<br />
Annotations: <none><br />
Parallelism: 1<br />
Completions: 1<br />
Start Time: Fri, 12 Jan 2018 13:25:23 -0800<br />
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed<br />
Pod Template:<br />
Labels: controller-uid=19aa42d0-f7df-11e7-8370-08002721601f<br />
job-name=pi<br />
Containers:<br />
pi:<br />
Image: perl<br />
Port: <none><br />
Command:<br />
perl<br />
-Mbignum=bpi<br />
-wle<br />
print bpi(2000)<br />
Environment: <none><br />
Mounts: <none><br />
Volumes: <none><br />
Events:<br />
Type Reason Age From Message<br />
---- ------ ---- ---- -------<br />
Normal SuccessfulCreate 8s job-controller Created pod: pi-rfvvw<br />
</pre><br />
<br />
* Get the result of the Job run (i.e., the value of π):<br />
$ pods=$(kubectl get pods --show-all --selector=job-name=pi --output=jsonpath={.items..metadata.name})<br />
$ echo $pods<br />
pi-rfvvw<br />
$ kubectl logs ${pods}<br />
3.1415926535897932384626433832795028841971693...<br />
<br />
===CronJobs===<br />
<br />
Support for creating ''Jobs'' at specified times/dates (i.e. cron) is available in Kubernetes 1.4. See [https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/ here] for details.<br />
<br />
Below is an example ''CronJob''. Every minute, it runs a simple job that prints the current time and then echoes a "hello" string:<br />
$ cat << EOF >cronjob.yml<br />
apiVersion: batch/v1beta1<br />
kind: CronJob<br />
metadata:<br />
name: hello<br />
spec:<br />
schedule: "*/1 * * * *"<br />
jobTemplate:<br />
spec:<br />
template:<br />
spec:<br />
containers:<br />
- name: hello<br />
image: busybox<br />
args:<br />
- /bin/sh<br />
- -c<br />
- date; echo Hello from the Kubernetes cluster<br />
restartPolicy: OnFailure<br />
EOF<br />
<br />
$ kubectl create -f cronjob.yml<br />
cronjob "hello" created<br />
<br />
$ kubectl get cronjob hello<br />
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE<br />
hello */1 * * * * False 0 <none> 11s<br />
<br />
$ kubectl get jobs --watch<br />
NAME DESIRED SUCCESSFUL AGE<br />
hello-1515793140 1 1 7s<br />
<br />
$ kubectl get cronjob hello<br />
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE<br />
hello */1 * * * * False 0 22s 48s<br />
<br />
$ pods=$(kubectl get pods -a --selector=job-name=hello-1515793140 --output=jsonpath={.items..metadata.name})<br />
$ echo $pods<br />
hello-1515793140-plp8g<br />
<br />
$ kubectl logs $pods<br />
Fri Jan 12 21:39:07 UTC 2018<br />
Hello from the Kubernetes cluster<br />
<br />
* Cleanup<br />
$ kubectl delete cronjob hello<br />
<br />
==Quota Management==<br />
When there are many users sharing a given Kubernetes cluster, there is always a concern for fair usage. To address this concern, administrators can use the ''[https://kubernetes.io/docs/concepts/policy/resource-quotas/ ResourceQuota]'' object, which provides constraints that limit aggregate resource consumption per Namespace.<br />
<br />
We can have the following types of quotas per Namespace:<br />
<br />
* Compute Resource Quota: We can limit the total sum of compute resources (CPU, memory, etc.) that can be requested in a given Namespace.<br />
* Storage Resource Quota: We can limit the total sum of storage resources (PersistentVolumeClaims, requests.storage, etc.) that can be requested.<br />
* Object Count Quota: We can restrict the number of objects of a given type (pods, ConfigMaps, PersistentVolumeClaims, ReplicationControllers, Services, Secrets, etc.).<br />
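A sketch combining all three kinds of quota in one object (the Namespace and limits are illustrative):<br />
<pre><br />
apiVersion: v1<br />
kind: ResourceQuota<br />
metadata:<br />
  name: dev-quota<br />
  namespace: dev<br />
spec:<br />
  hard:<br />
    requests.cpu: "4"          # compute<br />
    requests.memory: 8Gi       # compute<br />
    requests.storage: 50Gi     # storage<br />
    pods: "10"                 # object count<br />
    configmaps: "20"           # object count<br />
</pre><br />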
<br />
==Daemon Sets==<br />
In some cases, like collecting monitoring data from all nodes, or running a storage daemon on all nodes, etc., we need a specific type of Pod running on all nodes at all times. A ''[https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ DaemonSet]'' is the object that allows us to do just that. <br />
<br />
Whenever a node is added to the cluster, a Pod from a given DaemonSet is created on it. When the node dies, the respective Pods are garbage collected. If a DaemonSet is deleted, all Pods it created are deleted as well.<br />
<br />
Example DaemonSet:<br />
<pre><br />
kind: DaemonSet<br />
apiVersion: apps/v1<br />
metadata:<br />
name: pause-ds<br />
spec:<br />
selector:<br />
matchLabels:<br />
quiet: "pod"<br />
template:<br />
metadata:<br />
labels:<br />
quiet: pod<br />
spec:<br />
tolerations:<br />
- key: node-role.kubernetes.io/master<br />
effect: NoSchedule<br />
containers:<br />
- name: pause-container<br />
image: k8s.gcr.io/pause:2.0<br />
</pre><br />
<br />
==Stateful Sets==<br />
The ''[https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/ StatefulSet]'' controller is used for applications which require a unique identity, such as a stable name, network identity, and strict ordering (e.g., a MySQL cluster or an etcd cluster).<br />
<br />
The StatefulSet controller provides identity and guaranteed ordering of deployment and scaling to Pods.<br />
<br />
Note: Before Kubernetes 1.5, the StatefulSet controller was referred to as ''PetSet''.<br />
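<br />
Below is a minimal StatefulSet sketch (the names and image are illustrative; it assumes a headless Service named "nginx" already exists to control the network domain):<br />
<pre><br />
apiVersion: apps/v1<br />
kind: StatefulSet<br />
metadata:<br />
  name: web<br />
spec:<br />
  serviceName: nginx  # assumed headless Service<br />
  replicas: 2<br />
  selector:<br />
    matchLabels:<br />
      app: nginx<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: nginx<br />
    spec:<br />
      containers:<br />
      - name: nginx<br />
        image: nginx:1.7.9<br />
        ports:<br />
        - containerPort: 80<br />
</pre><br />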
<br />
==Role Based Access Control (RBAC)==<br />
''[https://kubernetes.io/docs/admin/authorization/rbac/ Role-based access control]'' (RBAC) is an authorization mechanism for managing permissions around Kubernetes resources.<br />
<br />
Using the RBAC API, we define a role which contains a set of additive permissions. Within a Namespace, a role is defined using the Role object. For a cluster-wide role, we need to use the ClusterRole object.<br />
<br />
Once the roles are defined, we can bind them to a user or a set of users using ''RoleBinding'' and ''ClusterRoleBinding''.<br />
<br />
===Using RBAC with minikube===<br />
<br />
* Start up minikube with RBAC support:<br />
$ minikube start --kubernetes-version=v1.9.0 --extra-config=apiserver.Authorization.Mode=RBAC<br />
<br />
* Setup RBAC:<br />
<pre><br />
$ cat rbac-cluster-role-binding.yml<br />
# kubectl create clusterrolebinding add-on-cluster-admin \<br />
# --clusterrole=cluster-admin --serviceaccount=kube-system:default<br />
#<br />
kind: ClusterRoleBinding<br />
apiVersion: rbac.authorization.k8s.io/v1alpha1<br />
metadata:<br />
name: kube-system-sa<br />
subjects:<br />
- kind: Group<br />
name: system:serviceaccounts:kube-system<br />
roleRef:<br />
kind: ClusterRole<br />
name: cluster-admin<br />
apiGroup: rbac.authorization.k8s.io<br />
</pre><br />
<br />
<pre><br />
$ cat rbac-setup.yml <br />
apiVersion: v1<br />
kind: Namespace<br />
metadata:<br />
name: rbac<br />
<br />
---<br />
apiVersion: v1<br />
kind: ServiceAccount<br />
metadata:<br />
name: viewer<br />
namespace: rbac<br />
<br />
---<br />
apiVersion: v1<br />
kind: ServiceAccount<br />
metadata:<br />
name: admin<br />
namespace: rbac<br />
</pre><br />
<br />
* Create a Role Binding:<br />
<pre><br />
# kubectl create rolebinding reader-binding \<br />
#   --role=reader \<br />
#   --serviceaccount=rbac:reader \<br />
#   --namespace=rbac<br />
#<br />
kind: RoleBinding<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
namespace: rbac<br />
name: reader-binding<br />
roleRef:<br />
apiGroup: rbac.authorization.k8s.io<br />
kind: Role<br />
name: reader<br />
subjects:<br />
- kind: ServiceAccount<br />
  name: reader<br />
  namespace: rbac<br />
</pre><br />
<br />
* Create a Role:<br />
<pre><br />
$ cat rbac-role.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
namespace: rbac<br />
name: reader<br />
rules:<br />
- apiGroups: [""]<br />
resources: ["*"]<br />
verbs: ["get", "watch", "list"]<br />
</pre><br />
<br />
* Create an RBAC "core reader" Role with specific resources and verbs (i.e., a "core reader" role that can "get", "watch", and "list" specific resources, such as Pods, ConfigMaps, Secrets, Jobs, and Deployments):<br />
<pre><br />
$ cat rbac-role-core-reader.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: core-reader<br />
rules:<br />
- apiGroups:<br />
- ""<br />
resources:<br />
- pods<br />
- configmaps<br />
- secrets<br />
verbs:<br />
- get<br />
- watch<br />
- list<br />
- apiGroups:<br />
- batch<br />
- extensions<br />
resources:<br />
- jobs<br />
- deployments<br />
verbs:<br />
- get<br />
- watch<br />
- list<br />
</pre><br />
<br />
* "Gotchas":<br />
<pre><br />
$ cat rbac-gotcha-1.yml<br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: gotcha-1<br />
rules:<br />
- nonResourceURLs:<br />
- /healthz<br />
verbs:<br />
- get<br />
- post<br />
- apiGroups:<br />
- batch<br />
- extensions<br />
resources:<br />
- deployments<br />
verbs:<br />
- "*"<br />
</pre><br />
<pre><br />
$ cat rbac-gotcha-2.yml <br />
kind: Role<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: gotcha-2<br />
rules:<br />
- apiGroups:<br />
- ""<br />
resources:<br />
- secrets<br />
verbs:<br />
- "*"<br />
resourceNames:<br />
- "my_secret"<br />
- apiGroups:<br />
- ""<br />
resources:<br />
- pods/log<br />
verbs:<br />
- "get"<br />
</pre><br />
<br />
; Privilege escalation<br />
* You cannot create a Role or ClusterRole that grants permissions you do not have.<br />
* You cannot create a RoleBinding or ClusterRoleBinding that binds to a Role with permissions you do not have (unless you have been explicitly given "bind" permission on the role).<br />
<br />
* Grant explicit bind access:<br />
<pre><br />
kind: ClusterRole<br />
apiVersion: rbac.authorization.k8s.io/v1beta1<br />
metadata:<br />
name: role-grantor<br />
rules:<br />
- apiGroups: ["rbac.authorization.k8s.io"]<br />
resources: ["rolebindings"]<br />
verbs: ["create"]<br />
- apiGroups: ["rbac.authorization.k8s.io"]<br />
resources: ["clusterroles"]<br />
verbs: ["bind"]<br />
resourceNames: ["admin", "edit", "view"]<br />
</pre><br />
<br />
===Testing RBAC permissions===<br />
<br />
* Example of RBAC not allowing a verb-noun:<br />
<pre><br />
$ kubectl auth can-i create pods<br />
no - Required "container.pods.create" permission.<br />
</pre><br />
<br />
* Example of RBAC allowing a verb-noun:<br />
<pre><br />
$ kubectl auth can-i create pods<br />
yes<br />
</pre><br />
<br />
* A more complex example:<br />
<pre><br />
$ kubectl auth can-i update deployments.apps \<br />
--subresource="scale" --as-group="$group" --as="$user" -n $ns<br />
</pre><br />
<br />
==Federation==<br />
With the ''[https://kubernetes.io/docs/concepts/cluster-administration/federation/ Kubernetes Cluster Federation]'', we can manage multiple Kubernetes clusters from a single control plane. We can sync resources across the clusters and have cross-cluster discovery. This allows us to perform Deployments across regions and access them using a global DNS record.<br />
<br />
Federation is very useful when we want to build a hybrid solution, in which we can have one cluster running inside our private datacenter and another one on the public cloud. We can also assign weights for each cluster in the Federation, to distribute the load as per our choice.<br />
<br />
==Helm==<br />
To deploy an application, we use different Kubernetes manifests, such as Deployments, Services, Volume Claims, Ingress, etc. Sometimes, it can be tedious to deploy them one by one. We can bundle all of those manifests, after templatizing them, into a well-defined format, along with other metadata. Such a bundle is referred to as a ''Chart''. These Charts can then be served via repositories, such as those we have for rpm and deb packages.<br />
<br />
''[https://github.com/kubernetes/helm Helm]'' is a package manager (analogous to yum and apt) for Kubernetes, which can install/update/delete those Charts in the Kubernetes cluster.<br />
<br />
Helm has two components:<br />
<br />
* A client called helm, which runs on your user's workstation; and<br />
* A server called tiller, which runs inside your Kubernetes cluster.<br />
<br />
The client helm connects to the server tiller to manage Charts. Charts submitted for Kubernetes are available [https://github.com/kubernetes/charts here].<br />
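<br />
A typical Helm 2-era workflow looks roughly like the following (the chart and the release name "my-blog" are illustrative):<br />
<pre><br />
$ helm init                                     # install tiller into the cluster<br />
$ helm repo update<br />
$ helm search wordpress                         # find a Chart in the configured repos<br />
$ helm install stable/wordpress --name my-blog  # deploy a Chart as a named release<br />
$ helm list<br />
$ helm delete my-blog<br />
</pre><br />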
<br />
==Monitoring and logging==<br />
In Kubernetes, we have to collect resource usage data from Pods, Services, nodes, etc., to understand the overall resource consumption and to make decisions about scaling a given application. Two popular Kubernetes monitoring solutions are Heapster and Prometheus.<br />
<br />
[https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/ Heapster] is a cluster-wide aggregator of monitoring and event data, which is natively supported on Kubernetes. <br />
<br />
[https://prometheus.io/ Prometheus], now part of [https://www.cncf.io/ CNCF] (Cloud Native Computing Foundation), can also be used to scrape the resource usage from different Kubernetes components and objects. Using its client libraries, we can also instrument the code of our application.<br />
<br />
Another important aspect for troubleshooting and debugging is Logging, in which we collect the logs from different components of a given system. In Kubernetes, we can collect logs from different cluster components, objects, nodes, etc. The most common way to collect the logs is using [https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/ Elasticsearch], which uses [https://www.fluentd.org/ fluentd] with custom configuration as an agent on the nodes. fluentd is an open source data collector, which is also part of CNCF.<br />
<br />
[https://github.com/google/cadvisor cAdvisor] is an open source container resource usage and performance analysis agent. It auto-discovers all containers on a node and collects CPU, memory, file system, and network usage statistics. It provides overall machine usage by analyzing the "root" container on the machine. It exposes a simple UI for local containers on port 4194.<br />
<br />
==Security==<br />
===Configure network policies===<br />
A ''[https://kubernetes.io/docs/concepts/services-networking/network-policies/ Network Policy]'' is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.<br />
<br />
''NetworkPolicy'' resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.<br />
<br />
* Specification of how groups of pods may communicate<br />
* Use labels to select pods and define rules<br />
* Implemented by the network plugin<br />
* Pods are non-isolated by default<br />
* Pods are isolated when a Network Policy selects them<br />
<br />
;Example NetworkPolicy<br />
Create a "default" isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods:<br />
<pre><br />
apiVersion: networking.k8s.io/v1<br />
kind: NetworkPolicy<br />
metadata:<br />
name: default-deny<br />
spec:<br />
podSelector: {}<br />
policyTypes:<br />
- Ingress<br />
</pre><br />
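<br />
As a complementary sketch (the label names are illustrative), a policy that then allows ingress to Pods labelled <code>app=nginx</code>, but only from Pods labelled <code>access=true</code>:<br />
<pre><br />
apiVersion: networking.k8s.io/v1<br />
kind: NetworkPolicy<br />
metadata:<br />
  name: allow-nginx-access<br />
spec:<br />
  podSelector:<br />
    matchLabels:<br />
      app: nginx<br />
  ingress:<br />
  - from:<br />
    - podSelector:<br />
        matchLabels:<br />
          access: "true"<br />
    ports:<br />
    - protocol: TCP<br />
      port: 80<br />
</pre><br />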
<br />
===TLS certificates for cluster components===<br />
Get [https://github.com/OpenVPN/easy-rsa easy-rsa].<br />
<br />
$ ./easyrsa init-pki<br />
$ MASTER_IP=10.100.1.2<br />
$ ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass<br />
<br />
$ cat rsa-request.sh<br />
<pre><br />
#!/bin/bash<br />
# Build the subjectAltName list as a single comma-separated string<br />
# (assumes MASTER_IP is exported in the environment).<br />
SAN="IP:${MASTER_IP},"<br />
SAN+="DNS:kubernetes,"<br />
SAN+="DNS:kubernetes.default,"<br />
SAN+="DNS:kubernetes.default.svc,"<br />
SAN+="DNS:kubernetes.default.svc.cluster,"<br />
SAN+="DNS:kubernetes.default.svc.cluster.local"<br />
./easyrsa --subject-alt-name="${SAN}" --days=10000 build-server-full server nopass<br />
</pre><br />
<br />
<pre><br />
pki/<br />
├── ca.crt<br />
├── certs_by_serial<br />
│ └── F3A6F7D34BC84330E7375FA20C8441DF.pem<br />
├── index.txt<br />
├── index.txt.attr<br />
├── index.txt.old<br />
├── issued<br />
│ └── server.crt<br />
├── private<br />
│ ├── ca.key<br />
│ └── server.key<br />
├── reqs<br />
│ └── server.req<br />
├── serial<br />
└── serial.old<br />
</pre><br />
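<br />
* Assuming your API server uses the standard kubeadm paths (see the flags discovered below), the generated files would then be copied roughly as follows:<br />
<pre><br />
$ sudo cp pki/ca.crt /etc/kubernetes/pki/ca.crt<br />
$ sudo cp pki/issued/server.crt /etc/kubernetes/pki/apiserver.crt<br />
$ sudo cp pki/private/server.key /etc/kubernetes/pki/apiserver.key<br />
</pre><br />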
<br />
* Figure out the paths of the existing TLS certs/keys with the following command:<br />
<pre><br />
$ ps aux | grep [a]piserver | sed -n -e 's/^.*\(kube-apiserver \)/\1/p' | tr ' ' '\n'<br />
kube-apiserver<br />
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota<br />
--requestheader-extra-headers-prefix=X-Remote-Extra-<br />
--advertise-address=172.31.118.138<br />
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt<br />
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt<br />
--requestheader-username-headers=X-Remote-User<br />
--service-cluster-ip-range=10.96.0.0/12<br />
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key<br />
--secure-port=6443<br />
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key<br />
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname<br />
--requestheader-group-headers=X-Remote-Group<br />
--requestheader-allowed-names=front-proxy-client<br />
--service-account-key-file=/etc/kubernetes/pki/sa.pub<br />
--insecure-port=0<br />
--enable-bootstrap-token-auth=true<br />
--allow-privileged=true<br />
--client-ca-file=/etc/kubernetes/pki/ca.crt<br />
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt<br />
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key<br />
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt<br />
--authorization-mode=Node,RBAC<br />
--etcd-servers=http://127.0.0.1:2379<br />
</pre><br />
<br />
===Security Contexts===<br />
A ''[https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Security Context]'' defines privilege and access control settings for a Pod or Container. Security context settings include:<br />
<br />
* Discretionary Access Control: Permission to access an object, like a file, is based on user ID (UID) and group ID (GID).<br />
* Security Enhanced Linux (SELinux): Objects are assigned security labels.<br />
* Running as privileged or unprivileged.<br />
* Linux Capabilities: Give a process some privileges, but not all the privileges of the root user.<br />
* AppArmor: Use program profiles to restrict the capabilities of individual programs.<br />
* Seccomp: Filter a process's system calls.<br />
* AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This boolean directly controls whether the <code>no_new_privs</code> flag gets set on the container process. <code>AllowPrivilegeEscalation</code> is always true when the container: 1) is run as privileged; or 2) has <code>CAP_SYS_ADMIN</code>.<br />
<br />
; Example #1<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: security-context-demo<br />
spec:<br />
securityContext:<br />
runAsUser: 1000<br />
fsGroup: 2000<br />
volumes:<br />
- name: sec-ctx-vol<br />
emptyDir: {}<br />
containers:<br />
- name: sec-ctx-demo<br />
image: gcr.io/google-samples/node-hello:1.0<br />
volumeMounts:<br />
- name: sec-ctx-vol<br />
mountPath: /data/demo<br />
securityContext:<br />
allowPrivilegeEscalation: false<br />
</pre><br />
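<br />
To verify the settings took effect, one can exec into the Pod and check the process IDs; given the <code>runAsUser</code>/<code>fsGroup</code> values above, the output should look roughly like:<br />
<pre><br />
$ kubectl exec -it security-context-demo -- id<br />
uid=1000 gid=0(root) groups=2000<br />
</pre><br />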
<br />
==Taints and tolerations==<br />
[https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature Node affinity] is a property of pods that ''attracts'' them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite – they allow a node to ''repel'' a set of pods.<br />
<br />
[https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ Taints and tolerations] work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks the node such that the node should not accept any pods that do not tolerate the taints. Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.<br />
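<br />
As a quick sketch (the <code>dedicated=batch</code> key/value pair is illustrative), taint a node and give a Pod spec a matching toleration:<br />
$ kubectl taint nodes k8s.worker1.local dedicated=batch:NoSchedule<br />
<pre><br />
tolerations:<br />
- key: "dedicated"<br />
  operator: "Equal"<br />
  value: "batch"<br />
  effect: "NoSchedule"<br />
</pre><br />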
<br />
==Remove a node from a cluster==<br />
<br />
* On the k8s Master Node:<br />
k8s-master> $ kubectl drain k8s-worker-02 --ignore-daemonsets<br />
<br />
* On the k8s Worker Node (the one you wish to remove from the cluster):<br />
k8s-worker-02> $ kubeadm reset<br />
[preflight] Running pre-flight checks.<br />
[reset] Stopping the kubelet service.<br />
[reset] Unmounting mounted directories in "/var/lib/kubelet"<br />
[reset] Removing kubernetes-managed containers.<br />
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.<br />
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]<br />
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]<br />
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]<br />
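<br />
* Finally, back on the k8s Master Node, remove the Node object from the cluster:<br />
k8s-master> $ kubectl delete node k8s-worker-02<br />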
<br />
==Networking==<br />
<br />
; Useful network ranges<br />
* Choose ranges for the Pods and Service CIDR blocks<br />
* Generally, any of the RFC-1918 ranges work well<br />
** 10.0.0.0/8<br />
** 172.16.0.0/12<br />
** 192.168.0.0/16<br />
<br />
Every Pod can communicate directly with every other Pod<br />
<br />
;K8s Node<br />
* A general-purpose compute instance that has at least one network interface<br />
** The host OS will have a real-world IP for accessing the machine<br />
** K8s Pods are given ''virtual'' interfaces connected to an internal (node-local) network<br />
** Each node has a running network stack<br />
* Kube-proxy runs in the OS to control IPtables for:<br />
** Services<br />
** NodePorts<br />
<br />
;Networking substrate<br />
* Most k8s network stacks allocate subnets for each node<br />
** The network stack is responsible for arbitration of subnets and IPs<br />
** The network stack is also responsible for moving packets around the network<br />
* Pods have a unique, routable IP on the Pod CIDR block<br />
** The CIDR block is ''not'' accessed from outside the k8s cluster<br />
** The magic of IPtables allows the Pods to make outgoing connections<br />
* Ensure that k8s has the correct Pods and Service CIDR blocks<br />
<br />
The Pod network is not seen on the physical network (i.e., it is encapsulated; you will not be able to use <code>tcpdump</code> on it from the physical network)<br />
<br />
;Making the setup easier &mdash; CNI<br />
* Use the Container Network Interface (CNI)<br />
* Relieves k8s from having to have a specific network configuration<br />
* It is activated by supplying <code>--network-plugin=cni, --cni-conf-dir, --cni-bin-dir</code> to kubelet<br />
** Typical configuration directory: <code>/etc/cni/net.d</code><br />
** Typical bin directory: <code>/opt/cni/bin</code><br />
* Allows for multiple backends to be used: linux-bridge, macvlan, ipvlan, Open vSwitch, network stacks<br />
<br />
;Kubernetes services<br />
<br />
* Services are crucial for service discovery and distributing traffic to Pods<br />
* Services act as simple internal load balancers with VIPs<br />
** No access controls<br />
** No traffic controls<br />
* IPtables magically route to virtual IPs<br />
* Internally, Services are used as inter-Pod service discovery<br />
** Kube-DNS publishes DNS record (i.e., <code>nginx.default.svc.cluster.local</code>)<br />
* Services can be exposed in three different ways:<br />
*# ClusterIP<br />
*# LoadBalancer<br />
*# NodePort<br />
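<br />
For example, a NodePort Service can be declared as follows (a minimal sketch; the names and port numbers are illustrative):<br />
<pre><br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
  name: nginx-nodeport<br />
spec:<br />
  type: NodePort<br />
  selector:<br />
    app: nginx<br />
  ports:<br />
  - port: 80<br />
    targetPort: 80<br />
    nodePort: 30080  # must fall within the default 30000-32767 range<br />
</pre><br />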
<br />
; kube-proxy<br />
* Each k8s node in the cluster runs a kube-proxy<br />
* Two modes: userspace and iptables<br />
** iptables is much more performant (userspace should no longer be used)<br />
* kube-proxy has the task of configuring iptables to expose each k8s service<br />
** iptables rules distribute traffic randomly across the endpoints<br />
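<br />
To see the rules kube-proxy writes for Services (assuming iptables mode), inspect the nat table on any node:<br />
<pre><br />
$ sudo iptables-save | grep KUBE-SVC<br />
$ sudo iptables -t nat -L KUBE-SERVICES -n<br />
</pre><br />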
<br />
===Network providers===<br />
<br />
In order for a CNI plugin to be considered a "[https://kubernetes.io/docs/concepts/cluster-administration/networking/ Network Provider]", it must provide (at the very least) the following:<br />
# All containers can communicate with all other containers without NAT<br />
# All nodes can communicate with all containers (and ''vice versa'') without NAT<br />
# The IP that a container sees itself as is the same IP that others see it as<br />
<br />
==Linux namespaces==<br />
<br />
* Control groups (cgroups)<br />
* Union File Systems<br />
<br />
==Kubernetes inbound node port requirements==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-align="center" bgcolor="#1188ee"<br />
!Protocol<br />
!Direction<br />
!Port range<br />
!Purpose<br />
!Used by<br />
!Notes<br />
|-<br />
|colspan="6" align="center" bgcolor="#eee" | '''Master node(s)'''<br />
|-<br />
| TCP || Inbound || 4149 || Default cAdvisor port used to query container metrics || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 6443<sup>*</sup> || Kubernetes API server || All<br />
|-<br />
| TCP || Inbound || 2379-2380 || etcd server client API || kube-apiserver, etcd<br />
|-<br />
| TCP || Inbound || 10250 || Kubelet API || Self, Control plane<br />
|-<br />
| TCP || Inbound || 10251 || kube-scheduler || Self<br />
|-<br />
| TCP || Inbound || 10252 || kube-controller-manager || Self<br />
|-<br />
| TCP || Inbound || 10255 || Read-only Kubelet API || ''(optional)'' || Security risk<br />
|-<br />
|colspan="6" align="center" bgcolor="#eee" | '''Worker node(s)'''<br />
|-<br />
| TCP || Inbound || 4149 || Default cAdvisor port used to query container metrics || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 10250 || Kubelet API || Self, Control plane<br />
|-<br />
| TCP || Inbound || 10255 || Read-only Kubelet API || ''(optional)'' || Security risk<br />
|-<br />
| TCP || Inbound || 30000-32767 || NodePort Services<sup>**</sup> || All<br />
|}<br />
</div><br />
<br clear="all"/><br />
<sup>**</sup> Default port range for NodePort Services.<br />
<br />
Any port numbers marked with <sup>*</sup> are overridable, so you will need to ensure any custom ports you provide are also open.<br />
<br />
Although the etcd ports are listed under master nodes, you can also host your own etcd cluster externally or on custom ports.<br />
<br />
The pod network plugin you use (see below) may also require certain ports to be open. Since this differs with each pod network plugin, please see the documentation for the plugins about what port(s) those need.<br />
<br />
==API versions==<br />
<br />
Below is a table showing which value to use for the <code>apiVersion</code> key for a given k8s primitive (note: all values are for k8s 1.8.0, unless otherwise specified):<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-align="center" bgcolor="#1188ee"<br />
!Primitive<br />
!apiVersion<br />
|-<br />
| Pod || v1<br />
|-<br />
| Deployment || apps/v1beta2<br />
|-<br />
| Service || v1<br />
|-<br />
| Job || batch/v1<br />
|-<br />
| Ingress || extensions/v1beta1<br />
|-<br />
| CronJob || batch/v1beta1<br />
|-<br />
| ConfigMap || v1<br />
|-<br />
| DaemonSet || apps/v1<br />
|-<br />
| ReplicaSet || apps/v1beta2<br />
|-<br />
| NetworkPolicy || networking.k8s.io/v1<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
You can get a list of all of the API versions supported by your k8s install with:<br />
$ kubectl api-versions<br />
<br />
==Troubleshooting==<br />
<br />
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns<br />
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}<br />
<br />
* If your container has previously crashed, you can access the previous container’s crash log with:<br />
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}<br />
<br />
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}<br />
<br />
==Miscellaneous commands==<br />
<br />
* Simple workflow (not a best practice; use manifest files {YAML} instead):<br />
$ kubectl run nginx --image=nginx:1.10.0<br />
$ kubectl expose deployment nginx --port 80 --type LoadBalancer<br />
$ kubectl get services # <- wait until public IP is assigned<br />
$ kubectl scale deployment nginx --replicas 3<br />
<br />
* Create an Nginx deployment with three replicas without using YAML:<br />
$ kubectl run nginx --image=nginx --replicas=3<br />
<br />
* Take a node out of service for maintenance:<br />
$ kubectl cordon k8s.worker1.local<br />
$ kubectl drain k8s.worker1.local --ignore-daemonsets<br />
<br />
* Return a given node to a service after cordoning and "draining" it (e.g., after a maintenance):<br />
$ kubectl uncordon k8s.worker1.local<br />
<br />
* Get a list of nodes in a format useful for scripting:<br />
$ kubectl get nodes -o jsonpath='{.items[*].metadata.name}'<br />
#~OR~<br />
$ kubectl get nodes -o go-template --template '<nowiki>{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get nodes -o json | jq -crM '.items[].metadata.name'<br />
#~OR~ (if using an older version of `jq`)<br />
$ kubectl get nodes -o json | jq '.items[].metadata.name' | tr -d '"'<br />
<br />
* Label a list of nodes:<br />
<pre><br />
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do<br />
kubectl label nodes ${node} instancetype=ondemand;<br />
kubectl label nodes ${node} "example.io/node-lifecycle"=od;<br />
done<br />
</pre><br />
<br />
* Delete a bunch of Pods in "Evicted" state:<br />
$ kubectl get pod -n develop | awk '/Evicted/{print $1}' | xargs kubectl delete pod -n develop<br />
#~OR~<br />
$ kubectl get po -a --all-namespaces -o json | \<br />
jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | <br />
"kubectl delete po \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c<br />
<br />
* Get a random node:<br />
$ NODES=($(kubectl get nodes -o json | jq -crM '.items[].metadata.name'))<br />
$ NUMNODES=${#NODES[@]}<br />
$ echo ${NODES[$[ $RANDOM % $NUMNODES ]]}<br />
<br />
* Get all recent events sorted by their timestamps:<br />
$ kubectl get events --sort-by='.metadata.creationTimestamp'<br />
<br />
* Get a list of all Pods in the default namespace sorted by Node:<br />
$ kubectl get po -o wide --sort-by=.spec.nodeName<br />
<br />
* Get the cluster IP for a service named "foo":<br />
$ kubectl get svc/foo -o jsonpath='{.spec.clusterIP}'<br />
<br />
* List all Services in a cluster and their node ports:<br />
$ kubectl get --all-namespaces svc -o json |\<br />
jq -r '.items[] | [.metadata.name,([.spec.ports[].nodePort | tostring ] | join("|"))] | @csv'<br />
<br />
* Print just the Pod names of those Pods with the label <code>app=nginx</code>:<br />
$ kubectl get --no-headers=true pods -l app=nginx -o custom-columns=:metadata.name<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o go-template --template '<nowiki>{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get --no-headers=true pods -l app=nginx -o name | awk -F "/" '{print $2}'<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o jsonpath='{.items[*].metadata.name}'<br />
#~OR~<br />
$ kubectl get pods -l app=nginx -o json | jq -crM '.items [] | .metadata.name'<br />
<br />
* Get a list of all container images used by the Pods in your default namespace:<br />
$ kubectl get pods -o go-template --template='<nowiki>{{range .items}}{{range .spec.containers}}{{.image}}{{"\n"}}{{end}}{{end}}</nowiki>'<br />
#~OR~<br />
$ kubectl get pods -o go-template="<nowiki>{{range .items}}{{range .spec.containers}}{{.image}}|{{end}}{{end}}</nowiki>" | tr '|' '\n'<br />
<br />
* Get a list of Pods sorted by Node name:<br />
$ kubectl get po -o json | jq -r '.items | sort_by(.spec.nodeName)[] | [.spec.nodeName,.metadata.name] | @tsv'<br />
<br />
* Get status transitions of each Pod in the default namespace:<br />
$ export tpl='{range .items[*]}{"\n"}{@.metadata.name}{range @.status.conditions[*]}{"\t"}{@.type}={@.status}{end}{end}'<br />
$ kubectl get po -o jsonpath="${tpl}" && echo<br />
<br />
cheddar-cheese-d6d6587c7-4bgcz Initialized=True Ready=True PodScheduled=True<br />
echoserver-55f97d5bff-pdv65 Initialized=True Ready=True PodScheduled=True<br />
stilton-cheese-6d64cbc79-g7h4w Initialized=True Ready=True PodScheduled=True<br />
<br />
* Get a list of all Pods in status "Failed":<br />
$ kubectl get pods -o go-template='<nowiki>{{range .items}}{{if eq .status.phase "Failed"}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}</nowiki>'<br />
<br />
* Get all users in all namespaces:<br />
$ kubectl get rolebindings --all-namespaces -o go-template \<br />
--template='<nowiki>{{range .items}}{{println}}{{.metadata.namespace}}={{range .subjects}}{{if eq .kind "User"}}{{.name}} {{end}}{{end}}{{end}}</nowiki>'<br />
<br />
* Get the memory limit assigned to a container in a given Pod:<br />
<pre><br />
$ kubectl get pod example-pod-name -n default \<br />
-o jsonpath="{.spec.containers[*].resources.limits}" <br />
</pre><br />
<br />
* Get a Bash prompt of your current context and namespace:<br />
<pre><br />
NORMAL="\[\033[00m\]"<br />
BLUE="\[\033[01;34m\]"<br />
RED="\[\e[1;31m\]"<br />
YELLOW="\[\e[1;33m\]"<br />
GREEN="\[\e[1;32m\]"<br />
PS1_WORKDIR="\w"<br />
PS1_HOSTNAME="\h"<br />
PS1_USER="\u"<br />
<br />
__kube_ps1()<br />
{<br />
CONTEXT=$(kubectl config current-context)<br />
NAMESPACE=$(kubectl config view -o jsonpath="{.contexts[?(@.name==\"${CONTEXT}\")].context.namespace}")<br />
if [ -z "$NAMESPACE"]; then<br />
NAMESPACE="default"<br />
fi<br />
if [ -n "$CONTEXT" ]; then<br />
case "$CONTEXT" in<br />
*prod*)<br />
echo "${RED}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
;;<br />
*test*)<br />
echo "${YELLOW}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
;;<br />
*)<br />
echo "${GREEN}(⎈ ${CONTEXT} - ${NAMESPACE})"<br />
;;<br />
esac<br />
fi<br />
}<br />
<br />
export PROMPT_COMMAND='PS1="${GREEN}${PS1_USER}@${PS1_HOSTNAME}${NORMAL}:$(__kube_ps1)${BLUE}${PS1_WORKDIR}${NORMAL}\$ "'<br />
</pre><br />
<br />
===Client configuration===<br />
<br />
* Setup autocomplete in bash; bash-completion package should be installed first:<br />
$ source <(kubectl completion bash)<br />
<br />
* View Kubernetes config:<br />
$ kubectl config view<br />
<br />
* View specific config items by JSON path:<br />
$ kubectl config view -o jsonpath='{.users[?(@.name == "k8s")].user.password}'<br />
<br />
* Set credentials for foo.kubernetes.com:<br />
$ kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword<br />
<br />
===Viewing / finding resources===<br />
<br />
* List all services in the namespace:<br />
$ kubectl get services<br />
<br />
* List all pods in all namespaces in wide format:<br />
$ kubectl get pods -o wide --all-namespaces<br />
<br />
* List all pods in JSON (or YAML) format:<br />
$ kubectl get pods -o json<br />
<br />
* Describe resource details (node, pod, svc):<br />
$ kubectl describe nodes my-node<br />
<br />
* List services sorted by name:<br />
$ kubectl get services --sort-by=.metadata.name<br />
<br />
* List pods sorted by restart count:<br />
$ kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'<br />
<br />
* Rolling update pods for frontend-v1:<br />
$ kubectl rolling-update frontend-v1 -f frontend-v2.json<br />
<br />
* Scale a ReplicaSet named "foo" to 3:<br />
$ kubectl scale --replicas=3 rs/foo<br />
<br />
* Scale a resource specified in "foo.yaml" to 3:<br />
$ kubectl scale --replicas=3 -f foo.yaml<br />
<br />
* Execute a command in every pod / replica:<br />
$ for i in 0 1; do kubectl exec foo-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done<br />
<br />
* Get a list of ''all'' container IDs running in ''all'' Pods in ''all'' namespaces for a given Kubernetes cluster:<br />
<pre><br />
$ kubectl get pods --all-namespaces \<br />
-o jsonpath='{range .items[*]}{"pod: "}{.metadata.name}{"\n"}{range .status.containerStatuses[*]}{"\tname: "}{.containerID}{"\n\timage: "}{.image}{"\n"}{end}'<br />
<br />
# Example output:<br />
pod: cert-manager-848f547974-8m2k6<br />
name: containerd://358415173310a528a36ca2c19cdc3319f8fd96634c09957977767333b104d387<br />
image: quay.io/jetstack/cert-manager-controller:v1.5.3<br />
</pre><br />
<br />
===Manage resources===<br />
<br />
* Get documentation for pod or service:<br />
$ kubectl explain pods,svc<br />
<br />
* Create resource(s) like pods, services or DaemonSets:<br />
$ kubectl create -f ./my-manifest.yaml<br />
<br />
* Apply a configuration to a resource:<br />
$ kubectl apply -f ./my-manifest.yaml<br />
<br />
* Start a single instance of Nginx:<br />
$ kubectl run nginx --image=nginx<br />
<br />
* Create a secret with several keys:<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
name: mysecret<br />
type: Opaque<br />
data:<br />
password: $(echo "s33msi4" | base64)<br />
username: $(echo "jane"| base64)<br />
EOF<br />
</pre><br />
<br />
* Delete a resource:<br />
$ kubectl delete -f ./my-manifest.yaml<br />
<br />
===Monitoring and logging===<br />
<br />
* Deploy Heapster from Github repository:<br />
$ kubectl create -f deploy/kube-config/standalone/<br />
<br />
* Show metrics for nodes:<br />
$ kubectl top node<br />
<br />
* Show metrics for pods:<br />
$ kubectl top pod<br />
<br />
* Show metrics for a given pod and its containers:<br />
$ kubectl top pod pod_name --containers<br />
<br />
* Dump pod logs (STDOUT):<br />
$ kubectl logs pod_name<br />
<br />
* Stream pod container logs (STDOUT, multi-container case):<br />
$ kubectl logs -f pod_name -c my-container<br />
<br />
<!-- TODO: https://gist.github.com/so0k/42313dbb3b547a0f51a547bb968696ba --><br />
<br />
===Run tcpdump on containers running in Pods===<br />
<br />
* Find which node/host/IP the Pod in question is running on and also get the container ID:<br />
<pre><br />
$ kubectl describe pod busybox | grep -E "^Node:|Container ID: "<br />
Node: worker2/10.39.32.122<br />
Container ID: docker://a42cd31e62a905739b52d36b30eca5521fd250ac54280b43423027426b031a03<br />
<br />
#~OR~<br />
<br />
$ containerID=$(kubectl get po busybox -o jsonpath='{.status.containerStatuses[*].containerID}' | sed -e 's|docker://||g')<br />
$ hostIP=$(kubectl get po busybox -o jsonpath='{.status.hostIP}')<br />
</pre><br />
<br />
Log into the node/host running the Pod in question and then perform the following steps.<br />
<br />
* Get the virtual interface ID (note it will depend on which Container Network Interface you are using {e.g., veth, cali, etc.}):<br />
<pre><br />
$ docker exec a42cd31e62a905739b52d36b30eca5521fd250ac54280b43423027426b031a03 /bin/sh -c 'cat /sys/class/net/eth0/iflink'<br />
12<br />
<br />
# List all non-virtual interfaces:<br />
$ for iface in $(find /sys/class/net/ -type l ! -lname '*/devices/virtual/net/*' -printf '%f '); do echo "$iface is not virtual"; done<br />
ens192 is not virtual<br />
<br />
# Check if we are using veth or cali or something else:<br />
$ ls -1 /sys/class/net/ | awk '!/docker|lo|ens/{print substr($0,0,4);exit}'<br />
cali<br />
<br />
$ for i in /sys/class/net/veth*/ifindex; do grep -l 12 $i; done<br />
#~OR~<br />
$ for i in /sys/class/net/cali*/ifindex; do grep -l 12 $i; done<br />
/sys/class/net/cali12d4a061371/ifindex<br />
#~OR~<br />
echo $(find /sys/class/net/ -type l -lname '*/devices/virtual/net/*' -exec grep -l 12 {}/ifindex \;) | awk -F'/' '{print $5}'<br />
cali12d4a061371<br />
#~OR~<br />
$ ip link | grep ^12<br />
12: cali12d4a061371@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default<br />
#~OR~<br />
$ ip link | awk '/^12/{print $2}' | awk -F'@' '{print $1}'<br />
cali12d4a061371<br />
</pre><br />
<br />
* Now run [[tcpdump]] on this virtual interface (note: make sure you are running tcpdump on the ''same'' host as the Pod is running on):<br />
$ sudo tcpdump -i cali12d4a061371<br />
<br />
; Self-signed certificates<br />
<br />
If you are using the latest version of <code>kubectl</code> and are running it against a k8s cluster built with a self-signed cert, you can get around any "x509" errors with:<br />
$ export GODEBUG=x509ignoreCN=0<br />
<br />
===API resources===<br />
<br />
* Get a list of all the resource types and their latest supported version:<br />
<pre><br />
$ time for kind in $(kubectl api-resources | tail +2 | awk '{print $1}'); do<br />
kubectl explain ${kind};<br />
done | grep -E "^KIND:|^VERSION:"<br />
<br />
KIND: Binding<br />
VERSION: v1<br />
KIND: ComponentStatus<br />
VERSION: v1<br />
KIND: ConfigMap<br />
VERSION: v1<br />
...<br />
<br />
real 1m20.014s<br />
user 0m52.732s<br />
sys 0m17.751s<br />
</pre><br />
<br />
* Note: if you just want a version for a single/given kind:<br />
<pre><br />
$ kubectl explain deploy | head -2<br />
KIND: Deployment<br />
VERSION: apps/v1<br />
</pre><br />
<br />
===kubectl-neat===<br />
<br />
: See: https://github.com/itaysk/kubectl-neat<br />
: See: [[jq]]<br />
<br />
* To easily copy a certificate secret from one namespace to another namespace run:<br />
<pre><br />
$ SOURCE_NAMESPACE=<update-me><br />
$ DESTINATION_NAMESPACE=<update-me><br />
$ kubectl -n ${SOURCE_NAMESPACE} get secret kafka-client-credentials -o json |\<br />
kubectl neat |\<br />
jq 'del(.metadata["namespace"])' |\<br />
kubectl apply -n ${DESTINATION_NAMESPACE} -f -<br />
</pre><br />
<br />
===Get CPU/memory for each node===<br />
<br />
<pre><br />
for node in $(kubectl get nodes -o=jsonpath='{.items[*].metadata.name}'); do<br />
echo "NODE: ${node}"; kubectl describe node ${node} | grep -E '^ cpu |^ memory ';<br />
done<br />
</pre><br />
<br />
===Get vCPU capacity===<br />
<br />
<pre><br />
$ kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"} \<br />
{.status.capacity.cpu}{\"\n\"}{end}"<br />
</pre><br />
<br />
==Miscellaneous examples==<br />
<br />
* Create a Namespace:<br />
<pre><br />
kind: Namespace<br />
apiVersion: v1<br />
metadata:<br />
name: my-namespace<br />
</pre><br />
<br />
; Testing the load balancing capabilities of a Service<br />
<br />
* Create a Deployment with two replicas of Nginx (i.e., 2 x Pods with identical containers, configuration, etc.):<br />
<pre><br />
$ cat << EOF >nginx-deploy.yml<br />
kind: Deployment<br />
apiVersion: apps/v1<br />
metadata:<br />
name: nginx-deploy<br />
spec:<br />
replicas: 2<br />
strategy:<br />
rollingUpdate:<br />
maxSurge: 1<br />
maxUnavailable: 0<br />
type: RollingUpdate<br />
selector:<br />
matchLabels:<br />
app: nginx<br />
template:<br />
metadata:<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.7.9<br />
ports:<br />
- containerPort: 80<br />
EOF<br />
</pre><br />
$ kubectl create --validate -f nginx-deploy.yml<br />
$ kubectl get deploy<br />
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br />
nginx-deploy 2 2 2 2 1h<br />
$ kubectl get po<br />
NAME READY STATUS RESTARTS AGE<br />
nginx-deploy-8d68fb6cc-bspt8 1/1 Running 1 1h<br />
nginx-deploy-8d68fb6cc-qdvhg 1/1 Running 1 1h<br />
<br />
* Create a Service:<br />
<pre><br />
$ cat <<EOF | kubectl create -f -<br />
kind: Service<br />
apiVersion: v1<br />
metadata:<br />
name: nginx-svc<br />
spec:<br />
ports:<br />
- port: 8080<br />
targetPort: 80<br />
protocol: TCP<br />
selector:<br />
app: nginx<br />
EOF<br />
<br />
$ kubectl get svc/nginx-svc<br />
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE<br />
nginx-svc ClusterIP 10.101.133.100 <none> 8080/TCP 1h<br />
</pre><br />
<br />
* Overwrite the default index.html file (note: This is ''not'' persistent. The original default index.html file will be restored if the Pod fails and the Deployment brings up a new Pod and/or if you modify your Deployment {e.g., upgrade Nginx}. This is just for demonstration purposes):<br />
$ kubectl exec -it nginx-deploy-8d68fb6cc-bspt8 -- sh -c 'echo "pod-01" > /usr/share/nginx/html/index.html'<br />
$ kubectl exec -it nginx-deploy-8d68fb6cc-qdvhg -- sh -c 'echo "pod-02" > /usr/share/nginx/html/index.html'<br />
<br />
* Get the HTTP status code and server value from the header of a request to the Service endpoint:<br />
$ curl -Is 10.101.133.100:8080 | grep -E '^HTTP|Server'<br />
HTTP/1.1 200 OK<br />
Server: nginx/1.7.9 # <- This is the version of Nginx we defined in the Deployment above<br />
<br />
* Perform a GET request on the Service endpoint (ClusterIP+Port):<br />
<pre><br />
$ for i in $(seq 1 10); do curl -s 10.101.133.100:8080; done<br />
pod-02<br />
pod-01<br />
pod-02<br />
pod-02<br />
pod-02<br />
pod-01<br />
pod-02<br />
pod-02<br />
pod-02<br />
pod-02<br />
</pre><br />
Sometimes <code>pod-01</code> responded; sometimes <code>pod-02</code> responded.<br />
<br />
* Perform a GET on the Service endpoint 10,000 times and sum up which Pod responded for each request:<br />
<pre><br />
$ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c<br />
5018 pod-01 # <- number of times pod-01 responded to the request<br />
4982 pod-02 # <- number of times pod-02 responded to the request<br />
<br />
real 1m0.639s<br />
user 0m29.808s<br />
sys 0m11.692s<br />
</pre><br />
<br />
$ awk 'BEGIN{print 5018/(5018+4982);}'<br />
0.5018<br />
$ awk 'BEGIN{print 4982/(5018+4982);}'<br />
0.4982<br />
<br />
So, our Service is "load balancing" our two Nginx Pods in a roughly 50/50 fashion.<br />
<br />
In order to double-check that the Service is randomly selecting a Pod to serve the GET request, let's scale our Deployment from 2 to 3 replicas:<br />
$ kubectl scale deploy/nginx-deploy --replicas=3<br />
<br />
<pre><br />
$ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c<br />
3392 pod-01<br />
3335 pod-02<br />
3273 pod-03<br />
<br />
real 0m59.537s<br />
user 0m25.932s<br />
sys 0m9.656s<br />
</pre><br />
$ awk 'BEGIN{print 3392/(3392+3335+3273);}'<br />
0.3392<br />
$ awk 'BEGIN{print 3335/(3392+3335+3273);}'<br />
0.3335<br />
$ awk 'BEGIN{print 3273/(3392+3335+3273);}'<br />
0.3273<br />
<br />
Sure enough. Each of the 3 Pods is serving the GET request roughly 33% of the time.<br />
<br />
==Example YAML files==<br />
<br />
* Basic Pod using busybox:<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: busybox<br />
namespace: default<br />
spec:<br />
containers:<br />
- name: busybox<br />
image: busybox<br />
command:<br />
- sleep<br />
- "3600"<br />
imagePullPolicy: IfNotPresent<br />
restartPolicy: Always<br />
</pre><br />
<br />
* Basic Pod using busybox, which also prints out environment variables (including the ones defined in the YAML):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: env-dump<br />
spec:<br />
containers:<br />
- name: busybox<br />
image: busybox<br />
command:<br />
- env<br />
env:<br />
- name: USERNAME<br />
value: "Christoph"<br />
- name: PASSWORD<br />
value: "mypassword"<br />
</pre><br />
$ kubectl logs env-dump<br />
...<br />
PASSWORD=mypassword<br />
USERNAME=Christoph<br />
...<br />
<br />
* Basic Pod using alpine:<br />
<pre><br />
kind: Pod<br />
apiVersion: v1<br />
metadata:<br />
name: alpine<br />
namespace: default<br />
spec:<br />
containers:<br />
- name: alpine<br />
image: alpine<br />
command:<br />
- /bin/sh<br />
- "-c"<br />
- "sleep 60m"<br />
imagePullPolicy: IfNotPresent<br />
restartPolicy: Always<br />
</pre><br />
<br />
* Basic Pod running Nginx:<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: nginx-pod<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx<br />
restartPolicy: Always<br />
</pre><br />
<br />
* Create a Job that calculates pi up to 2000 decimal places:<br />
<pre><br />
apiVersion: batch/v1<br />
kind: Job<br />
metadata:<br />
name: pi<br />
spec:<br />
template:<br />
spec:<br />
containers:<br />
- name: pi<br />
image: perl<br />
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]<br />
restartPolicy: Never<br />
backoffLimit: 4<br />
</pre><br />
<br />
* Create a Deployment with two replicas of Nginx running:<br />
<pre><br />
apiVersion: apps/v1beta2<br />
kind: Deployment<br />
metadata:<br />
name: nginx-deployment<br />
spec:<br />
selector:<br />
matchLabels:<br />
app: nginx<br />
replicas: 2 <br />
template:<br />
metadata:<br />
labels:<br />
app: nginx<br />
spec:<br />
containers:<br />
- name: nginx<br />
image: nginx:1.9.1<br />
ports:<br />
- containerPort: 80<br />
</pre><br />
<br />
* Create a basic Persistent Volume, which uses NFS:<br />
<pre><br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
name: mypv<br />
spec:<br />
capacity:<br />
storage: 1Gi<br />
volumeMode: Filesystem<br />
accessModes:<br />
- ReadWriteMany<br />
persistentVolumeReclaimPolicy: Recycle<br />
nfs:<br />
path: /var/nfs/general<br />
server: 172.31.119.58<br />
readOnly: false<br />
</pre><br />
<br />
* Create a Persistent Volume Claim against the above PV:<br />
<pre><br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
name: nfs-pvc<br />
spec:<br />
accessModes:<br />
- ReadWriteMany<br />
resources:<br />
requests:<br />
storage: 1Gi<br />
</pre><br />
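<br />
* A Pod consuming the above PVC might look like the following (a minimal sketch; the mount path is illustrative):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
  name: nfs-pod<br />
spec:<br />
  containers:<br />
  - name: busybox<br />
    image: busybox<br />
    command: ["sleep", "3600"]<br />
    volumeMounts:<br />
    - name: nfs-vol<br />
      mountPath: /mnt/nfs<br />
  volumes:<br />
  - name: nfs-vol<br />
    persistentVolumeClaim:<br />
      claimName: nfs-pvc<br />
</pre><br />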
<br />
* Create a Pod using a custom scheduler (i.e., not the default one):<br />
<pre><br />
apiVersion: v1<br />
kind: Pod<br />
metadata:<br />
name: my-custom-scheduler<br />
annotations:<br />
scheduledBy: custom-scheduler<br />
spec:<br />
schedulerName: custom-scheduler<br />
containers:<br />
- name: pod-container<br />
image: k8s.gcr.io/pause:2.0<br />
</pre><br />
<br />
==Install k8s cluster manually in the Cloud==<br />
<br />
''Note: For this example, I will be using AWS and I will assume you already have 3 x EC2 instances running CentOS 7 in your AWS account. I will install Kubernetes 1.10.x.''<br />
<br />
* Disable OS services/features not (yet) supported by Kubernetes (i.e., SELinux enforcing mode and firewalld):<br />
$ sudo setenforce 0 # NOTE: Not persistent!<br />
#~OR~ Make persistent:<br />
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config<br />
<br />
$ sudo systemctl stop firewalld<br />
$ sudo systemctl mask firewalld<br />
$ sudo yum install -y iptables-services<br />
<br />
* Disable swap:<br />
$ sudo swapoff -a # NOTE: Not persistent!<br />
#~OR~ Make persistent:<br />
$ sudo vi /etc/fstab # comment out swap line<br />
$ sudo mount -a<br />
<br />
* Make sure routed traffic does not bypass iptables:<br />
$ sudo tee /etc/sysctl.d/k8s.conf << EOF<br />
net.bridge.bridge-nf-call-ip6tables = 1<br />
net.bridge.bridge-nf-call-iptables = 1<br />
EOF<br />
$ sudo sysctl --system<br />
<br />
* Install <code>kubelet</code>, <code>kubeadm</code>, and <code>kubectl</code> on '''''all''''' nodes in your cluster (both Master and Worker nodes):<br />
<pre><br />
$ sudo tee /etc/yum.repos.d/kubernetes.repo << EOF<br />
[kubernetes]<br />
name=Kubernetes<br />
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch<br />
enabled=1<br />
gpgcheck=1<br />
repo_gpgcheck=1<br />
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg<br />
EOF<br />
</pre><br />
<br />
$ sudo yum install -y kubelet kubeadm kubectl<br />
$ sudo systemctl enable kubelet && sudo systemctl start kubelet<br />
<br />
* Configure cgroup driver used by kubelet on '''''all''''' nodes (both Master and Worker nodes):<br />
<br />
Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. Verify that your Docker cgroup driver matches the kubelet config:<br />
<br />
$ docker info | grep -i cgroup<br />
$ grep -i cgroup /etc/systemd/system/kubelet.service.d/10-kubeadm.conf<br />
<br />
If the Docker cgroup driver and the kubelet config do not match, change the kubelet config to match the Docker cgroup driver. The flag you need to change is <code>--cgroup-driver</code>. If it is already set, you can update like so:<br />
<br />
$ sudo sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf<br />
<br />
Otherwise, you will need to open the systemd file and add the flag to an existing environment line.<br />
<br />
Then restart kubelet:<br />
<br />
$ sudo systemctl daemon-reload<br />
$ sudo systemctl restart kubelet<br />
<br />
* Run <code>kubeadm</code> on Master node:<br />
<br />
K8s requires a pod network to function. We are going to use Flannel, so we need to pass in a flag to the deployment script so k8s knows how to configure itself:<br />
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16<br />
<br />
Note: This command might take a fair amount of time to complete.<br />
<br />
Once it has completed, make note of the "<code>join</code>" command output by <code>kubeadm init</code> that looks something like the following ('''DO NOT RUN THE FOLLOWING COMMAND YET!'''):<br />
# kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash><br />
<br />
You will run that command on the other non-master nodes (aka the "Worker Nodes") to allow them to join the cluster. However, '''do not''' run that command on the worker nodes until you have completed all of the following steps.<br />
<br />
* Create a directory:<br />
$ mkdir -p $HOME/.kube<br />
<br />
* Copy the configuration files to a location usable by the local user:<br />
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config <br />
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config<br />
<br />
* In order for your pods to communicate with one another, you will need to install pod networking. We are going to use Flannel for our Container Network Interface (CNI) because it is easy to install and reliable. <br />
$ kubectl apply -f <nowiki>https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</nowiki><br />
$ kubectl apply -f <nowiki>https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml</nowiki><br />
<br />
* Make sure everything is coming up properly:<br />
$ kubectl get pods --all-namespaces --watch<br />
Once the <code>kube-dns-xxxx</code> containers are up (i.e., in Status "Running"), your cluster is ready to accept worker nodes.<br />
<br />
* On each of the Worker nodes, run the <code>sudo kubeadm join ...</code> command that <code>kubeadm init</code> created for you (see above).<br />
<br />
* On the Master Node, run the following command:<br />
$ kubectl get nodes --watch<br />
Once the Status of the Worker Nodes returns "Ready", your k8s cluster is ready to use.<br />
<br />
* Example output of successful Kubernetes cluster:<br />
<pre><br />
$ kubectl get nodes<br />
NAME STATUS ROLES AGE VERSION<br />
k8s-01 Ready master 13m v1.10.1<br />
k8s-02 Ready <none> 12m v1.10.1<br />
k8s-03 Ready <none> 12m v1.10.1<br />
</pre><br />
<br />
That's it! You are now ready to start deploying Pods, Deployments, Services, etc. in your Kubernetes cluster!<br />
<br />
==Bash completion==<br />
''Note: The following only works on newer versions of <code>kubectl</code>. I have tested that this works on version 1.9.1.''<br />
<br />
Add the following line to your <code>~/.bashrc</code> file:<br />
source <(kubectl completion bash)<br />
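<br />
If you alias <code>kubectl</code> to <code>k</code>, completion can be extended to the alias as well:<br />
alias k=kubectl<br />
complete -F __start_kubectl k<br />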
<br />
==Kubectl plugins==<br />
<br />
SEE: [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ Extend kubectl with plugins] for details.<br />
<br />
: FEATURE STATE: Kubernetes v1.11 (alpha)<br />
: FEATURE STATE: Kubernetes v1.15 (stable)<br />
<br />
This section shows you how to install and write extensions for <code>kubectl</code>. Usually called "plugins" or "binary extensions", this feature allows you to extend the default set of commands available in <code>kubectl</code> by adding new sub-commands to perform new tasks and extend the set of features available in the main distribution of <code>kubectl</code>.<br />
<br />
Get code [https://github.com/kubernetes/kubernetes/tree/master/pkg/kubectl/plugins/examples from here].<br />
<br />
<pre><br />
.kube/<br />
└── plugins<br />
└── aging<br />
├── aging.rb<br />
└── plugin.yaml<br />
</pre><br />
<br />
$ chmod 0700 .kube/plugins/aging/aging.rb<br />
<br />
* See options:<br />
<pre><br />
$ kubectl plugin aging --help<br />
Aging shows pods from the current namespace by age.<br />
<br />
Usage:<br />
kubectl plugin aging [flags] [options]<br />
</pre><br />
<br />
* Usage:<br />
<pre><br />
$ kubectl plugin aging<br />
The Magnificent Aging Plugin.<br />
<br />
nginx-deployment-67594d6bf6-5t8m9: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
<br />
nginx-deployment-67594d6bf6-6kw9j: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
<br />
nginx-deployment-67594d6bf6-d8dwt: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 6 hours and 8 minutes<br />
</pre><br />
<br />
==Local Kubernetes==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="6" bgcolor="#EFEFEF" | '''Local Kubernetes Comparisons'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Feature<br />
!kind<br />
!k3d<br />
!minikube<br />
!Docker Desktop<br />
!Rancher Desktop<br />
|- <br />
| Free || yes || yes || yes || Personal / Small Business* || yes<br />
|--bgcolor="#eeeeee"<br />
| Install || easy || easy || easy || easy || medium (you may encounter odd scenarios)<br />
|-<br />
| Ease of Use || medium || medium || medium || easy || easy<br />
|--bgcolor="#eeeeee"<br />
| Stability || stable || stable || stable || stable || stable<br />
|-<br />
| Cross-platform || yes || yes || yes || yes || yes<br />
|--bgcolor="#eeeeee"<br />
| CI Usage || yes || yes || yes || no || no<br />
|-<br />
| Multiple clusters || yes || yes || yes || no || no<br />
|--bgcolor="#eeeeee"<br />
| Podman support || yes || yes || yes || no || no<br />
|-<br />
| Host volumes mount support || yes || yes || yes (with some performance limitations) || yes || yes (only pre-defined paths)<br />
|--bgcolor="#eeeeee"<br />
| Kubernetes service port-forwarding/mapping || yes || yes || yes || yes || yes<br />
|-<br />
| Pull-through Docker mirror/proxy || yes || yes || no || yes (can reference locally available images) || yes (can reference locally available images)<br />
|--bgcolor="#eeeeee"<br />
| Custom CNI || yes (ex: calico) || yes (ex: flannel) || yes (ex: calico) || no || no<br />
|-<br />
| Features Gates || yes || yes || yes || yes (but not natively; requires hacky setup) || yes (but not natively; requires hacky setup)<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
[https://bmiguel-teixeira.medium.com/local-kubernetes-the-one-above-all-3aedbeb5f3f6 Source]<br />
<br />
==See also==<br />
* [[Kubernetes/the-hard-way|Kubernetes the Hard Way]]<br />
* [[Kubernetes/GKE|Google Kubernetes Engine]] (GKE)<br />
* [[Kubernetes/AWS|Kubernetes on AWS]] (EKS)<br />
* [[Kubeless]]<br />
* [[Helm]]<br />
<br />
==External links==<br />
* [http://kubernetes.io/ Official website]<br />
* [https://github.com/kubernetes/kubernetes Kubernetes code] &mdash; via GitHub<br />
===Playgrounds===<br />
* [https://www.katacoda.com/courses/kubernetes/playground Kubernetes Playground]<br />
* [https://labs.play-with-k8s.com Play with k8s]<br />
===Tools===<br />
* [https://github.com/kubernetes/minikube minikube] &mdash; Run Kubernetes locally<br />
* [https://kind.sigs.k8s.io/ kind] &mdash; '''K'''ubernetes '''IN''' '''D'''ocker (local clusters for testing Kubernetes)<br />
* [https://github.com/kubernetes/kops kops] &mdash; Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management<br />
* [https://kubernetes-incubator.github.io/kube-aws kube-aws] &mdash; a command-line tool to create/update/destroy Kubernetes clusters on AWS<br />
* [https://github.com/kubernetes-incubator/kubespray kubespray] &mdash; Deploy a production ready kubernetes cluster<br />
* [https://rook.io/ Rook.io] &mdash; File, Block, and Object Storage Services for your Cloud-Native Environments<br />
===Resources===<br />
* [https://kubernetes.io/docs/getting-started-guides/scratch/ Creating a Custom Cluster from Scratch]<br />
* [https://github.com/kelseyhightower/kubernetes-the-hard-way Kubernetes The Hard Way]<br />
* [http://k8sport.org/ K8sPort]<br />
* [https://k8s.af/ Kubernetes Failure Stories]<br />
<br />
===Training===<br />
* [https://kubernetes.io/training/ Official Kubernetes Training Website]<br />
** Kubernetes and Cloud Native Associate (KCNA)<br />
** Certified Kubernetes Application Developer (CKAD)<br />
** Certified Kubernetes Administrator (CKA)<br />
** Certified Kubernetes Security Specialist (CKS) [note: Candidates for CKS must hold a current Certified Kubernetes Administrator (CKA) certification to demonstrate they possess sufficient Kubernetes expertise before sitting for the CKS.]<br />
* [https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals Kubernetes Fundamentals] (LFS258)<br />
** ''[https://www.cncf.io/certification/expert/ Certified Kubernetes Administrator]'' (CKA) certification.<br />
* [https://killer.sh/ CKS / CKA / CKAD Simulator]<br />
* [https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hacked/ 11 Ways (Not) to Get Hacked]<br />
<br />
===Blog posts===<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727 Understanding kubernetes networking: pods] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-services-f0cb48e4cc82 Understanding kubernetes networking: services] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/understanding-kubernetes-networking-ingress-1bc341c84078 Understanding kubernetes networking: ingress] &mdash; by Mark Betz, 2017-12-17<br />
* [https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-68d061f7ab5b Kubernetes ConfigMaps and Secrets - Part 1] &mdash; by Sandeep Dinesh, 2017-07-13<br />
* [https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-part-2-3dc37111f0dc Kubernetes ConfigMaps and Secrets - Part 2] &mdash; by Sandeep Dinesh, 2017-08-08<br />
* [https://abhishek-tiwari.com/10-open-source-tools-for-highly-effective-kubernetes-sre-and-ops-teams/ 10 open-source Kubernetes tools for highly effective SRE and Ops Teams]<br />
* [https://www.ianlewis.org/en/tag/kubernetes Series of blog posts about k8s] &mdash; by Ian Lewis<br />
* [https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0 Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?] &mdash; by Sandeep Dinesh, 2018-03-11<br />
<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:DevOps]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Analogue_synthesizers&diff=8246Analogue synthesizers2022-10-31T20:37:13Z<p>Christoph: /* Glossary */</p>
<hr />
<div>An '''analogue''' (or '''analog''') '''synthesizer''' is a synthesizer that uses analogue circuits and analogue signals to generate sound electronically.<br />
<br />
==Electronic oscillators==<br />
* Voltage-controlled oscillator (VCO)<br />
* Low-frequency oscillation (LFO)<br />
* Numerically-controlled oscillator (NCO)<br />
* Variable-frequency oscillator (VFO)<br />
* Variable-gain amplifier<br />
* Voltage-controlled filter (VCF)<br />
* Modular synthesizer<br />
<br />
==CV/gate==<br />
<br />
; Gate: high voltage over time<br />
; Trigger: short voltage spike<br />
; Pitch CV: variable CV to control 1 volt/octave oscillators<br />
<br />
==Glossary==<br />
* [[:Wikipedia:CV/gate|CV/gate]] (an abbreviation of control voltage/gate) &mdash; an analogue method of controlling synthesizers, drum machines, and similar equipment with external sequencers. The control voltage typically controls pitch and the gate signal controls note on-off.<br />
* [[:Wikipedia:Digital audio workstation|Digital audio workstation]] (DAW) &mdash; an electronic device or application software used for recording, editing, and producing audio files.<br />
<br />
[[Category:Hobbies]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Analogue_synthesizers&diff=8245Analogue synthesizers2022-10-01T20:44:43Z<p>Christoph: </p>
<hr />
<div>An '''analogue''' (or '''analog''') '''synthesizer''' is a synthesizer that uses analogue circuits and analogue signals to generate sound electronically.<br />
<br />
==Electronic oscillators==<br />
* Voltage-controlled oscillator (VCO)<br />
* Low-frequency oscillation (LFO)<br />
* Numerically-controlled oscillator (NCO)<br />
* Variable-frequency oscillator (VFO)<br />
* Variable-gain amplifier<br />
* Voltage-controlled filter (VCF)<br />
* Modular synthesizer<br />
<br />
==CV/gate==<br />
<br />
; Gate: high voltage over time<br />
; Trigger: short voltage spike<br />
; Pitch CV: variable CV to control 1 volt/octave oscillators<br />
<br />
==Glossary==<br />
* [[:Wikipedia:CV/gate|CV/gate]] (an abbreviation of control voltage/gate) &mdash; an analogue method of controlling synthesizers, drum machines, and similar equipment with external sequencers. The control voltage typically controls pitch and the gate signal controls note on-off.<br />
<br />
[[Category:Hobbies]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=GitHub_Actions&diff=8244GitHub Actions2022-09-26T23:08:15Z<p>Christoph: </p>
<hr />
<div>'''GitHub Actions''' is a service provided by GitHub that allows building continuous integration and continuous deployment (CI/CD) pipelines for testing, releasing, and deploying software without the use of third-party websites/platforms.<br />
<br />
==Introduction==<br />
<br />
GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that build and test every pull request to your repository or deploy merged pull requests to production.<br />
<br />
GitHub Actions is made up of several components:<br />
<br />
;Workflows : A workflow is a configurable automated process that will run one or more jobs.<br />
;Events : An event is a specific activity in a repository that triggers a workflow run. For example, activity can originate from GitHub when someone creates a pull request, opens an issue, or pushes a commit to a repository.<br />
;Jobs : A job is a set of steps in a workflow, executed on the same runner.<br />
;Actions : An action performs a frequently repeated task; for example, an action can pull your Git repository from GitHub.<br />
;Runners : A runner is a server that runs your workflows when they’re triggered.<br />
<br />
==Examples==<br />
<br />
===Basic===<br />
<br />
<pre><br />
$ cat .github/workflows/simple.yaml<br />
<br />
name: Shell Commands<br />
<br />
on: [push]<br />
<br />
jobs:<br />
  run-shell-command:<br />
    runs-on: ubuntu-latest<br />
    steps:<br />
      - name: echo a string<br />
        run: echo "Hello World"<br />
      - name: multiline script<br />
        run: |<br />
          node -v<br />
          npm -v<br />
      - name: python command<br />
        run: |<br />
          import platform<br />
          print(platform.processor())<br />
        shell: python<br />
</pre><br />
<br />
* Using the GitHub CLI:<br />
<pre><br />
$ gh workflow view<br />
? Select a workflow Shell Commands (simple.yaml)<br />
Shell Commands - simple.yaml<br />
ID: 13613098<br />
<br />
Total runs 1<br />
Recent runs<br />
✓ initial commit of .github/workflows/simple.yaml Shell Commands master push 1280462255<br />
<br />
To see more runs for this workflow, try: gh run list --workflow simple.yaml<br />
To see the YAML for this workflow, try: gh workflow view simple.yaml --yaml<br />
<br />
$ gh workflow view simple.yaml --yaml<br />
Shell Commands - simple.yaml<br />
ID: 13613098<br />
<br />
name: Shell Commands<br />
<br />
on: [push]<br />
<br />
jobs:<br />
  run-shell-command:<br />
    runs-on: ubuntu-latest<br />
    steps:<br />
      - name: echo a string<br />
        run: echo "Hello World"<br />
      - name: multiline script<br />
        run: |<br />
          node -v<br />
          npm -v<br />
      - name: python command<br />
        run: |<br />
          import platform<br />
          print(platform.processor())<br />
        shell: python<br />
<br />
<br />
$ gh run list --workflow simple.yaml<br />
STATUS NAME WORKFLOW BRANCH EVENT ID ELAPSED AGE<br />
✓ initial commit of .github/workflows/simple.yaml Shell Commands master push 1280462255 15s 15d<br />
<br />
For details on a run, try: gh run view <run-id><br />
<br />
$ gh run view 1280462255<br />
<br />
✓ master Shell Commands · 1280462255<br />
Triggered via push about 16 days ago<br />
<br />
JOBS<br />
✓ run-shell-command in 1s (ID 3726651254)<br />
<br />
For more information about a job, try: gh run view --job=<job-id><br />
View this run on GitHub: https://github.com/christophchamp/github-actions/actions/runs/1280462255<br />
</pre><br />
<br />
==External links==<br />
* [https://docs.github.com/en/actions Official GitHub Actions documentation]<br />
* [https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions Understanding GitHub Actions]<br />
* [https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners About GitHub-hosted runners]<br />
* [https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/enabling-debug-logging Enabling debug logging]<br />
* [https://github.com/actions/virtual-environments/blob/main/images/linux/Ubuntu2004-README.md List of packages installed on Ubuntu by default]<br />
* [https://github.com/christophchamp/github-actions Christoph Champ's GitHub Actions demos]<br />
* [https://marketplace.visualstudio.com/items?itemName=me-dutour-mathieu.vscode-github-actions GitHub Actions YAML Extension] &mdash; for VS Code<br />
<br />
===GitHub CLI===<br />
* [https://cli.github.com/manual/ GitHub CLI Online Manual]<br />
* [https://github.blog/2021-04-15-work-with-github-actions-in-your-terminal-with-github-cli/ Work with GitHub Actions in your terminal with GitHub CLI]<br />
<br />
[[Category:DevOps]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Category:Books&diff=8243Category:Books2022-09-26T02:40:40Z<p>Christoph: /* Titles (completed) */</p>
<hr />
<div>My love of books runs deep. I try to read for at least an hour every day (books unrelated to my studies). This category will contain a list of the books I have read or [[Summer Reading List|am reading]].<br />
<br />
==Titles (completed)==<br />
''Note: These are a list of books I have read in their entirety. This is nowhere near a complete list and the following list is in no particular order.''<br />
<br />
#'''''From Dawn to Decadence: 1500 to the Present: 500 Years of Western Cultural Life''''' &mdash; by Jacques Barzun<br />
#'''''The Invention of Science: The Scientific Revolution from 1500 to 1750''''' &mdash; by David Wootton<br />
#'''''Predictably Irrational: The Hidden Forces That Shape Our Decisions''''' &mdash; by Dan Ariely (2008)<br />
#'''''The Tyranny of Experts: Economists, Dictators, and the Forgotten Rights of the Poor''''' &mdash; by William Easterly<br />
#'''''The Origins of Political Order: From Prehuman Times to the French Revolution''''' &mdash; by Francis Fukuyama<br />
#'''''Political Order and Political Decay: From the Industrial Revolution to the Globalization of Democracy''''' &mdash; by Francis Fukuyama<br />
#'''''Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World''''' &mdash; by Bruce Schneier<br />
#'''''Superintelligence: Paths, Dangers, Strategies''''' &mdash; by Nick Bostrom<br />
#'''''Smashing Physics''''' &mdash; by Jon Butterworth<br />
#'''''The History of the Ancient World: From the Earliest Accounts to the Fall of Rome''''' &mdash; by Susan Wise Bauer<br />
#'''''The History of the Medieval World: From the Conversion of Constantine to the First Crusade''''' &mdash; by Susan Wise Bauer<br />
#'''''The History of the Renaissance World: From the Rediscovery of Aristotle to the Conquest of Constantinople''''' &mdash; by Susan Wise Bauer<br />
#'''''The Well Educated Mind: A Guide to the Classical Education You Never Had''''' &mdash; by Susan Wise Bauer<br />
#'''''The Story of Western Science: From the Writings of Aristotle to the Big Bang Theory''''' &mdash; by Susan Wise Bauer (2015)<br />
#'''''Countdown to Zero Day''''' &mdash; by Kim Zetter<br />
#'''''The Revenge of Geography''''' &mdash; by Robert D. Kaplan<br />
#'''''The Master of Disguise''''' &mdash; by Antonio J. Mendez<br />
#'''''To Explain the World: The Discovery of Modern Science''''' &mdash; by Steven Weinberg (2015)<br />
#'''''The Fall of the Roman Empire''''' &mdash; by Peter Heather<br />
#'''''The Shadow Factory''''' &mdash; by James Bamford<br />
#'''''Operation Shakespeare''''' &mdash; by John Shiffman<br />
#'''''No Place to Hide''''' &mdash; by Glenn Greenwald<br />
#'''''Neanderthal Man: In Search of Lost Genomes''''' &mdash; by Svante Pääbo (2014)<br />
#'''''Constantine the Emperor''''' &mdash; by David Potter<br />
#'''''A Troublesome Inheritance''''' &mdash; by Nicholas Wade<br />
#'''''The Selfish Gene''''' &mdash; by Richard Dawkins<br />
#'''''The 4-Hour Workweek: Escape 9-5, Live Anywhere, and Join the New Rich''''' &mdash; by [http://www.fourhourworkweek.com/blog/about/ Timothy Ferriss] (2007)<br />
#'''''Hackers: Heroes of the Computer Revolution''''' &mdash; by Steven Levy<br />
#'''''Wealth, Poverty, and Politics: An International Perspective''''' &mdash; Thomas Sowell<br />
#'''''The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win''''' &mdash; by Gene Kim, Kevin Behr, George Spafford<br />
#'''''Paper: Paging Through History''''' &mdash; by Mark Kurlansky<br />
#'''''Salt: A World History''''' &mdash; by Mark Kurlansky<br />
#'''''Guns, Germs, and Steel: The Fates of Human Societies''''' &mdash; by Jared Diamond (1997)<br />
#'''''Collapse: How Societies Choose to Fail or Succeed''''' &mdash; by Jared Diamond (2005)<br />
#'''''The Better Angels of Our Nature: Why Violence Has Declined''''' &mdash; by Steven Pinker<br />
#'''''How to Win Friends & Influence People''''' &mdash; by Dale Carnegie (1936)<br />
#'''''[[The True Believer: Thoughts on the Nature of Mass Movements]]''''' &mdash; Eric Hoffer (1951)<br />
#'''''An Economic History of the World since 1400''''' &mdash; by Professor Donald J. Harreld<br />
#'''''The End of the Cold War 1985-1991''''' &mdash; by Robert Service<br />
#'''''Iron Kingdom: The Rise and Downfall of Prussia, 1600-1947''''' &mdash; by Christopher Clark<br />
#'''''[https://www.goodreads.com/book/show/12158480-why-nations-fail Why Nations Fail: The Origins of Power, Prosperity, and Poverty]''''' &mdash; by Daron Acemoğlu and James A. Robinson (2012)<br />
#'''''The Six Wives of Henry VIII''''' &mdash; by Alison Weir (1991)<br />
#'''''The Demon-Haunted World: Science as a Candle in the Dark''''' &mdash; by Carl Sagan (1996)<br />
#'''''Dark Territory: The Secret History of Cyber War''''' &mdash; by Fred Kaplan (2016)<br />
#'''''A Brief History of Britain 1066-1485''''' &mdash; by Nicholas Vincent (2012)<br />
#'''''The History of Science: 1700-1900''''' &mdash; by Professor Frederick Gregory (2003)<br />
#'''''Heart of Europe: A History of the Holy Roman Empire''''' &mdash; by Peter H. Wilson (2016)<br />
#'''''[[The Story of Civilization]] - Volume 2: The Life of Greece''''' &mdash; by Will Durant (1939)<br />
#'''''The Story of Civilization - Volume 3: Caesar and Christ''''' &mdash; by Will Durant (1944)<br />
#'''''The Story of Civilization - Volume 4: The Age of Faith''''' &mdash; by Will Durant (1950)<br />
#'''''Red Sparrow''''' &mdash; by Jason Matthews (2013)<br />
#'''''Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time''''' &mdash; by Dava Sobel (1995)<br />
#'''''The Medici: Power, Money, and Ambition in the Italian Renaissance''''' &mdash; by Paul Strathern (2016)<br />
#'''''The Venetians: A New History: From Marco Polo to Casanova''''' &mdash; by Paul Strathern (2013)<br />
#'''''The Rise of Athens: The Story of the World's Greatest Civilization''''' &mdash; by Anthony Everitt (2016)<br />
#'''''Red Mars''''' &mdash; by Kim Stanley Robinson (1993)<br />
#'''''The Clockwork Universe: Isaac Newton, The Royal Society, and the Birth of the Modern World''''' &mdash; by Edward Dolnick (2011)<br />
#'''''The Skeptics' Guide to the Universe: How to Know What's Really Real in a World Increasingly Full of Fake''''' &mdash; by Steven Novella (2018)<br />
#'''''New Thinking: From Einstein to Artificial Intelligence, the Science and Technology That Transformed Our World''''' &mdash; by Dagogo Altraide (2019)<br />
#'''''Flashpoints: The Emerging Crisis in Europe''''' &mdash; by George Friedman (2015)<br />
#'''''The War on Science: Who's Waging It, Why It Matters, What We Can Do About It''''' &mdash; by Shawn Lawrence Otto (2016)<br />
#'''''Permanent Record''''' &mdash; by Edward Snowden (2019)<br />
#'''''Mythos: The Greek Myths Reimagined''''' &mdash; by Stephen Fry (2019)<br />
#'''''Heroes: The Greek Myths Reimagined''''' &mdash; by Stephen Fry (2020)<br />
#'''''Troy: The Greek Myths Reimagined''''' &mdash; by Stephen Fry (2021)<br />
#'''''I Contain Multitudes: The Microbes Within Us and a Grander View of Life''''' &mdash; by Ed Yong (2016)<br />
#'''''How to Read a Book''''' &mdash; by Mortimer J. Adler and Charles Van Doren (1940)<br />
#'''''The Order: A Novel''''' &mdash; by Daniel Silva (2020)<br />
#'''''How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need''''' &mdash; by Bill Gates (2020)<br />
#'''''The Horse, the Wheel, and Language: How Bronze-Age Riders from the Eurasian Steppes Shaped the Modern World''''' &mdash; by David W. Anthony (2007)<br />
#'''''The Map of Knowledge: A Thousand-Year History of How Classical Ideas Were Lost and Found''''' &mdash; by Violet Moller (2019)<br />
#'''''Sapiens: A Brief History of Humankind''''' &mdash; by Yuval Noah Harari (2015)<br />
#'''''The Ascent of Money: A Financial History of the World''''' &mdash; by Niall Ferguson (2008)<br />
#'''''Civilization: The West and the Rest''''' &mdash; by Niall Ferguson (2011)<br />
#'''''Empire: How Britain Made the Modern World''''' &mdash; by Niall Ferguson (2017)<br />
#'''''The Square and the Tower: Networks and Power, from the Freemasons to Facebook''''' &mdash; by Niall Ferguson (2018)<br />
#'''''The House of Rothschild, Volume 1: Money's Prophets: 1798-1848''''' &mdash; by Niall Ferguson (2019)<br />
#'''''Doom: The Politics of Catastrophe''''' &mdash; by Niall Ferguson (2021)<br />
#'''''The Accidental Superpower: The Next Generation of American Preeminence and the Coming Global Disorder''''' &mdash; by Peter Zeihan (2014)<br />
#'''''The Strange Death of Europe: Immigration, Identity, Islam''''' &mdash; by Douglas Murray (2017)<br />
#'''''The War on the West''''' &mdash; by Douglas Murray (2022)<br />
#'''''12 Rules for Life: An Antidote to Chaos''''' &mdash; by Jordan B. Peterson (2018)<br />
#'''''The Historian''''' &mdash; by Elizabeth Kostova (2009)<br />
<br />
==Titles (textbooks)==<br />
''Note: These are some of the textbooks I not only read in their entirety whilst in university, but studied them thoroughly. This is very much an incomplete list.''<br />
<br />
#'''''X-ray Structure Determination''''' &mdash; by Stout and Jensen<br />
#'''''Inferring Phylogenies''''' &mdash; by Joseph Felsenstein, Sinauer Associates, Inc. (2003)<br />
#'''''A Biologist's Guide to Analysis of DNA Microarray Data'''''<br />
#'''''Molecular Cell Biology''''' &mdash; by Scott MP, Matsudaira P, Lodish H, Darnell J, Zipursky L, Kaiser CA, Berk A, and Krieger M. W. H. Freeman, 5th Edition (2003)<br />
#'''''Guide to Analysis of DNA Microarray Data''''' &mdash; by Knudsen S, 2nd Edition (2004)<br />
#'''''General Chemistry''''' &mdash; by Darrell D. Ebbing and Steven D. Gammon, Houghton Mifflin Company, Boston, 6th Edition (1999)<br />
#'''''Organic Chemistry''''' &mdash; by Paula Yurkanis Bruice, Prentice Hall, New Jersey, 3rd Edition (2001)<br />
#'''''Principles and Techniques for an Integrated Chemistry Laboratory''''' &mdash; by David A. Aikens, ''et al.'', Waveland Press, Inc., Prospect Heights (1984)<br />
#'''''Physical Chemistry''''' &mdash; by Peter Atkins and Julio de Paula, W.H. Freeman and Company, New York, 7th Edition (2002)<br />
#'''''Biochemistry''''' &mdash; by Christopher K. Mathews, K. E. van Holde, and Kevin G. Ahern, Addison Wesley Longman, San Francisco, 3rd Edition (2000)<br />
#'''''Biology''''' &mdash; by Neil A. Campbell, The Benjamin/Cummings Publishing Company, Inc., Redwood City, 5th Edition (1999)<br />
#'''''Essential Cell Biology''''' &mdash; by Bruce Alberts, ''et al.'', Garland Publishing, Inc. New York (1998)<br />
#'''''Genetics: From Genes to Genomes''''' &mdash; by Leland H. Hartwell, ''et al.'', McGraw-Hill Companies, Inc. Boston (2000)<br />
#'''''Evolution: An Introduction''''' &mdash; by Stephen C. Stearns and Rolf F. Hoekstra, Oxford University Press, Oxford (2000)<br />
#'''''Physics for Scientists and Engineers''''' &mdash; published by Saunders College Publishing, Philadelphia, 5th Edition (2000)<br />
#'''''Physical Biochemistry''''' &mdash; by Kensal E. van Holde, W. Curtis Johnson, and P. Shing Ho, Prentice Hall, New Jersey (1998)<br />
#'''''Object-Oriented Software Development Using Java''''' &mdash; by Xiaoping Jia, Addison-Wesley, 2nd Edition<br />
#'''''Calculus''''' &mdash; by James Stewart<br />
#'''''Calculus: Early Transcendentals''''' &mdash; by James Stewart<br />
#'''''Single Variable Calculus: Early Transcendentals''''' &mdash; by James Stewart<br />
<br />
==Titles (uncategorized)==<br />
''Note: These are some of my favourite books that I have read. I have read others, but these stood out to me. This does not mean, in any way, that I necessarily agree with everything these books have to say; they just interested me.''<br />
#'''''The History of the Decline and Fall of the Roman Empire''''' &mdash; by Edward Gibbon (1776-1788) [http://www.gutenberg.org/browse/authors/g#a375][http://en.wikipedia.org/wiki/Outline_of_The_History_of_the_Decline_and_Fall_of_the_Roman_Empire]<br />
#'''''The House of Intellect''''' &mdash; by Jacques Barzun<br />
#'''''[http://librivox.org/thus-spake-zarathustra-by-friedrich-nietzsche/ Also sprach Zarathustra]''''' ("Thus Spoke Zarathustra") &mdash; by Friedrich Nietzsche (1883-5)<br />
#'''''Jenseits von Gut und Böse''''' ("Beyond Good and Evil") &mdash; by Friedrich Nietzsche (1886)<br />
#'''''Zur Genealogie der Moral''''' ("On the Genealogy of Morals") &mdash; by Friedrich Nietzsche (1887)<br />
#'''''Götzen-Dämmerung''''' ("Twilight of the Idols") &mdash; by Friedrich Nietzsche (1888)<br />
#'''''[http://librivox.org/the-antichrist-by-nietzsche/ Der Antichrist]''''' ("The Antichrist") &mdash; by Friedrich Nietzsche (1888)<br />
#'''''Ecce Homo''''' &mdash; by Friedrich Nietzsche (1888)<br />
#'''''Vom Nutzen und Nachtheil der Historie für das Leben '''''("On the Use and Abuse of History for Life") &mdash; by Friedrich Nietzsche (1874)<br />
#'''''Die Traumdeutung''''' ("The Interpretation of Dreams") &mdash; by Sigmund Freud (1899)<br />
#'''''Das Ich und das Es''''' ("The Ego and the Id") &mdash; by Sigmund Freud (1923)<br />
#'''''Die Zukunft einer Illusion''''' ("The Future of an Illusion") &mdash; by Sigmund Freud (1927) <br />
#'''''Das Unbehagen in der Kultur''''' ("Civilization and Its Discontents") &mdash; by Sigmund Freud (1929)<br />
#'''''[[:wikipedia:A History of the English-Speaking Peoples|A History of the English-Speaking Peoples]]''''' &mdash; by Winston Churchill (1956–58)<br />
#'''''The Notebooks of Don Rigoberto''''' &mdash; by Mario Vargas Llosa<br />
#'''''Die Waffen nieder!''''' ("Lay Down Your Arms!") &mdash; Baroness Bertha von Suttner (1889)<br />
#'''''Europe's Optical Illusion''''' (also: "The Great Illusion") &mdash; Sir Norman Angell (1909)<br />
#'''''Night''''' &mdash; by Elie Wiesel (1960)<br />
#'''''The End of Faith: Religion, Terror, and the Future of Reason''''' &mdash; by Sam Harris<br />
#'''''The Lexus and the Olive Tree: Understanding Globalization''''' &mdash; by Thomas L. Friedman<br />
#'''''The World Is Flat: A Brief History of the Twenty-first Century''''' &mdash; Thomas L. Friedman<br />
#'''''The Case For Goliath: How America Acts As The World's Government in the Twenty-first Century''''' &mdash; by Michael Mandelbaum<br />
#'''''Caesar's Commentaries: On the Gallic War And on the Civil War''''' &mdash; by Julius Caesar<br />
#'''''Cem Escovadas Antes de Ir para Cama''''' ("One Hundred Strokes of the Brush before Bed") &mdash; by Melissa Panarello<br />
#'''''Coryat's Crudities: Hastily gobled up in Five Moneth's Travels''''' &mdash; by Thomas Coryat (1611)<br />
#'''''Italian Hours''''' &mdash; by Henry James (1909)<br />
#'''''Italienische Reise''''' ("Italian Journey") &mdash; by Johann Wolfgang von Goethe (1816/1817).<br />
#'''''Diarios de motocicleta''''' ("The Motorcycle Diaries") &mdash; by Che Guevara (1951).<br />
#'''''The Prince of Tides''''' &mdash; by Pat Conroy (1986).<br />
#'''''Il Nome Della Rosa''''' ("The Name of the Rose") &mdash; by Umberto Eco (1980).<br />
#'''''Il Pendolo di Foucault''''' ("Foucault's Pendulum") &mdash; by Umberto Eco (1988).<br />
#'''''The Book of the Courtier''''' ("Il Cortegiano") &mdash; by Baldassare Castiglione (1528) [http://en.wikipedia.org/wiki/Sprezzatura].<br />
#'''''One Hundred Years of Solitude''''' &mdash; by Gabriel Garcia Marquez<br />
#'''''The Unbearable Lightness of Being: A Novel''''' &mdash; by Milan Kundera<br />
#'''''The Book of Laughter and Forgetting''''' &mdash; by Milan Kundera<br />
#'''''Masters of Rome''''' (series) &mdash; by Colleen McCullough<br />
#'''''The Wishing Game''''' &mdash; by Patrick Redmond<br />
#'''''The Measure Of All Things: The Seven-Year Odyssey and Hidden Error That Transformed the World''''' &mdash; by Ken Alder (2002)<br />
#'''''De la démocratie en Amérique''''' ("On Democracy in America") &mdash; by Alexis de Tocqueville (1835)<br />
#'''''The Anatomy of Revolution''''' &mdash; by Crane Brinton (1938)<br />
#'''''God and Gold: Britain, America, and the Making of the Modern World''''' &mdash; by Walter Russell Mead (2007)<br />
#'''''Black Mass: Apocalyptic Religion and the Death of Utopia''''' &mdash; by John Gray (2007)<br />
#'''''The Grand Chessboard: American Primacy and Its Geostrategic Imperatives''''' &mdash; by Zbigniew Brzezinski (1998)<br />
#'''''Kim''''' &mdash; by Rudyard Kipling (1901)<br />
#'''''The Lotus and the Wind''''' &mdash; by John Masters<br />
<br />
==Authors (uncategorized)==<br />
*[[wikipedia:Aldous Huxley|Aldous Huxley]] &mdash; [[Wikiquote:Aldous Huxley]]<br />
*[[wikipedia:Edgar Allan Poe|Edgar Allan Poe]] &mdash; [[Wikiquote:Edgar Allan Poe]]<br />
*[[wikipedia:Oscar Wilde|Oscar Wilde]] &mdash; [[Wikiquote:Oscar Wilde]]<br />
*[[wikipedia:George Orwell|George Orwell]] &mdash; [[Wikiquote:George Orwell]]<br />
*[[wikipedia:William Shakespeare|William Shakespeare]] &mdash; [[Wikiquote:William Shakespeare]]<br />
*[[wikipedia:Thomas Jefferson|Thomas Jefferson]] &mdash; [[Wikiquote:Thomas Jefferson]]<br />
*[[wikipedia:Mark Antony|Mark Antony]] &mdash; [[Wikiquote:Mark Antony]]<br />
*[[wikipedia:Jane Austen|Jane Austen]] &mdash; [[Wikiquote:Jane Austen]] ([http://en.wikipedia.org/wiki/Free_indirect_speech])<br />
*[[wikipedia:Albert Einstein|Albert Einstein]] &mdash; [[Wikiquote:Albert Einstein]]<br />
*[[Friedrich Nietzsche]] &mdash; [[Wikiquote:Friedrich Nietzsche]]<br />
*[[wikipedia:Sigmund Freud|Sigmund Freud]] &mdash; [[Wikiquote:Sigmund Freud]]<br />
*[[wikipedia:Plato|Plato]] &mdash; [[Wikiquote:Plato]]<br />
*[[wikipedia:Aristotle|Aristotle]] &mdash; [[Wikiquote:Aristotle]]<br />
*[[wikipedia:Baruch Spinoza|Baruch Spinoza]] (Benedictus de Spinoza; 1632–1677) &mdash; [[Wikiquote:Baruch Spinoza]]<br />
*[[wikipedia:Georg Wilhelm Friedrich Hegel|Georg Wilhelm Friedrich Hegel]] &mdash; [[Wikiquote:Georg Wilhelm Friedrich Hegel]]<br />
*[[wikipedia:Niccolò Machiavelli|Niccolò Machiavelli]] &mdash; [[Wikiquote:Niccolò Machiavelli]]<br />
*[[wikipedia:Immanuel Kant|Immanuel Kant]] &mdash; [[Wikiquote:Immanuel Kant]]<br />
*[[wikipedia:Lord Byron|Lord Byron]] (George Gordon Byron, 6th Baron Byron) &mdash; [[Wikiquote:Lord Byron]]<br />
*[[wikipedia:Mary Shelley|Mary Shelley]] &mdash; [[Wikiquote:Mary Shelley]]<br />
*[[wikipedia:Percy Bysshe Shelley|Percy Bysshe Shelley]] &mdash; [[Wikiquote:Percy Bysshe Shelley]]<br />
*[[wikipedia:Christopher Marlowe|Christopher Marlowe]] (1564–1593): English dramatist and poet. &mdash; [[Wikiquote:Christopher Marlowe]]<br />
*[[wikipedia:Francis Bacon|Francis Bacon]] &mdash; [[Wikiquote:Francis Bacon]]<br />
*[[wikipedia:Eric Hoffer|Eric Hoffer]] &mdash; [[Wikiquote:Eric Hoffer]]<br />
*[[wikipedia:Milton Friedman|Milton Friedman]] &mdash; [[Wikiquote:Milton Friedman]]<br />
*[[wikipedia:Roger Bacon|Roger Bacon]] (c. 1214-1294) &mdash; [[wikiquote:Roger Bacon]]<br />
*[[wikipedia:Charles Baudelaire|Charles Baudelaire]] (1821-1867) &mdash; [[wikiquote:Charles Baudelaire]]<br />
<br />
=== Authors (I have not read yet) ===<br />
* [[wikipedia:Simone De Beauvoir|Simone De Beauvoir]] (1908–1986): French existentialist, writer, and social essayist.<br />
* [[wikipedia:Jeremy Bentham|Jeremy Bentham]] (1748–1832): British jurist, eccentric, philosopher and social reformer, founder of utilitarianism. He had [[wikipedia:John Stuart Mill|John Stuart Mill]] as his disciple. (Quoted as saying "The spirit of dogmatic theology poisons anything it touches". ~ [http://www.positiveatheism.org/hist/quotes/quote-b0.htm].)<br />
* [[wikipedia:Albert Camus|Albert Camus]] (1913–1960): French philosopher and novelist, a luminary of existentialism.<br />
* [[wikipedia:Auguste Comte|Auguste Comte]] (1798–1857): French philosopher, considered the father of sociology. (Quoted as saying "The heavens declare the glory of Kepler and Newton". ~ [http://www.positiveatheism.org/hist/quotes/quote-c3.htm].)<br />
* [[wikipedia:André Comte-Sponville|André Comte-Sponville]] (1952–): French materialist philosopher.<br />
* [[wikipedia:Baron d'Holbach|Paul Henry Thiry, Baron d'Holbach]] (1723–1789): French homme de lettres, philosopher and encyclopedist, member of the philosophical movement of French materialism, attacked Christianity and religion as counter to the moral advancement of humanity.<br />
* [[wikipedia:Marquis de Condorcet|Marquis de Condorcet]] (1743–1794): French philosopher and mathematician of the Enlightenment.<br />
* [[wikipedia:Daniel Dennett|Daniel Dennett]] (1942–): American philosopher, leading figure in evolutionary biology and cognitive science, well-known for his book ''[[wikipedia:Darwin's Dangerous Idea|Darwin's Dangerous Idea]]''.<br />
* [[wikipedia:Denis Diderot|Denis Diderot]] (1713–1784): French philosopher, author, editor of the first encyclopedia. Known for the quote "Man will never be free until the last king is strangled with the entrails of the last priest".<br />
* [[wikipedia:Ludwig Andreas Feuerbach|Ludwig Andreas Feuerbach]] (1804–1872): German philosopher, postulated that God is merely a projection by humans of their own best qualities.<br />
* [[wikipedia:Paul Kurtz|Paul Kurtz]] (1926–): American philosopher, skeptic, founder of Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP) and the Council for Secular Humanism.<br />
* [[wikipedia:Karl Popper|Sir Karl Popper]] (1902–1994): Austrian-born British philosopher of science, who claimed that empirical falsifiability should be the criterion for distinguishing scientific theory from non-science.<br />
* [[wikipedia:Richard Rorty|Richard Rorty]] (1931–): American philosopher, whose ideas combine pragmatism with a [[wikipedia:Ludwig Wittgenstein|Wittgensteinian]] ontology that declares that meaning is a social-linguistic product of dialogue. He actually rejects the theist/atheist dichotomy and prefers to call himself "anti-clerical".<br />
* [[wikipedia:Bertrand Russell|Bertrand Russell, 3rd Earl Russell]], (1872–1970): British mathematician, philosopher, logician, political liberal, activist, popularizer of philosophy, and 1950 Nobel Laureate in Literature. On the issue of atheism/agnosticism, he wrote the essay "[[wikipedia:Why I Am Not a Christian|Why I Am Not a Christian]]".<br />
* [[wikipedia:Jean-Paul Sartre|Jean-Paul Sartre]] (1905–1980): French existentialist philosopher, dramatist, novelist and critic.<br />
* [[wikipedia:Peter Singer|Peter Singer]] (1946–): Australian philosopher and teacher, working on practical ethics from a utilitarian perspective, controversial for his opinions on abortion and euthanasia.<br />
* [[wikipedia:James Lovelock|James Lovelock]] (1919-) [[wikiquote:James Lovelock]]<br />
<br />
==External links==<br />
*[http://www.gutenberg.org/browse/scores/top Top 100 - Project Gutenberg]<br />
*[http://www.randomhouse.com/modernlibrary/100talkingpoints.html The Modern Library - 100 Best - Talking Points]<br />
*[http://www.randomhouse.com/modernlibrary/100bestnonfiction.html The Modern Library - 100 Best - Nonfiction]<br />
*[http://www.randomhouse.com/modernlibrary/100bestnovels.html The Modern Library - 100 Best - Novels]<br />
*[http://www.nytimes.com/pages/books/bestseller/ NY Times Best-Seller Lists]<br />
*[http://www.bookmooch.com/ BookMooch] &mdash; a free book trade and exchange community<br />
*[http://www.bookcrossing.com/ BookCrossing] &mdash; a free book club<br />
*[http://www.nndb.com/ Notable Names Database] (NNDB) &mdash; an online database of biographical details of notable people.<br />
*[http://wikisummaries.org/Main_Page WikiSummaries] &mdash; provides free book summaries<br />
*[http://www.fullbooks.com/ fullbooks.com]<br />
*[http://www.themodernword.com/eco/eco_writings.html Umberto Eco: His Own Writings]<br />
*[http://www.ulib.org/ UDL: Universal Digital Library] &mdash; has over 1.5 million books digitised.<br />
*[[wikipedia:List of historical novels]]<br />
<br />
{{stub}}</div>Christophhttp://wiki.christophchamp.com/index.php?title=Analogue_synthesizers&diff=8242Analogue synthesizers2022-09-25T23:23:39Z<p>Christoph: /* Electronic oscillators */</p>
<hr />
<div>An '''analogue''' (or '''analog''') '''synthesizer''' is a synthesizer that uses analogue circuits and analogue signals to generate sound electronically.<br />
<br />
==Electronic oscillators==<br />
* Voltage-controlled oscillator (VCO)<br />
* Low-frequency oscillation (LFO)<br />
* Numerically-controlled oscillator (NCO)<br />
* Variable-frequency oscillator (VFO)<br />
* Variable-gain amplifier<br />
* Voltage-controlled filter (VCF)<br />
* Modular synthesizer<br />
<br />
==Glossary==<br />
* [[:Wikipedia:CV/gate|CV/gate]] (an abbreviation of control voltage/gate) &mdash; an analogue method of controlling synthesizers, drum machines, and similar equipment with external sequencers. The control voltage typically controls pitch and the gate signal controls note on-off.<br />
<br />
[[Category:Hobbies]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Raspberry_Pi&diff=8241Raspberry Pi2022-09-24T22:57:15Z<p>Christoph: /* Tools */</p>
<hr />
<div>This article will be all about my '''Raspberry Pi''' projects.<br />
<br />
==Common commands==<br />
<br />
* Find Raspberry Pi IP address on your local WiFi network:<br />
$ sudo nmap -sP 10.0.0.0/24 | awk '/^Nmap/{ip=$NF}/B8:27:EB/{print ip}'<br />
<br />
#~OR~<br />
<br />
IFACE=eth0<br />
# trigger IPv6 neighbour discovery with link-local scope multicast:<br />
ping6 -c2 -I $IFACE ff02::1 > /dev/null<br />
# print the results, filtered by MAC address vendor prefix of Raspberry Pi Foundation:<br />
ip -6 neigh | grep b8:27:eb<br />
<br />
#~OR~<br />
<br />
$ arp-scan --interface=eth0 --localnet | grep b8:27:eb<br />
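<br />
A rough Python equivalent of the above (a minimal sketch; it assumes a Linux host with the <code>ip</code> utility and only finds hosts already present in the neighbour table):<br />
<pre><br />
#!/usr/bin/env python3<br />
"""Find Raspberry Pis in the neighbour table by MAC address prefix."""<br />
import subprocess<br />
<br />
# b8:27:eb is the Raspberry Pi Foundation OUI used in the commands above.<br />
PI_OUI = "b8:27:eb"<br />
<br />
def find_pis():<br />
    """Return (ip, mac) pairs whose MAC starts with the Pi OUI."""<br />
    out = subprocess.run(["ip", "neigh"], capture_output=True, text=True).stdout<br />
    pis = []<br />
    for line in out.splitlines():<br />
        fields = line.split()<br />
        # Typical line: "10.0.0.5 dev eth0 lladdr b8:27:eb:xx:xx:xx REACHABLE"<br />
        if "lladdr" in fields:<br />
            mac = fields[fields.index("lladdr") + 1]<br />
            if mac.lower().startswith(PI_OUI):<br />
                pis.append((fields[0], mac))<br />
    return pis<br />
<br />
if __name__ == "__main__":<br />
    for ip, mac in find_pis():<br />
        print(ip, mac)<br />
</pre><br />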
<br />
* Store the following in your <code>~/.ssh/config</code> file:<br />
<pre><br />
Host rpi<br />
HostName 10.x.x.x<br />
User pi<br />
ProxyCommand none<br />
TCPKeepAlive yes<br />
ServerAliveInterval 5<br />
PubkeyAuthentication no<br />
PreferredAuthentications keyboard-interactive,password<br />
</pre><br />
<br />
Then,<br />
$ ssh rpi<br />
<br />
Or, better yet, use SSH keys (e.g., run <code>ssh-copy-id pi@10.x.x.x</code> once and change <code>PubkeyAuthentication</code> to <code>yes</code> in the config above).<br />
<br />
* Find out where your Raspberry Pi was made and other details about the hardware:<br />
<pre><br />
$ cat /proc/cpuinfo | grep -E '^Hardware|^Revision|^Serial'<br />
Hardware : BCM2835<br />
Revision : a22082<br />
Serial : 0000000038e10351<br />
<br />
# ~OR~<br />
<br />
$ cat /proc/cpuinfo | grep -E '^Hardware|^Revision|^Serial'<br />
Hardware : BCM2711<br />
Revision : d03114<br />
Serial : 10000000ecaf3b49<br />
</pre><br />
<br />
Then, go [https://elinux.org/RPi_HardwareHistory here] or [https://www.raspberrypi.org/documentation/hardware/raspberrypi/revision-codes/README.md here] and, using the above hardware/revision codes, find out where your RPi was made.<br />
<br />
So, in my case, I have the following:<br />
* '''Raspberry Pi 3 Model B''' (1 GB) manufactured by Embest in 2016 (Q1).<br />
* '''Raspberry Pi 4 Model B''' (8 GB) manufactured by Sony in 2020 (Q2).<br />
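<br />
The new-style revision codes can also be decoded programmatically instead of looking them up by hand. A minimal Python sketch (field layout per the official revision-codes documentation linked above; the lookup tables below are abbreviated, so treat unlisted values as unknown):<br />
<pre><br />
#!/usr/bin/env python3<br />
"""Decode a new-style Raspberry Pi revision code (e.g. a22082, d03114)."""<br />
<br />
# Abbreviated lookup tables from the revision-codes documentation.<br />
MEMORY = {0: "256 MB", 1: "512 MB", 2: "1 GB", 3: "2 GB", 4: "4 GB", 5: "8 GB"}<br />
MANUFACTURER = {0: "Sony UK", 1: "Egoman", 2: "Embest", 3: "Sony Japan",<br />
                4: "Embest", 5: "Stadium"}<br />
<br />
def decode(rev_hex):<br />
    code = int(rev_hex, 16)<br />
    return {<br />
        "memory": MEMORY.get((code >> 20) & 0x7, "?"),<br />
        "manufacturer": MANUFACTURER.get((code >> 16) & 0xF, "?"),<br />
        "board_revision": code & 0xF,<br />
    }<br />
<br />
print(decode("a22082"))  # 1 GB, Embest  -- matches the RPi 3B above<br />
print(decode("d03114"))  # 8 GB, Sony UK -- matches the RPi 4B above<br />
</pre><br />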
<br />
===32-bit or 64-bit===<br />
<br />
$ arch || uname -a<br />
armv7l # <- 32-bit => ARMv7 Processor rev 4 (v7l)<br />
armv8 # <- 64-bit => ARMv8 Processor<br />
<br />
<pre><br />
$ tr '\0' '\n' </proc/device-tree/model;arch<br />
Raspberry Pi 3 Model B Rev 1.2<br />
armv7l<br />
$ tr '\0' '\n' </proc/device-tree/model;arch<br />
Raspberry Pi 4 Model B Rev 1.4<br />
armv7l<br />
</pre><br />
<br />
$ getconf LONG_BIT<br />
32 # <- 32-bit<br />
64 # <- 64-bit<br />
<br />
$ dpkg --print-architecture<br />
armhf<br />
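<br />
The same check is available from Python's standard library (note that <code>platform.architecture()</code> reports the bitness of the Python interpreter itself, which on a 64-bit kernel with a 32-bit userland will still be 32-bit):<br />
<pre><br />
import platform<br />
print(platform.machine())          # e.g. armv7l (32-bit) or aarch64 (64-bit)<br />
print(platform.architecture()[0])  # e.g. 32bit<br />
</pre><br />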
<br />
===Throttling===<br />
<!-- https://harlemsquirrel.github.io/shell/2019/01/05/monitoring-raspberry-pi-power-and-thermal-issues.html --><br />
<br />
$ vcgencmd get_throttled<br />
<br />
* [https://github.com/raspberrypi/firmware/commit/404dfef3b364b4533f70659eafdcefa3b68cd7ae source]:<br />
<pre><br />
Example: throttled=0x50005 means under-voltage is detected and the Pi is<br />
currently throttled, and both conditions have also occurred since last reboot.<br />
<br />
Bit meanings (per the official documentation):<br />
 0: under-voltage detected<br />
 1: arm frequency capped<br />
 2: currently throttled<br />
 3: soft temperature limit active<br />
16: under-voltage has occurred since last reboot<br />
17: arm frequency capping has occurred since last reboot<br />
18: throttling has occurred since last reboot<br />
19: soft temperature limit has occurred since last reboot<br />
</pre><br />
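<br />
A small Python sketch that runs the command and prints whichever flags are set (bit assignments as listed above):<br />
<pre><br />
#!/usr/bin/env python3<br />
"""Decode the bitmask reported by `vcgencmd get_throttled`."""<br />
import subprocess<br />
<br />
FLAGS = {<br />
    0: "under-voltage detected",<br />
    1: "arm frequency capped",<br />
    2: "currently throttled",<br />
    3: "soft temperature limit active",<br />
    16: "under-voltage has occurred since last reboot",<br />
    17: "arm frequency capping has occurred since last reboot",<br />
    18: "throttling has occurred since last reboot",<br />
    19: "soft temperature limit has occurred since last reboot",<br />
}<br />
<br />
out = subprocess.run(["vcgencmd", "get_throttled"],<br />
                     capture_output=True, text=True).stdout<br />
value = int(out.strip().split("=")[1], 16)  # output looks like "throttled=0x50000"<br />
for bit, label in FLAGS.items():<br />
    if value & (1 << bit):<br />
        print(label)<br />
</pre><br />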
<br />
===Over-clocking===<br />
<pre><br />
$ sudo cat /sys/devices/system/cpu/cpufreq/policy0/*<br />
0 1 2 3<br />
600000<br />
1200000<br />
600000<br />
355000<br />
0 1 2 3<br />
600000 1200000 <br />
conservative ondemand userspace powersave performance schedutil <br />
600000<br />
BCM2835 CPUFreq<br />
ondemand<br />
1200000<br />
600000<br />
<unsupported><br />
</pre><br />
<br />
===Video===<br />
<br />
* Capture a 10-second video with your camera module:<br />
<pre><br />
$ raspivid -o video.h264 -t 10000<br />
</pre><br />
<br />
==Useful commands==<br />
<br />
* Check which network the wireless adaptor is using:<br />
$ iwconfig<br />
* Print a list of the currently available wireless networks:<br />
$ iwlist wlan0 scan<br />
* Show details about the device's memory:<br />
$ cat /proc/meminfo<br />
* Show the size and number of partitions on the SD card or hard drive:<br />
$ cat /proc/partitions<br />
* Show the Linux kernel version the Raspberry Pi is running:<br />
$ cat /proc/version<br />
* Show all of the installed packages that are related to XXX:<br />
$ dpkg --get-selections | grep XXX<br />
* Show all of the installed packages:<br />
$ dpkg --get-selections<br />
* Show the IP address of the Raspberry Pi:<br />
$ hostname -I<br />
* List USB hardware connected to the Raspberry Pi:<br />
$ lsusb<br />
* Show the temperature of the CPU:<br />
$ vcgencmd measure_temp<br />
* Show the memory split between the CPU and GPU:<br />
$ vcgencmd get_mem arm && vcgencmd get_mem gpu<br />
* Display GPIO pinout (GUI-only):<br />
$ pinout<br />
<br />
==GPIO==<br />
<br />
* Light up an LED:<br />
<pre><br />
$ sudo -i<br />
# Use GPIO pin 27 by creating a virtual file:<br />
$ echo "27" > /sys/class/gpio/export<br />
# Set pin 27 to "out" mode (allows us to turn it on/off):<br />
$ echo "out" > /sys/class/gpio/gpio27/direction<br />
# Turn pin on/off:<br />
$ echo "1" > /sys/class/gpio/gpio27/value<br />
$ echo "0" > /sys/class/gpio/gpio27/value<br />
$ exit<br />
</pre><br />
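<br />
The same sequence can be scripted with plain file writes. A minimal Python sketch (note: the sysfs GPIO interface used here is deprecated on recent kernels in favour of libgpiod, so treat this as a legacy example; run it as root):<br />
<pre><br />
#!/usr/bin/env python3<br />
"""Blink an LED on GPIO 27 via the (legacy) sysfs interface."""<br />
import time<br />
<br />
GPIO = "27"<br />
BASE = "/sys/class/gpio"<br />
<br />
def write(path, value):<br />
    with open(path, "w") as f:<br />
        f.write(value)<br />
<br />
try:<br />
    write(BASE + "/export", GPIO)                 # creates /sys/class/gpio/gpio27<br />
except OSError:<br />
    pass                                          # pin was already exported<br />
write(BASE + "/gpio" + GPIO + "/direction", "out")<br />
for _ in range(5):<br />
    write(BASE + "/gpio" + GPIO + "/value", "1")  # LED on<br />
    time.sleep(0.5)<br />
    write(BASE + "/gpio" + GPIO + "/value", "0")  # LED off<br />
    time.sleep(0.5)<br />
write(BASE + "/unexport", GPIO)                   # clean up<br />
</pre><br />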
<br />
===I2C===<br />
<br />
<pre><br />
$ sudo apt-get install -y python-smbus i2c-tools<br />
</pre><br />
<br />
* If you know an I2C device is connected to your RPi, but you do not know its 7-bit I2C address, use the following command to find it:<br />
<pre><br />
$ sudo i2cdetect -y 0<br />
</pre><br />
<br />
This will search <code>/dev/i2c-0</code> for all addresses, and if an MCP4725 DAC breakout is properly connected and set to its default address, it should show up at <code>0x62</code>.<br />
<br />
If you are using a 512MB Raspberry Pi (revision 2), you will need to use <code>/dev/i2c-1</code> by running:<br />
<pre><br />
$ sudo i2cdetect -y 1 # as the i2c port number changed from #0 to #1<br />
</pre><br />
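<br />
Once the address is known, the device can be driven from Python with the <code>python-smbus</code> package installed above. A rough sketch for setting the output of an MCP4725 DAC (the fast-mode write format here is an assumption based on the MCP4725 datasheet; verify it against your device):<br />
<pre><br />
#!/usr/bin/env python3<br />
"""Set an MCP4725 DAC output over I2C using python-smbus."""<br />
from smbus import SMBus<br />
<br />
I2C_BUS = 1      # use 0 on early boards (see the i2cdetect examples above)<br />
DAC_ADDR = 0x62  # default MCP4725 address, as found with i2cdetect<br />
<br />
def set_dac(bus, value):<br />
    """Write a 12-bit value (0-4095) using the MCP4725 fast-mode format."""<br />
    value &= 0x0FFF<br />
    # Fast mode: upper 4 bits in the first byte, lower 8 bits in the second.<br />
    bus.write_byte_data(DAC_ADDR, value >> 8, value & 0xFF)<br />
<br />
bus = SMBus(I2C_BUS)<br />
set_dac(bus, 2048)  # mid-scale, i.e. roughly half the supply voltage<br />
</pre><br />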
<br />
==Compute Modules==<br />
<br />
<div style="float:left; margin:0px 20px 20px 0px;"><br />
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"<br />
|-<br />
! colspan="4" bgcolor="#EFEFEF" | '''Part number options'''<br />
|-align="center" bgcolor="#1188ee"<br />
!Model<br />
!Wireless<br />
!RAM LPDDR4<br />
!eMMC Storage<br />
|- align="left"<br />
|'''CM4''' || 0 = No || 01 = 1 GB || 000 = 0 GB (Lite)<br />
|-<br />
| || 1 = Yes || 02 = 2 GB || 008 = 8 GB<br />
|-<br />
| || || 04 = 4 GB || 016 = 16 GB<br />
|-<br />
| || || 08 = 8 GB || 032 = 32 GB<br />
|}<br />
</div><br />
<br clear="all"/><br />
<br />
Example Part Number: '''CM4102032''': Raspberry Pi Compute Module 4, 2GB RAM, 32GB eMMC, Wireless, BCM2711, ARM Cortex-A72, RPL#SC0670B<br />
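<br />
This scheme is easy to decode programmatically. A small Python sketch (a convenience helper based on the table above, not an official tool):<br />
<pre><br />
#!/usr/bin/env python3<br />
"""Decode a Raspberry Pi CM4 part number, e.g. CM4102032."""<br />
<br />
def decode_cm4(part):<br />
    assert part.startswith("CM4") and len(part) == 9<br />
    wireless = part[3] == "1"<br />
    ram_gb = int(part[4:6])<br />
    emmc_gb = int(part[6:9])  # 000 means the "Lite" (no eMMC) variant<br />
    return {<br />
        "wireless": wireless,<br />
        "ram": "%d GB" % ram_gb,<br />
        "emmc": "Lite (no eMMC)" if emmc_gb == 0 else "%d GB" % emmc_gb,<br />
    }<br />
<br />
print(decode_cm4("CM4102032"))<br />
# {'wireless': True, 'ram': '2 GB', 'emmc': '32 GB'}<br />
</pre><br />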
<br />
==Miscellaneous==<br />
<br />
* Commands to remove Microsoft's repo and GPG key from your Pi:<br />
<pre><br />
$ sudo rm /etc/apt/sources.list.d/vscode.list<br />
$ sudo rm /etc/apt/trusted.gpg.d/microsoft.gpg<br />
$ sudo apt update<br />
</pre><br />
<br />
Use [https://vscodium.com/ VSCodium] instead.<br />
<br />
* Get the current draw (e.g., with the PoE HAT):<br />
<pre><br />
$ cat /sys/devices/platform/rpi-poe-power-supply@0/power_supply/rpi-poe/current_now<br />
601000<br />
# Note:<br />
# 601000 uA = ~0.6 A<br />
# 0.6 A @ 5 V (nominal) = ~3 W (P = IV)<br />
</pre><br />
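<br />
The same conversion, scripted (a trivial sketch; the sysfs path matches the example above and reports microamps):<br />
<pre><br />
#!/usr/bin/env python3<br />
"""Report current draw and approximate power for the PoE HAT."""<br />
<br />
PATH = ("/sys/devices/platform/rpi-poe-power-supply@0/"<br />
        "power_supply/rpi-poe/current_now")<br />
<br />
with open(PATH) as f:<br />
    microamps = int(f.read().strip())<br />
<br />
amps = microamps / 1e6    # sysfs value is in microamps<br />
watts = amps * 5.0        # P = IV at a nominal 5 V<br />
print("%.3f A -> ~%.2f W" % (amps, watts))<br />
</pre><br />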
<br />
* Turn a specified GPIO pin (e.g., pin 23) on/off:<br />
<pre><br />
echo "23" > /sys/class/gpio/export<br />
echo "out" > /sys/class/gpio/gpio23/direction<br />
echo "1" > /sys/class/gpio/gpio23/value<br />
echo "0" > /sys/class/gpio/gpio23/value<br />
</pre><br />
<br />
* Force HDMI hotplug (use HDMI output even if no monitor is detected at boot):<br />
<pre><br />
$ vi /boot/config.txt<br />
hdmi_force_hotplug=1<br />
</pre><br />
<br />
* Rescan the PCI bus to detect a newly added HDD without rebooting:<br />
<pre><br />
$ echo 1 | sudo tee /sys/bus/pci/rescan<br />
</pre><br />
<br />
==Memory==<br />
<br />
* 1GB: 4HBMGCJ<br />
* 2GB: D9WHZ<br />
* 4GB: D9WHV<br />
* 8GB: D9ZCL<br />
<br />
==Turing Pi v2==<br />
<br />
See: https://turingpi.com/turing-pi-v2-is-here/<br />
<br />
; Specs:<br />
* Mini ITX standard<br />
* 4x Nodes<br />
* Managed Switch, VLAN<br />
* HDMI<br />
* 2x Mini PCIe Gen2<br />
* 2x SATA III 6 Gbps<br />
* 2x 1 Gbps Ethernet <br />
* 4x USB 3.0 (2x Front / 2x Back)<br />
* GPIO 40-pin (RPi compatible)<br />
* 24-pin ATX power<br />
; Removed:<br />
* 4x Node Fan connector<br />
* 3x GPIO 40-pin<br />
* Audio-out 3.5mm<br />
; Added:<br />
* Nvidia Jetson Support<br />
* Board Management Controller with remote access<br />
* System Fan connector<br />
<br />
<pre><br />
---<br />
slot_1:<br />
- HDMI<br />
- GPIO<br />
- mPCIe<br />
slot_2:<br />
- mPCIe<br />
slot_3:<br />
- SATA<br />
slot_4:<br />
- USB<br />
</pre><br />
<br />
==External links==<br />
* [https://www.raspberrypi.org/ Official website]<br />
* [https://rpilocator.com/ rpilocator]<br />
<br />
===GPIO===<br />
* [https://pinout.xyz/ Interactive GPIO Pinout guide for the Raspberry Pi]<br />
* [http://rasp.io/portsplus/ Pinout PCB]<br />
* [https://github.com/splitbrain/rpibplusleaf Printable Pinout]<br />
<br />
===Tools===<br />
* [https://www.raspberrypi.com/documentation/accessories/camera.html libcamera]<br />
* [https://github.com/cz172638/v4l-utils video4linux] (<code>apt install v4l-utils</code>)<br />
* [https://github.com/billw2/rpi-clone A shell script to clone a booted disk]<br />
<br />
===Alternative OSes===<br />
<br />
* [https://sourceforge.net/projects/openmediavault/files/ openmediavault]<br />
* [https://ichigojam.github.io/RPi/ IchigoJam BASIC]<br />
* [https://volumio.org/ Volumio]<br />
* [https://blitterstudio.com/amiberry/ Amiberry]<br />
* [https://www.riscosopen.org/content/downloads/raspberry-pi RiscOS]<br />
* [https://github.com/FydeOS/chromium_os-raspberry_pi ChromiumOS]<br />
** [https://www.iottechtrends.com/install-chromium-os-on-raspberry-pi/ ChromiumOS Tutorial]<br />
* [https://github.com/sakaki-/gentoo-on-rpi-64bit Genpi64]<br />
* [https://retropie.org.uk/download/ RetroPie]<br />
<br />
===PCIe devices===<br />
* [https://pipci.jeffgeerling.com/ Raspberry Pi PCIe Devices] &mdash; by Jeff Geerling<br />
<br />
===Cases===<br />
* [https://www.jeffgeerling.com/blog/2021/argon-one-m2-raspberry-pi-ssd-case-review Argon One M.2 Raspberry Pi SSD Case Review] &mdash; by Jeff Geerling<br />
<br />
===Miscellaneous===<br />
* [https://www.raspberrypi.org/documentation/hardware/raspberrypi/power/README.md Power supply]<br />
* [https://chromium.github.io/octane/ Octane 2.0 JavaScript Benchmark]<br />
<br />
[[Category:Electronics]]<br />
[[Category:Technical and Specialized Skills]]<br />
[[Category:Hobbies]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Analogue_synthesizers&diff=8240Analogue synthesizers2022-09-22T22:06:41Z<p>Christoph: Created page with "An '''analogue''' (or '''analog''') '''synthesizer''' is a synthesizer that uses analogue circuits and analogue signals to generate sound electronically. ==Electronic oscilla..."</p>
<hr />
<div>An '''analogue''' (or '''analog''') '''synthesizer''' is a synthesizer that uses analogue circuits and analogue signals to generate sound electronically.<br />
<br />
==Electronic oscillators==<br />
* Voltage-controlled oscillator (VCO)<br />
* Low-frequency oscillation (LFO)<br />
* Numerically-controlled oscillator (NCO)<br />
* Variable-frequency oscillator (VFO)<br />
* Variable-gain amplifier<br />
* Voltage-controlled filter (VCF)<br />
* Modular synthesizer<br />
<br />
[[Category:Hobbies]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Category:Hobbies&diff=8239Category:Hobbies2022-09-22T22:04:12Z<p>Christoph: </p>
<hr />
<div>I have a great number of hobbies and am always adding more. Below is an ''incomplete'' list of my personal hobbies:<br />
<br />
*[[:Category:Electronics|Electronics]]<br />
**3D printing<br />
**Internet of Things (IoT)<br />
**Arduino<br />
**[[Raspberry Pi]]<br />
**ESP8266/ESP32<br />
**[[Analogue synthesizers]]<br />
*[[:Category:Chess|Chess]]<br />
*Currency<br />
**[[:Category:Coin Collection|Coin collecting]]<br />
**[[:Category:Banknote Collection|Banknote collecting]]<br />
*[[:Category:Stamp collecting|Stamp collecting]]<br />
*[[:Category:Wine and Gourmet Foods|Wine and Gourmet Foods]] (including wine labels and wine tasting)<br />
*[[:Category:Cameras and Photography|Cameras and Photography]] (including darkroom)<br />
*[[:Category:Watches|Watches]] (horology)<br />
*[[:Category:Astronomy|Amateur astronomy]]<br />
*[[:Category:Mountain climbing|Mountain climbing]]<br />
*[[:Category:Genealogy|Genealogy]]<br />
*[[:Category:DIY|DIY]] (aka do-it-yourself)<br />
*[[Personal records]]<br />
*[[Vexillology]] &mdash; the scholarly study of flags<br />
*[[:Category:Word collecting|Word collecting]] &mdash; Yes. I am a geek.<br />
*[[:Category:History|European history]]<br />
<br />
==External links==<br />
*[http://www.symbols.com/ Symbols.com] &mdash; Online Encyclopedia of Western Signs and Ideograms<br />
*[http://www.musictheory.net/ Ricci Adams' Musictheory.net] &mdash; a good place to learn about Music Theory.<br />
<br />
[[Category:Personal]]</div>Christophhttp://wiki.christophchamp.com/index.php?title=Category:Hobbies&diff=8238Category:Hobbies2022-09-22T22:03:33Z<p>Christoph: </p>
<hr />
<div>I have a great number of hobbies and am always adding more. Below is an ''incomplete'' list of my personal hobbies:<br />
<br />
*[[:Category:Electronics|Electronics]]<br />
**3D printing<br />
**Internet of Things (IoT)<br />
**Arduino<br />
**[[Raspberry Pi]]<br />
**ESP8266/ESP32<br />
**[[Analog synthesizers]]<br />
*[[:Category:Chess|Chess]]<br />
*Currency<br />
**[[:Category:Coin Collection|Coin collecting]]<br />
**[[:Category:Banknote Collection|Banknote collecting]]<br />
*[[:Category:Stamp collecting|Stamp collecting]]<br />
*[[:Category:Wine and Gourmet Foods|Wine and Gourmet Foods]] (including wine labels and wine tasting)<br />
*[[:Category:Cameras and Photography|Cameras and Photography]] (including darkroom)<br />
*[[:Category:Watches|Watches]] (horology)<br />
*[[:Category:Astronomy|Amateur astronomy]]<br />
*[[:Category:Mountain climbing|Mountain climbing]]<br />
*[[:Category:Genealogy|Genealogy]]<br />
*[[:Category:DIY|DIY]] (aka do-it-yourself)<br />
*[[Personal records]]<br />
*[[Vexillology]] &mdash; the scholarly study of flags<br />
*[[:Category:Word collecting|Word collecting]] &mdash; Yes. I am a geek.<br />
*[[:Category:History|European history]]<br />
<br />
==External links==<br />
*[http://www.symbols.com/ Symbols.com] &mdash; Online Encyclopedia of Western Signs and Ideograms<br />
*[http://www.musictheory.net/ Ricci Adams' Musictheory.net] &mdash; a good place to learn about Music Theory.<br />
<br />
[[Category:Personal]]</div>Christoph