Channel: DevOps tips & tricks

Deploying microservices to Kubernetes using OpenLiberty


OpenLiberty sample on GitHub
https://github.com/OpenLiberty/guide-kubernetes-intro

OpenLiberty tutorial
https://openliberty.io/guides/kubernetes-intro.html#what-youll-learn

Minikube installation guide
https://github.com/kubernetes/minikube#installation

Minikube Linux install
https://minikube.sigs.k8s.io/docs/start/linux/

Fedora Getting started with virtualization
https://docs.fedoraproject.org/en-US/quick-docs/getting-started-with-virtualization/
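The overall flow from the linked OpenLiberty guide can be sketched as follows (a sketch only — repo layout, image names, and the `kubernetes.yaml` file come from the guide-kubernetes-intro repository and may change over time):

```shell
# Clone the guide and work in its start/ directory.
git clone https://github.com/OpenLiberty/guide-kubernetes-intro.git
cd guide-kubernetes-intro/start

# Point the local Docker client at minikube's Docker daemon so
# the cluster can pull the locally built images.
eval $(minikube docker-env)

# Build the two microservices and their container images.
mvn package
docker build -t system:1.0-SNAPSHOT system/.
docker build -t inventory:1.0-SNAPSHOT inventory/.

# Deploy to the cluster and watch the pods come up.
kubectl apply -f kubernetes.yaml
kubectl get pods
```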


Dependencies resolved. 
==================================================================================================================================================
 Package                                      Architecture                Version                               Repository                   Size
==================================================================================================================================================
Installing group/module packages:
 virt-install                                 noarch                      2.2.1-2.fc31                          fedora                       64 k
 virt-manager                                 noarch                      2.2.1-2.fc31                          fedora                      543 k
 virt-viewer                                  x86_64                      8.0-3.fc31                            fedora                      404 k
Installing dependencies:
 autogen-libopts                              x86_64                      5.18.16-3.fc31                        fedora                       75 k
 gnutls-dane                                  x86_64                      3.6.10-1.fc31                         fedora                       27 k
 gnutls-utils                                 x86_64                      3.6.10-1.fc31                         fedora                      341 k
 libgovirt                                    x86_64                      0.3.4-9.fc30                          fedora                       71 k
 libvirt-bash-completion                      x86_64                      5.6.0-4.fc31                          fedora                       12 k
 libvirt-client                               x86_64                      5.6.0-4.fc31                          fedora                      343 k
 python3-libvirt                              x86_64                      5.6.0-1.fc31                          fedora                      294 k
 virt-manager-common                          noarch                      2.2.1-2.fc31                          fedora                      1.0 M
Installing Groups:
 Virtualization            


[dave@localhost finish]$ sudo systemctl start libvirtd
[dave@localhost finish]$ sudo systemctl enable libvirtd
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd.socket → /usr/lib/systemd/system/libvirtd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd-ro.socket → /usr/lib/systemd/system/libvirtd-ro.socket.
[dave@localhost finish]$ lsmod | grep kvm
kvm_intel 299008 0
kvm 770048 1 kvm_intel
irqbypass 16384 1 kvm


virt-host-validate reports several warnings and failures

[dave@localhost finish]$ virt-host-validate
QEMU: Checking for hardware virtualization : PASS
QEMU: Checking if device /dev/kvm exists : PASS
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'cpu' controller support : WARN (Enable 'cpu' in kernel Kconfig file or mount/enable cgroup controller in your system)
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuset' controller support : WARN (Enable 'cpuset' in kernel Kconfig file or mount/enable cgroup controller in your system)
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'devices' controller support : WARN (Enable 'devices' in kernel Kconfig file or mount/enable cgroup controller in your system)
QEMU: Checking for cgroup 'blkio' controller support : WARN (Enable 'blkio' in kernel Kconfig file or mount/enable cgroup controller in your system)
QEMU: Checking for device assignment IOMMU support : PASS
QEMU: Checking if IOMMU is enabled by kernel : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
LXC: Checking for Linux >= 2.6.26 : PASS
LXC: Checking for namespace ipc : PASS
LXC: Checking for namespace mnt : PASS
LXC: Checking for namespace pid : PASS
LXC: Checking for namespace uts : PASS
LXC: Checking for namespace net : PASS
LXC: Checking for namespace user : PASS
LXC: Checking for cgroup 'cpu' controller support : FAIL (Enable 'cpu' in kernel Kconfig file or mount/enable cgroup controller in your system)
LXC: Checking for cgroup 'cpuacct' controller support : PASS
LXC: Checking for cgroup 'cpuset' controller support : FAIL (Enable 'cpuset' in kernel Kconfig file or mount/enable cgroup controller in your system)
LXC: Checking for cgroup 'memory' controller support : PASS
LXC: Checking for cgroup 'devices' controller support : FAIL (Enable 'devices' in kernel Kconfig file or mount/enable cgroup controller in your system)
LXC: Checking for cgroup 'freezer' controller support : FAIL (Enable 'freezer' in kernel Kconfig file or mount/enable cgroup controller in your system)
LXC: Checking for cgroup 'blkio' controller support : FAIL (Enable 'blkio' in kernel Kconfig file or mount/enable cgroup controller in your system)
LXC: Checking if device /sys/fs/fuse/connections exists : PASS
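The IOMMU warning above suggests its own fix: add `intel_iommu=on` to the kernel command line. On Fedora this can be done with grubby (a sketch — use `amd_iommu=on` on AMD hardware, and double-check the argument for your platform):

```shell
# Append the IOMMU argument to every installed kernel's cmdline.
sudo grubby --update-kernel=ALL --args="intel_iommu=on"

# The change takes effect after a reboot; then re-run validation.
sudo reboot
virt-host-validate
```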


[root@localhost ~]# sudo dnf config-manager --add-repo=https://download.docker.com/linux/fedora/docker-ce.repo

Adding repo from: https://download.docker.com/linux/fedora/docker-ce.repo

 


After installing Docker, all cgroup checks pass; only the IOMMU warning remains

[root@localhost ~]# virt-host-validate

  QEMU: Checking for hardware virtualization                                 : PASS

  QEMU: Checking if device /dev/kvm exists                                   : PASS

  QEMU: Checking if device /dev/kvm is accessible                            : PASS

  QEMU: Checking if device /dev/vhost-net exists                             : PASS

  QEMU: Checking if device /dev/net/tun exists                               : PASS

  QEMU: Checking for cgroup 'cpu' controller support                         : PASS

  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS

  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS

  QEMU: Checking for cgroup 'memory' controller support                      : PASS

  QEMU: Checking for cgroup 'devices' controller support                     : PASS

  QEMU: Checking for cgroup 'blkio' controller support                       : PASS

  QEMU: Checking for device assignment IOMMU support                         : PASS

  QEMU: Checking if IOMMU is enabled by kernel                               : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)

   LXC: Checking for Linux >= 2.6.26                                         : PASS

   LXC: Checking for namespace ipc                                           : PASS

   LXC: Checking for namespace mnt                                           : PASS

   LXC: Checking for namespace pid                                           : PASS

   LXC: Checking for namespace uts                                           : PASS

   LXC: Checking for namespace net                                           : PASS

   LXC: Checking for namespace user                                          : PASS

   LXC: Checking for cgroup 'cpu' controller support                         : PASS

   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS

   LXC: Checking for cgroup 'cpuset' controller support                      : PASS

   LXC: Checking for cgroup 'memory' controller support                      : PASS

   LXC: Checking for cgroup 'devices' controller support                     : PASS

   LXC: Checking for cgroup 'freezer' controller support                     : PASS

   LXC: Checking for cgroup 'blkio' controller support                       : PASS

   LXC: Checking if device /sys/fs/fuse/connections exists                   : PASS

 


Start minikube

[dave@localhost ~]$ minikube start

😄  minikube v1.5.2 on Fedora 31

✨  Automatically selected the 'kvm2' driver (alternates: [none])

💾  Downloading driver docker-machine-driver-kvm2:

    > docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s

    > docker-machine-driver-kvm2: 13.87 MiB / 13.87 MiB  100.00% 8.94 MiB p/s 2

💿  Downloading VM boot image ...

    > minikube-v1.5.1.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s

    > minikube-v1.5.1.iso: 143.76 MiB / 143.76 MiB [-] 100.00% 17.68 MiB p/s 9s

🔥  Creating kvm2 VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...

 

[dave@localhost ~]$ minikube start

😄  minikube v1.5.2 on Fedora 31

💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.

🏃  Using the running kvm2 "minikube" VM ...

⌛  Waiting for the host to be provisioned ...

🐳  Preparing Kubernetes v1.16.2 on Docker '18.09.9' ...

💾  Downloading kubeadm v1.16.2

💾  Downloading kubelet v1.16.2

🔄  Relaunching Kubernetes using kubeadm ... 

 

The network configuration still needs work — relaunching the cluster fails

🔄  Relaunching Kubernetes using kubeadm ... 

 
 
 
 
 
💣  Error restarting cluster: waiting for apiserver: apiserver process never appeared
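One common recovery path when the apiserver never appears is to discard the broken VM and start fresh (a sketch — note this deletes the existing cluster state):

```shell
# Throw away the broken minikube VM and its state.
minikube delete

# Recreate the cluster with the kvm2 driver.
minikube start --vm-driver kvm2

# If it still fails, the logs usually point at the cause.
minikube logs
```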

 



Application deployment into Google Kubernetes Engine on Google Cloud

Kubernetes https://kubernetes.io/

Google Kubernetes Engine  https://cloud.google.com/container-engine

Sample app https://github.com/kelseyhightower/app
 It's a 12-Factor application with the following Docker images:
  • Monolith: includes auth and hello services.
  • Auth microservice: generates JWT tokens for authenticated users.
  • Hello microservice: greets authenticated users.
  • nginx: frontend to the auth and hello services.
 

    Tools

    The gcloud command-line interface is a tool that provides the primary CLI to Google Cloud Platform.
    https://cloud.google.com/sdk/gcloud/

    Access the Kubernetes pods


    Pods are allocated a private IP address by default that cannot be reached outside of the cluster. Use the kubectl port-forward command to map a local port to a port inside the monolith pod.

    kubectl port-forward myapp  9999:80  
     
     
     
    TOKEN=$(curl http://127.0.0.1:9999/login -u user | jq -r '.token')
     
     
     
    curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:9999/secure  


    Run shell inside pod

    kubectl exec --stdin --tty myapp -c myapp -- /bin/sh


    This uploads cert files from the local directory tls/ and stores them in a secret called tls-certs.

    kubectl create secret generic tls-certs --from-file tls/

    kubectl create configmap nginx-proxy-conf --from-file nginx/proxy.conf


    more nginx/proxy.conf

    server {

      listen 443;

      ssl    on;

     
      ssl_certificate     /etc/tls/cert.pem;

      ssl_certificate_key /etc/tls/key.pem;

     
      location / {

        proxy_pass http://127.0.0.1:80;

      }

    }
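The secret and configmap created above still have to be mounted where `proxy.conf` expects them (`/etc/tls` and the nginx config directory). An illustrative pod spec, applied via a heredoc — the pod name, labels, and image are placeholders, not part of the original tutorial:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: myapp-secure
  labels:
    app: myapp
    secure: enabled
spec:
  containers:
    - name: nginx
      image: nginx:1.17
      ports:
        - containerPort: 443
      volumeMounts:
        # cert.pem and key.pem land under /etc/tls, matching proxy.conf.
        - name: tls-certs
          mountPath: /etc/tls
        # proxy.conf is picked up from nginx's conf.d include directory.
        - name: nginx-proxy-conf
          mountPath: /etc/nginx/conf.d
  volumes:
    - name: tls-certs
      secret:
        secretName: tls-certs
    - name: nginx-proxy-conf
      configMap:
        name: nginx-proxy-conf
EOF
```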


    Set up the firewall
    gcloud compute firewall-rules create allow-myapp-nodeport --allow=tcp:31000



    NAME                  NETWORK  DIRECTION  PRIORITY  ALLOW      DENY  DISABLED
    allow-myapp-nodeport  default  INGRESS    1000      tcp:31000        False


    Get pods with secure=enabled

    kubectl get pods -l "app=myapp,secure=enabled"


    Get endpoints

    kubectl get endpoints monolith




    Continuous Deployment (CD) with Jenkins and Kubernetes on Google Cloud


    Jenkins on Kubernetes Engine

    https://cloud.google.com/solutions/jenkins-on-kubernetes-engine

    https://cloud.google.com/solutions/jenkins-on-kubernetes-engine-tutorial

    Provision a Jenkins environment on a Kubernetes Engine Cluster, using the Helm Package Manager.
    Google Kubernetes Engine (GKE) is the hosted version of Kubernetes on Google Cloud Platform (GCP).

    Create k8s cluster

    gcloud container clusters create jenkins-cd \

      --num-nodes 2 \

      --machine-type n1-standard-2 \

      --cluster-version 1.13 \

      --service-account "jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com"
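Once the cluster is created, kubectl needs credentials for it. A sketch — the zone or region flag depends on how and where the cluster was created:

```shell
# Fetch credentials and merge them into the local kubeconfig.
gcloud container clusters get-credentials jenkins-cd

# Sanity check: the control plane should answer.
kubectl cluster-info
```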


     Jenkins Kubernetes plugin
    https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+Plugin 

    Scale

    kubectl --namespace=production scale deployment gceme-frontend-production --replicas=4


    Port forward

    export DEV_POD_NAME=$(kubectl get pods -n new-feature -l "app=gceme,env=dev,role=frontend" -o jsonpath="{.items[0].metadata.name}")

    kubectl port-forward -n new-feature $DEV_POD_NAME 8001:80 >> /dev/null &




    export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)


    while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1; done






    Install microk8s on Fedora

    HOWTO

    https://microk8s.io/#get-started


    Install snap

    https://snapcraft.io/docs/installing-snapd

    https://www.cyberciti.biz/faq/install-snapd-on-fedora-linux-system-dnf-command/

    [dave@localhost ~]$ sudo dnf install snapd

    Fedora Modular 31 - x86_64 - Updates                                                                         40 kB/s |  23 kB     00:00    

    Fedora 31 - x86_64 - Updates                                                                                 27 kB/s |  24 kB     00:00    

    Fedora 31 - x86_64 - Updates                                                                                416 kB/s | 421 kB     00:01    

    Dependencies resolved.

    ============================================================================================================================================

     Package                             Architecture                 Version                               Repository                     Size

    ============================================================================================================================================

    Installing:

     snapd                               x86_64                       2.42.2-1.fc31                         updates                        16 M

    Installing dependencies:

     snap-confine                        x86_64                       2.42.2-1.fc31                         updates                       2.4 M

     snapd-selinux                       noarch                       2.42.2-1.fc31                         updates                       220 k

     
    Transaction Summary

    ============================================================================================================================================

    Install  3 Packages

     
    Total download size: 18 M

    Installed size: 70 M

    Is this ok [y/N]: y

    Downloading Packages:

    (1/3): snapd-selinux-2.42.2-1.fc31.noarch.rpm                                                               912 kB/s | 220 kB     00:00    

    (2/3): snap-confine-2.42.2-1.fc31.x86_64.rpm                                                                2.9 MB/s | 2.4 MB     00:00    

    (3/3): snapd-2.42.2-1.fc31.x86_64.rpm                                                                       7.9 MB/s |  16 MB     00:02    

    --------------------------------------------------------------------------------------------------------------------------------------------

    Total                                                                                                       7.3 MB/s |  18 MB     00:02     

    Running transaction check

    Transaction check succeeded.

    Running transaction test

    Transaction test succeeded.

    Running transaction

      Preparing        :                                                                                                                    1/1 

      Running scriptlet: snapd-selinux-2.42.2-1.fc31.noarch                                                                                 1/3 

      Installing       : snapd-selinux-2.42.2-1.fc31.noarch                                                                                 1/3 

      Running scriptlet: snapd-selinux-2.42.2-1.fc31.noarch                                                                                 1/3 

      Installing       : snap-confine-2.42.2-1.fc31.x86_64                                                                                  2/3 

      Installing       : snapd-2.42.2-1.fc31.x86_64                                                                                         3/3 

      Running scriptlet: snapd-2.42.2-1.fc31.x86_64                                                                                         3/3 

    Created symlink /etc/systemd/system/sockets.target.wants/snapd.socket → /usr/lib/systemd/system/snapd.socket.

    Created symlink /etc/systemd/user/sockets.target.wants/snapd.session-agent.socket → /usr/lib/systemd/user/snapd.session-agent.socket.

     
      Running scriptlet: snapd-selinux-2.42.2-1.fc31.noarch                                                                                 3/3 

      Running scriptlet: snapd-2.42.2-1.fc31.x86_64                                                                                         3/3 

      Verifying        : snap-confine-2.42.2-1.fc31.x86_64                                                                                  1/3 

      Verifying        : snapd-2.42.2-1.fc31.x86_64                                                                                         2/3 

      Verifying        : snapd-selinux-2.42.2-1.fc31.noarch                                                                                 3/3 

     
    Installed:

      snap-confine-2.42.2-1.fc31.x86_64               snapd-2.42.2-1.fc31.x86_64               snapd-selinux-2.42.2-1.fc31.noarch              

     
    Complete!

    [dave@localhost ~]$ sudo ln -s /var/lib/snapd/snap /snap

    [dave@localhost ~]$ ls -l /snap

    lrwxrwxrwx. 1 root root 19 Feb  8 07:46 /snap -> /var/lib/snapd/snap

    [dave@localhost ~]$ snap version

    snap    2.42.2-1.fc31

    snapd   unavailable

    series  -

     


    Start snap service

    [dave@localhost ~]$ snap search vlc

    error: cannot list snaps: cannot communicate with server: Get http://localhost/v2/find?q=vlc&scope=wide: dial unix /run/snapd.socket: connect: no such file or directory

    [dave@localhost ~]$ sudo systemctl start snapd.service

    [dave@localhost ~]$ snap search vlc

    Name            Version                 Publisher  Notes  Summary

    vlc             3.0.8                   videolan✓  -      The ultimate media player

    mjpg-streamer   2.0                     ogra       -      UVC webcam streaming tool

    audio-recorder  3.0.5+rev1432+pkg-7b07  brlin      -      A free audio-recorder for Linux (EXTREMELY BUGGY)

    dav1d           0.5.1-20-g52c7427       videolan✓  -      AV1 decoder from VideoLAN

    peerflix        v0.39.0+git1.df28e20    pmagill    -      Streaming torrent client for Node.js

    [dave@localhost ~]$ snap search microk8s

    Name      Version  Publisher   Notes    Summary

    microk8s  v1.17.2  canonical✓  classic  Kubernetes for workstations and appliances

     


    Install microk8s


    [dave@localhost ~]$ sudo snap install microk8s --classic

    2020-02-08T07:51:58+01:00 INFO Waiting for restart...

    Warning: /var/lib/snapd/snap/bin was not found in your $PATH. If you've not restarted your session

             since you installed snapd, try doing that. Please see https://forum.snapcraft.io/t/9469

             for more details.

     
    microk8s v1.17.2 from Canonical✓ installed

     


    List services

    [dave@localhost ~]$ snap list

    Name      Version    Rev   Tracking  Publisher   Notes

    core      16-2.42.5  8268  stable    canonical✓  core

    microk8s  v1.17.2    1173  stable    canonical✓  classic

    [dave@localhost ~]$ snap services 

    Service                             Startup  Current  Notes

    microk8s.daemon-apiserver           enabled  active   -

    microk8s.daemon-apiserver-kicker    enabled  active   -

    microk8s.daemon-cluster-agent       enabled  active   -

    microk8s.daemon-containerd          enabled  active   -

    microk8s.daemon-controller-manager  enabled  active   -

    microk8s.daemon-etcd                enabled  active   -

    microk8s.daemon-flanneld            enabled  active   -

    microk8s.daemon-kubelet             enabled  active   -

    microk8s.daemon-proxy               enabled  active   -

    microk8s.daemon-scheduler           enabled  active   -

     


    Add /snap/bin to the sudo secure_path

    [root@localhost ~]# visudo

    [root@localhost ~]# grep snap  /etc/sudoers

    Defaults    secure_path = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin

     


    [dave@localhost ~]$ sudo microk8s.status --wait-ready

    [sudo] password for dave: 

    microk8s is running

    addons:

    cilium: disabled

    dashboard: disabled

    dns: disabled

    fluentd: disabled

    gpu: disabled

    helm3: disabled

    helm: disabled

    ingress: disabled

    istio: disabled

    jaeger: disabled

    juju: disabled

    knative: disabled

    kubeflow: disabled

    linkerd: disabled

    metallb: disabled

    metrics-server: disabled

    prometheus: disabled

    rbac: disabled

    registry: disabled

    storage: disabled

     


    Turn on standard services


    [dave@localhost ~]$ sudo microk8s.enable dns dashboard registry

    Enabling DNS

    Applying manifest

    serviceaccount/coredns created

    configmap/coredns created

    deployment.apps/coredns created

    service/kube-dns created

    clusterrole.rbac.authorization.k8s.io/coredns created

    clusterrolebinding.rbac.authorization.k8s.io/coredns created

    Restarting kubelet

    DNS is enabled

    Applying manifest

    serviceaccount/kubernetes-dashboard created

    service/kubernetes-dashboard created

    secret/kubernetes-dashboard-certs created

    secret/kubernetes-dashboard-csrf created

    secret/kubernetes-dashboard-key-holder created

    configmap/kubernetes-dashboard-settings created

    role.rbac.authorization.k8s.io/kubernetes-dashboard created

    clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created

    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

    clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

    deployment.apps/kubernetes-dashboard created

    service/dashboard-metrics-scraper created

    deployment.apps/dashboard-metrics-scraper created

    service/monitoring-grafana created

    service/monitoring-influxdb created

    service/heapster created

    deployment.apps/monitoring-influxdb-grafana-v4 created

    serviceaccount/heapster created

    clusterrolebinding.rbac.authorization.k8s.io/heapster created

    configmap/heapster-config created

    configmap/eventer-config created

    deployment.apps/heapster-v1.5.2 created

     
    If RBAC is not enabled access the dashboard using the default token retrieved with:

     
    token=$(microk8s.kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)

    microk8s.kubectl -n kube-system describe secret $token

     
    In an RBAC enabled setup (microk8s.enable RBAC) you need to create a user with restricted

    permissions as shown in:

    https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

     
    Enabling the private registry

    Enabling default storage class

    deployment.apps/hostpath-provisioner created

    storageclass.storage.k8s.io/microk8s-hostpath created

    serviceaccount/microk8s-hostpath created

    clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created

    clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created

    Storage will be available soon

    Applying registry manifest

    namespace/container-registry created

    persistentvolumeclaim/registry-claim created

    deployment.apps/registry created

    service/registry created

    The registry is enabled

     


    List available addons

    [dave@localhost ~]$ microk8s.enable --help

    Usage: microk8s.enable ADDON...

    Enable one or more ADDON included with microk8s

    Example: microk8s.enable dns storage

     
    Available addons:

     
      cilium

      dashboard

      dns

      fluentd

      gpu

      helm

      helm3

      ingress

      istio

      jaeger

      juju

      knative

      kubeflow

      linkerd

      metallb

      metrics-server

      prometheus

      rbac

      registry

      storage

     

    Add the user to the microk8s group

    [dave@localhost ~]$ microk8s.status --wait-ready

    Insufficient permissions to access MicroK8s.

    You can either try again with sudo or add the user dave to the 'microk8s' group:

     
        sudo usermod -a -G microk8s dave

     
    The new group will be available on the user's next login.

     

    MicroK8s - deploy sample app


    MicroK8s quick start guide 



    Get nodes

    [dave@localhost ~]$ microk8s.kubectl get nodes

    NAME                    STATUS   ROLES    AGE   VERSION

    localhost.localdomain   Ready    <none>   24h   v1.17.2

     


    Get services

    [dave@localhost ~]$ microk8s.kubectl get services

    NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE

    kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   24h

     


    [dave@localhost ~]$ microk8s.kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1

    deployment.apps/kubernetes-bootcamp created

     

    [dave@localhost ~]$ microk8s.kubectl get pods

    NAME                                   READY   STATUS    RESTARTS   AGE

    kubernetes-bootcamp-69fbc6f4cf-f2vvk   1/1     Running   0          41s

     

    Get deployments

    [dave@localhost ~]$ kubectl get deployments

    NAME                  READY   UP-TO-DATE   AVAILABLE   AGE

    kubernetes-bootcamp   1/1     1            1           7m12s

     


    Get pod name

    [dave@localhost ~]$ export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')

    [dave@localhost ~]$ echo Name of the Pod: $POD_NAME

    Name of the Pod: kubernetes-bootcamp-69fbc6f4cf-f2vvk
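Note that the go-template above prints every pod name in the namespace, which only works here because a single pod is running. With more pods, a label selector plus jsonpath pins down the one you want (a sketch using the deployment's `app` label):

```shell
# Select only the bootcamp pods and take the first one's name.
export POD_NAME=$(kubectl get pods -l app=kubernetes-bootcamp \
  -o jsonpath='{.items[0].metadata.name}')
echo Name of the Pod: $POD_NAME
```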

     


    Describe pods
    [dave@localhost ~]$ kubectl describe pods

    Name:         kubernetes-bootcamp-69fbc6f4cf-f2vvk

    Namespace:    default

    Priority:     0

    Node:         localhost.localdomain/192.168.0.116

    Start Time:   Sun, 09 Feb 2020 08:53:15 +0100

    Labels:       app=kubernetes-bootcamp

                  pod-template-hash=69fbc6f4cf

    Annotations:  <none>

    Status:       Running

    IP:           10.1.30.34

    IPs:

      IP:           10.1.30.34

    Controlled By:  ReplicaSet/kubernetes-bootcamp-69fbc6f4cf

    Containers:

      kubernetes-bootcamp:

        Container ID:   containerd://e1c171143d299f88fb686f83e2fa2aae1cbe59e14cff53f4f332c2ccc2fb3f2e

        Image:          gcr.io/google-samples/kubernetes-bootcamp:v1

        Image ID:       gcr.io/google-samples/kubernetes-bootcamp@sha256:0d6b8ee63bb57c5f5b6156f446b3bc3b3c143d233037f3a2f00e279c8fcc64af

        Port:           <none>

        Host Port:      <none>

        State:          Running

          Started:      Sun, 09 Feb 2020 08:53:26 +0100

        Ready:          True

        Restart Count:  0

        Environment:    <none>

        Mounts:

          /var/run/secrets/kubernetes.io/serviceaccount from default-token-msp2q (ro)

    Conditions:

      Type              Status

      Initialized       True 

      Ready             True 

      ContainersReady   True 

      PodScheduled      True 

    Volumes:

      default-token-msp2q:

        Type:        Secret (a volume populated by a Secret)

        SecretName:  default-token-msp2q

        Optional:    false

    QoS Class:       BestEffort

    Node-Selectors:  <none>

    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s

                     node.kubernetes.io/unreachable:NoExecute for 300s

    Events:

      Type    Reason     Age   From                            Message

      ----    ------     ----  ----                            -------

      Normal  Scheduled  11m   default-scheduler               Successfully assigned default/kubernetes-bootcamp-69fbc6f4cf-f2vvk to localhost.localdomain

      Normal  Pulling    11m   kubelet, localhost.localdomain  Pulling image "gcr.io/google-samples/kubernetes-bootcamp:v1"

      Normal  Pulled     11m   kubelet, localhost.localdomain  Successfully pulled image "gcr.io/google-samples/kubernetes-bootcamp:v1"

      Normal  Created    11m   kubelet, localhost.localdomain  Created container kubernetes-bootcamp

      Normal  Started    11m   kubelet, localhost.localdomain  Started container kubernetes-bootcamp

     


    Explore introduction https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/

    Get logs from pod

    [dave@localhost ~]$ kubectl logs $POD_NAME

    Kubernetes Bootcamp App Started At: 2020-02-09T07:53:26.300Z | Running On:  kubernetes-bootcamp-69fbc6f4cf-f2vvk 

     
     


    Exec commands on pod

    [dave@localhost ~]$ kubectl exec $POD_NAME env

    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

    HOSTNAME=kubernetes-bootcamp-69fbc6f4cf-f2vvk

    NPM_CONFIG_LOGLEVEL=info

    NODE_VERSION=6.3.1

    KUBERNETES_PORT=tcp://10.152.183.1:443

    KUBERNETES_PORT_443_TCP=tcp://10.152.183.1:443

    KUBERNETES_PORT_443_TCP_PROTO=tcp

    KUBERNETES_PORT_443_TCP_PORT=443

    KUBERNETES_PORT_443_TCP_ADDR=10.152.183.1

    KUBERNETES_SERVICE_HOST=10.152.183.1

    KUBERNETES_SERVICE_PORT=443

    KUBERNETES_SERVICE_PORT_HTTPS=443

    HOME=/root

     


    Run bash in pod
    [dave@localhost ~]$ kubectl exec -ti $POD_NAME bash

    root@kubernetes-bootcamp-69fbc6f4cf-f2vvk:/# cat server.js

    var http = require('http');

    var requests=0;

    var podname= process.env.HOSTNAME;

    var startTime;

    var host;

    var handleRequest = function(request, response) {

      response.setHeader('Content-Type', 'text/plain');

      response.writeHead(200);

      response.write("Hello Kubernetes bootcamp! | Running on: ");

      response.write(host);

      response.end(" | v=1\n");

      console.log("Running On:" ,host, "| Total Requests:", ++requests,"| App Uptime:", (new Date() - startTime)/1000 , "seconds", "| Log Time:",new Date());

    }

    var www = http.createServer(handleRequest);

    www.listen(8080,function () {

        startTime = new Date();;

        host = process.env.HOSTNAME;

        console.log ("Kubernetes Bootcamp App Started At:",startTime, "| Running On: " ,host, "\n" );

    });

    root@kubernetes-bootcamp-69fbc6f4cf-f2vvk:/# curl localhost:8080

    Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-69fbc6f4cf-f2vvk | v=1
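To reach the app from outside the pod, the deployment can be exposed as a NodePort service, following the upstream kubernetes-bootcamp tutorial (a sketch — the assigned node port is random in the 30000-32767 range):

```shell
# Expose port 8080 of the deployment on a node port.
microk8s.kubectl expose deployment kubernetes-bootcamp --type=NodePort --port=8080

# Look up the assigned node port and hit the service.
NODE_PORT=$(microk8s.kubectl get service kubernetes-bootcamp \
  -o jsonpath='{.spec.ports[0].nodePort}')
curl localhost:$NODE_PORT
```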

     





    Install Python 3 on CentOS 7

To avoid impacting CentOS's own Python 2 packages, use a Python 3 venv.




    On Fedora 31, Python 3 is the default

     
    [dave@localhost ~]$ python3

    Python 3.7.6 (default, Jan 30 2020, 09:44:41) 

    [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)] on linux

    Type "help", "copyright", "credits" or "license" for more information.

    >>> 

    [dave@localhost ~]$ python2

    Python 2.7.17 (default, Oct 20 2019, 00:00:00) 

    [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)] on linux2

    Type "help", "copyright", "credits" or "license" for more information.

    >>> 

    [dave@localhost ~]$ python

    Python 3.7.6 (default, Jan 30 2020, 09:44:41) 

    [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)] on linux

    Type "help", "copyright", "credits" or "license" for more information.

    >>> 

     

    Fedora - convert DVD


     RPM Fusion


    These tools require non-free packages, so enable RPM Fusion first:

    https://rpmfusion.org/Configuration/

     Install non-free packages for DVD


    sudo dnf install rpmfusion-free-release-tainted

    sudo dnf install libdvdcss
     Install various packages for sound and video
     

    sudo dnf groupupdate multimedia --setop="install_weak_deps=False" --exclude=PackageKit-gstreamer-plugin

    sudo dnf groupupdate sound-and-video

      Whole DVD backup


      sudo dnf install dvdbackup 

      ============================================================================================================================================================
       Package                              Architecture                      Version                                     Repository                         Size
      ============================================================================================================================================================
      Installing:
       dvdbackup                            x86_64                            0.4.2-13.fc31                               fedora                             62 k



      EXAMPLES

             dvdbackup -I
                    Gathers information about the DVD. /dev/dvd is the default device tried - you need to use -i if your device name is different.

             dvdbackup -M
                    Backs up the whole DVD. This action creates a valid DVD-Video structure that can be burned to a DVD-/+R(W) with the help of genisoimage.

             dvdbackup -F
                    Backs up the main feature of the DVD. This action creates a valid DVD-Video structure of the feature title set. Note that this will not result in an image immediately watchable - you will need another program like dvdauthor to help construct the IFO files.

                    dvdbackup defaults to the 16:9 version of the main feature if a 4:3 version is also present on the DVD. To get the 4:3 version use -a 0.

                    dvdbackup does its best to make an intelligent guess about what the main feature of the DVD is - in case it fails please send a bug report.

       

       

       Using VLC

      https://linuxconfig.org/how-to-rip-dvds-with-vlc#h3-how-to-rip-a-dvd-to-your-hard-drive-with-vlc

       Using Handbrake


      https://linuxconfig.org/how-to-convert-video-formats-on-linux


      sudo dnf install handbrake


      ================================================================================

       Package        Arch        Version           Repository                   Size

      ================================================================================

      Installing:

       HandBrake      x86_64      1.3.1-1.fc31      rpmfusion-free-updates      434 k


      sudo dnf install handbrake-gui


      ================================================================================

       Package           Arch       Version          Repository                  Size

      ================================================================================

      Installing:

       HandBrake-gui     x86_64     1.3.1-1.fc31     rpmfusion-free-updates     3.5 M


      Subtitle edit
      https://github.com/SubtitleEdit/subtitleedit/releases

      Install Oracle WebLogic Server 14.1.1.0.0 on Fedora Linux

      Download
      https://www.oracle.com/middleware/technologies/fusionmiddleware-downloads.html

      Documentation
      https://docs.oracle.com/en/middleware/standalone/weblogic-server/14.1.1.0/index.html

      Weblogic JDK 11 Certification

      Oracle WebLogic Server 14c (14.1.1.0.0) is certified for use with JDK 11.

      Supported configurations https://www.oracle.com/middleware/technologies/fusion-certification.html


      Oracle JDK 11 download https://www.oracle.com/java/technologies/javase-jdk11-downloads.html

      Install Oracle JDK 11 via rpm (dnf fails due to missing dependencies)

      [dave@dave Downloads]$ sudo rpm -ivh  jdk-11.0.6_linux-x64_bin.rpm

      warning: jdk-11.0.6_linux-x64_bin.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY

      Verifying...                          ################################# [100%]

      Preparing...                          ################################# [100%]

      Updating / installing...

         1:jdk-11.0.6-2000:11.0.6-ga        ################################# [100%]

       

      Change Java version for system

      sudo alternatives --config java

      There are 3 programs which provide 'java'.

      Selection Command
      -----------------------------------------------
      1 java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.242.b08-0.fc31.x86_64/jre/bin/java)
      + 2 /usr/java/jdk1.8.0_231-amd64/jre/bin/java
      * 3 /usr/java/jdk-11.0.6/bin/java

      Enter to keep the current selection[+], or type selection number: 3


      Verify Java version

      $ java -version
      java version "11.0.6" 2020-01-14 LTS
      Java(TM) SE Runtime Environment 18.9 (build 11.0.6+8-LTS)
      Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.6+8-LTS, mixed mode)
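Since WebLogic 14c is certified for JDK 11, a setup script may want to check the active version programmatically. A small illustrative helper that parses a `java -version` banner like the one above (the function name is my own, not an Oracle tool):

```python
import re

def jdk_feature(banner: str) -> int:
    """Feature release from a `java -version` banner: '1.8.0_231' -> 8, '11.0.6' -> 11."""
    m = re.search(r'version "(\d+)\.(\d+)', banner)
    major, minor = int(m.group(1)), int(m.group(2))
    # Pre-JDK-9 banners use the legacy "1.x" scheme.
    return minor if major == 1 else major

print(jdk_feature('java version "11.0.6" 2020-01-14 LTS'))  # 11
```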


      Install Weblogic 14.1


      $ java -jar fmw_14.1.1.0.0_wls.jar  
      Launcher log file is /tmp/OraInstall2020-04-09_10-35-56AM/launcher2020-04-09_10-35-56AM.log.
      Extracting the installer . . . . . Done
      Checking if CPU speed is above 300 MHz.   Actual 2754.936 MHz    Passed
      Checking monitor: must be configured to display at least 256 colors.   Actual 16777216    Passed
      Checking swap space: must be greater than 512 MB.   Actual 8015 MB    Passed
      Checking temp space: must be greater than 300 MB.   Actual 7027 MB    Passed
      Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2020-04-09_10-35-56AM
       


      Start the domain

      Starting WLS with line:

      /usr/java/jdk-11.0.6/bin/java -server   -Xms256m -Xmx512m -XX:CompileThreshold=8000 -cp /app/weblogic/wlserver/server/lib/weblogic-launcher.jar -Dlaunch.use.env.classpath=true -Dweblogic.Name=AdminServer -Djava.security.policy=/app/weblogic/wlserver/server/lib/weblogic.policy  -Djava.system.class.loader=com.oracle.classloader.weblogic.LaunchClassLoader  -javaagent:/app/weblogic/wlserver/server/lib/debugpatch-agent.jar -da -Dwls.home=/app/weblogic/wlserver/server -Dweblogic.home=/app/weblogic/wlserver/server      weblogic.Server

       

      AWS Lambda using Python

      AWS HOWTO
      https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html


      AWS repository
      https://github.com/awsdocs/aws-lambda-developer-guide


      Check prerequisites

      [dave@dave ~]$ aws --version

      aws-cli/2.0.1 Python/3.7.3 Linux/5.5.16-200.fc31.x86_64 botocore/2.0.0dev5

      [dave@dave ~]$ python --version

      Python 3.7.6

       



      Python Lambda

      import json

      def lambda_handler(event, context):
          # TODO implement
          return {
              'statusCode': 200,
              'body': json.dumps('Hello from Lambda!')
          }
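The handler is a plain function, so it can be exercised locally before deploying. AWS passes a JSON-derived event dict and a context object; neither is used by this handler, so placeholders suffice:

```python
import json

def lambda_handler(event, context):
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

# Local invocation with a dummy event and no context.
result = lambda_handler({}, None)
print(result['statusCode'], json.loads(result['body']))  # 200 Hello from Lambda!
```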



      Get the Lambda function via the AWS CLI

      [dave@dave aws-lambda-developer-guide]$  aws lambda get-function --function-name dave-test-function

      {

          "Configuration": {

              "FunctionName": "dave-test-function",

              "FunctionArn": "arn:aws:lambda:eu-central-1:45454545454:function:dave-test-function",

              "Runtime": "python3.8",

              "Role": "arn:aws:iam::45454545:role/dave-lambda-role-test",

              "Handler": "lambda_function.lambda_handler",

              "CodeSize": 299,

              "Description": "",

              "Timeout": 3,

              "MemorySize": 128,

              "LastModified": "2020-04-26T07:53:06.901+0000",

              "CodeSha256": "45454545454545",

              "Version": "$LATEST",

              "TracingConfig": {

                  "Mode": "PassThrough"

              },

              "RevisionId": "454545454545",

              "State": "Active",

              "LastUpdateStatus": "Successful"

          },

          "Code": {

              "RepositoryType": "S3",

              "Location": "https://awslambda-eu-cent-1-tasks.s3.eu-central-1.amazonaws.com/snapshots/454534345656/dave-test-function-456544562342423424"

          }

      }
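Since the CLI prints JSON, the response above is easy to consume from a script. An illustrative snippet using a trimmed-down copy of that output (the placeholder account IDs are from the example, not real):

```python
import json

# Trimmed-down copy of the get-function response shown above.
raw = """{
  "Configuration": {
    "FunctionName": "dave-test-function",
    "Runtime": "python3.8",
    "Timeout": 3,
    "MemorySize": 128
  },
  "Code": {"RepositoryType": "S3"}
}"""

cfg = json.loads(raw)["Configuration"]
print(cfg["Runtime"], cfg["MemorySize"])  # python3.8 128
```

The CLI can also extract single fields itself with its JMESPath option, e.g. `aws lambda get-function --function-name dave-test-function --query Configuration.Runtime`.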

       

      Access Weblogic log information through Elasticsearch and Kibana


      Elastic

      Kibana


      WebLogic Logging Exporter 

       https://github.com/oracle/weblogic-logging-exporter


      Install WebLogic Logging Exporter 



      Copy JAR to domain
      cp ~/Downloads/weblogic-logging-exporter-1.0.0.jar /app/domains/base_domain/

      Start Weblogic domain
      [dave@dave ~]$ cd /app/domains/base_domain/
      [dave@dave base_domain]$ ./startWebLogic.sh

      Access Weblogic console via browser at
      http://localhost:7001/console/
      Add a startup class: in the Administration Console, navigate to "Environment" and then "Startup and Shutdown Classes" in the main menu, and add a new startup class. You may choose any descriptive name, but the class name must be weblogic.logging.exporter.Startup. Target the startup class to each server that you want to export logs from.
      base_domain]$ more config/config.xml

      <startup-class>
      <name>StartupClass-weblogic.logging.exporter</name>
      <target>AdminServer</target>
      <class-name>weblogic.logging.exporter.Startup</class-name>
      </startup-class>

      Download org.yaml:snakeyaml https://search.maven.org/artifact/org.yaml/snakeyaml/1.27/bundle
       cp ~/Downloads/snakeyaml-1.27.jar /app/domains/base_domain/

      Add CLASSPATH into bin/setDomainEnv.sh
      ls snakeyaml-1.27.jar 
      snakeyaml-1.27.jar
      [dave@dave base_domain]$ grep snake bin/setDomainEnv.sh
      export CLASSPATH=$DOMAIN_HOME/weblogic-logging-exporter-1.0.0.jar:$DOMAIN_HOME/snakeyaml-1.27.jar:$CLASSPATH

      Add WebLogic Logging Exporter config file
      vi config/WebLogicLoggingExporter.yaml

      more config/WebLogicLoggingExporter.yaml
      publishHost: localhost
      publishPort: 9200
      domainUID: base_domain
      weblogicLoggingExporterEnabled: true
      weblogicLoggingIndexName: base_domain_wls
      weblogicLoggingExporterSeverity: Notice
      weblogicLoggingExporterBulkSize: 1
      weblogicLoggingExporterFilters:
      - filterExpression: 'severity > Warning'
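The exporter reads this file with snakeyaml at startup. Purely as an illustration (not how the exporter itself parses it), the flat key/value part of such a file can be sanity-checked from Python without extra dependencies:

```python
def load_flat_yaml(text):
    """Naive reader for flat `key: value` YAML lines; ignores nested/list entries."""
    cfg = {}
    for line in text.splitlines():
        if ":" in line and not line.startswith(("-", " ")):
            key, _, value = line.partition(":")
            cfg[key.strip()] = value.strip()
    return cfg

# Flat subset of the WebLogicLoggingExporter.yaml shown above.
sample = """publishHost: localhost
publishPort: 9200
weblogicLoggingExporterSeverity: Notice"""

print(load_flat_yaml(sample)["publishPort"])  # 9200
```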

      Install Elasticsearch with Docker https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
      docker pull docker.elastic.co/elasticsearch/elasticsearch:7.9.2
      7.9.2: Pulling from elasticsearch/elasticsearch
      f1feca467797: Pull complete
      2b669da077a4: Pull complete
      e5b4c466fc6d: Pull complete
      3b660c013f1a: Pull complete
      0e7ad1133ad1: Pull complete
      b50d6e48f432: Pull complete
      bff3705905f9: Pull complete
      9509765886ad: Pull complete
      b7f06f509306: Pull complete
      Digest: sha256:2be3302537236874fdeca184c78a49aed17d5aca0f8fc3f6192a80e93e817cb4
      Status: Downloaded newer image for docker.elastic.co/elasticsearch/elasticsearch:7.9.2
      docker.elastic.co/elasticsearch/elasticsearch:7.9.2

      Run Docker Elastic


      docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.9.2


      Install Kibana https://www.elastic.co/guide/en/kibana/current/docker.html
      docker pull docker.elastic.co/kibana/kibana:7.9.2
      Start dev Kibana
      docker run -d --name kibana -p 5601:5601 --link elasticsearch:elasticsearch -e "ELASTICSEARCH_URL=http://elasticsearch:9200" docker.elastic.co/kibana/kibana:7.9.2

      Docker containers
       docker ps
      CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
      8ba52c43304f docker.elastic.co/kibana/kibana:7.9.2 "/usr/local/bin/dumb…" 4 seconds ago Up 3 seconds 0.0.0.0:5601->5601/tcp kibana
      9fdc569676e0 docker.elastic.co/elasticsearch/elasticsearch:7.9.2 "/tini -- /usr/local…" 6 minutes ago Up 6 minutes 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp elasticsearch

      Weblogic start log when Elastic cannot be reached
      ======================= Weblogic Logging Exporter Startup class called
      Reading configuration from file name: /app/domains/base_domain/config/WebLogicLoggingExporter.yaml
      <Sep 26, 2020 10:24:49,146 AM CEST> <Notice> <Log Management> <BEA-170027> <The server has successfully established a connection with the Domain level Diagnostic Service.>
      Config{weblogicLoggingIndexName='domain1-wls', publishHost='localhost', publishPort=9200, weblogicLoggingExporterSeverity='Notice', weblogicLoggingExporterBulkSize='1', enabled=true, weblogicLoggingExporterFilters=[FilterConfig{expression='null', servers=[]}], domainUID='domain1'}
      javax.ws.rs.ProcessingException: java.net.ConnectException: Tried all: '2' addresses, but could not connect over HTTP to server: 'localhost', port: '9200'
      failed reasons:
      [0] address:'localhost/127.0.0.1',port:'9200' : java.net.ConnectException: Connection refused
      [1] address:'localhost/0:0:0:0:0:0:0:1',port:'9200' : java.net.ConnectException: Connection refused

      at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:260)
      at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:254)
      at org.glassfish.jersey.client.JerseyInvocation.lambda$invoke$0(JerseyInvocation.java:729)
      at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
      at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
      at org.glassfish.jersey.internal.Errors.process(Errors.java:205)
      at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:390)
      at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:728)
      at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:421)
      at org.glassfish.jersey.client.JerseyInvocation$Builder.put(JerseyInvocation.java:310)
      at weblogic.logging.exporter.LogExportHandler.executePutOrPostOnUrl(LogExportHandler.java:170)
      at weblogic.logging.exporter.LogExportHandler.createMappings(LogExportHandler.java:300)
      at weblogic.logging.exporter.LogExportHandler.<init>(LogExportHandler.java:56)
      at weblogic.logging.exporter.Startup.main(Startup.java:37)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:498)
      at weblogic.management.deploy.classdeployment.ClassDeploymentManager.invokeMain(ClassDeploymentManager.java:449)
      at weblogic.management.deploy.classdeployment.ClassDeploymentManager.invokeClass(ClassDeploymentManager.java:359)
      at weblogic.management.deploy.classdeployment.ClassDeploymentManager.access$100(ClassDeploymentManager.java:63)
      at weblogic.management.deploy.classdeployment.ClassDeploymentManager$1.run(ClassDeploymentManager.java:286)
      at weblogic.management.deploy.classdeployment.ClassDeploymentManager$1.run(ClassDeploymentManager.java:273)
      at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:344)
      at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
      at weblogic.management.deploy.classdeployment.ClassDeploymentManager.invokeClassDeployment(ClassDeploymentManager.java:272)
      at weblogic.management.deploy.classdeployment.ClassDeploymentManager.invokeClassDeployments(ClassDeploymentManager.java:253)
      at weblogic.management.deploy.classdeployment.ClassDeploymentManager.runStartupsAfterAppAdminState(ClassDeploymentManager.java:215)
      at weblogic.management.deploy.classdeployment.StartupClassPrelistenService.start(StartupClassPrelistenService.java:29)
      at weblogic.server.AbstractServerService.postConstruct(AbstractServerService.java:76)
      at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:498)
      at org.glassfish.hk2.utilities.reflection.ReflectionHelper.invoke(ReflectionHelper.java:1268)
      at org.jvnet.hk2.internal.ClazzCreator.postConstructMe(ClazzCreator.java:309)
      at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:351)
      at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:463)
      at org.glassfish.hk2.runlevel.internal.AsyncRunLevelContext.findOrCreate(AsyncRunLevelContext.java:281)
      at org.glassfish.hk2.runlevel.RunLevelContext.findOrCreate(RunLevelContext.java:65)
      at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2102)
      at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:93)
      at org.jvnet.hk2.internal.ServiceLocatorImpl.getService(ServiceLocatorImpl.java:678)
      at org.jvnet.hk2.internal.ThreeThirtyResolver.resolve(ThreeThirtyResolver.java:54)
      at org.jvnet.hk2.internal.ClazzCreator.resolve(ClazzCreator.java:188)
      at org.jvnet.hk2.internal.ClazzCreator.resolveAllDependencies(ClazzCreator.java:211)
      at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:334)
      at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:463)
      at org.glassfish.hk2.runlevel.internal.AsyncRunLevelContext.findOrCreate(AsyncRunLevelContext.java:281)
      at org.glassfish.hk2.runlevel.RunLevelContext.findOrCreate(RunLevelContext.java:65)
      at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2102)
      at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:93)
      at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:67)
      at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.oneJob(CurrentTaskFuture.java:1213)
      at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.run(CurrentTaskFuture.java:1144)
      at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:677)
      at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
      at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
      at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
      at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
      at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:651)
      at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420)
      at weblogic.work.ExecuteThread.run(ExecuteThread.java:360)
      Caused by: java.net.ConnectException: Tried all: '2' addresses, but could not connect over HTTP to server: 'localhost', port: '9200'
      failed reasons:
      [0] address:'localhost/127.0.0.1',port:'9200' : java.net.ConnectException: Connection refused
      [1] address:'localhost/0:0:0:0:0:0:0:1',port:'9200' : java.net.ConnectException: Connection refused

      at weblogic.net.http.HttpClient.openServer(HttpClient.java:408)
      at weblogic.net.http.HttpClient.openServer(HttpClient.java:511)
      at weblogic.net.http.HttpClient.New(HttpClient.java:313)
      at weblogic.net.http.HttpClient.New(HttpClient.java:292)
      at weblogic.net.http.HttpURLConnection.connect(HttpURLConnection.java:295)
      at weblogic.net.http.HttpURLConnection.getInputStream(HttpURLConnection.java:685)
      at weblogic.net.http.SOAPHttpURLConnection.getInputStream(SOAPHttpURLConnection.java:42)
      at weblogic.net.http.HttpURLConnection.getResponseCode(HttpURLConnection.java:1546)
      at org.glassfish.jersey.client.internal.HttpUrlConnector._apply(HttpUrlConnector.java:366)
      at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:258)

      WebLogic starts cleanly with Elasticsearch running in Docker
      ======================= Weblogic Logging Exporter Startup class called
      Reading configuration from file name: /app/domains/base_domain/config/WebLogicLoggingExporter.yaml
      <Sep 26, 2020 10:35:23,780 AM CEST> <Notice> <Log Management> <BEA-170027> <The server has successfully established a connection with the Domain level Diagnostic Service.>
      Config{weblogicLoggingIndexName='domain1-wls', publishHost='localhost', publishPort=9200, weblogicLoggingExporterSeverity='Notice', weblogicLoggingExporterBulkSize='1', enabled=true, weblogicLoggingExporterFilters=[FilterConfig{expression='null', servers=[]}], domainUID='domain1'}

      [dave@dave base_domain]$ curl "localhost:9200/_cat/indices?v"
      health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
      yellow open domain1-wls SQqMMWxgTCi6nhWyDSmeyg 1 1 25 0 39.6kb 39.6kb
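The `_cat/indices?v` output is whitespace-delimited, so a quick health check of the exporter's index can be scripted. Illustrative only, using the sample row from the output above rather than a live query:

```python
# Header and sample row copied from the `_cat/indices?v` output above.
header = "health status index uuid pri rep docs.count docs.deleted store.size pri.store.size"
row = "yellow open domain1-wls SQqMMWxgTCi6nhWyDSmeyg 1 1 25 0 39.6kb 39.6kb"

# Pair each column name with its value for the index row.
index_info = dict(zip(header.split(), row.split()))
print(index_info["index"], index_info["docs.count"])  # domain1-wls 25
```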


      Verify that logs were posted


      Running Elasticsearch and Kibana locally for testing



      Kibana



      Access Kibana at http://localhost:5601/ 


      Getting "Kibana server is not ready yet"



      Used the newest version (7.9.2) - but Elasticsearch 7 removed mapping types, so the mapping the exporter creates (root type 'doc') is rejected:
      Caused by: org.elasticsearch.index.mapper.MapperParsingException: Root mapping definition has unsupported parameters:  [doc : {properties={severity={type=keyword}, sequenceNumber={type=keyword}, subSystem={type=keyword}, level={type=keyword}, serverName={type=keyword}, messageID={type=keyword}, domainUID={type=keyword}, userId={type=keyword}, threadName={type=keyword}, machineName={type=keyword}, transactionId={type=keyword}, loggerName={type=keyword}, timestamp={type=date}}}]",
      "at org.elasticsearch.index.mapper.DocumentMapperParser.checkNoRemainingFields(DocumentMapperParser.java:148) ~[elasticsearch-7.9.2.jar:7.9.2]",
      "at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:136) ~[elasticsearch-7.9.2.jar:7.9.2]",
      "at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:88) ~[elasticsearch-7.9.2.jar:7.9.2]",
      "at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:421) ~[elasticsearch-7.9.2.jar:7.9.2]",
      "... 23 more"] }

      Older version 6.2.2
      docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.2
      d8308574a82fc0f36e43c71846db51c7e26e567c53927c207730cde61b2512f5

      docker run -d --name kibana -p 5601:5601 --link elasticsearch:elasticsearch -e "ELASTICSEARCH_URL=http://elasticsearch:9200" docker.elastic.co/kibana/kibana:6.2.2
      ba259c96db5564c02e80116820bb78db65fd2c8919e33449d3c0ed3a6da4fb5a

      Running
       docker ps
      CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
      ba259c96db55 docker.elastic.co/kibana/kibana:6.2.2 "/bin/bash /usr/loca…" 34 seconds ago Up 34 seconds 0.0.0.0:5601->5601/tcp kibana
      d8308574a82f docker.elastic.co/elasticsearch/elasticsearch:6.2.2 "/usr/local/bin/dock…" About a minute ago Up About a minute 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp elasticsearch

      Weblogic log
      ======================= Weblogic Logging Exporter Startup class called
      Reading configuration from file name: /app/domains/base_domain/config/WebLogicLoggingExporter.yaml
      <Sep 26, 2020 10:53:58,858 AM CEST> <Notice> <Log Management> <BEA-170027> <The server has successfully established a connection with the Domain level Diagnostic Service.>
      Config{weblogicLoggingIndexName='domain1-wls', publishHost='localhost', publishPort=9200, weblogicLoggingExporterSeverity='Notice', weblogicLoggingExporterBulkSize='1', enabled=true, weblogicLoggingExporterFilters=[FilterConfig{expression='null', servers=[]}], domainUID='domain1'}

      Elasticsearch Kibana errors
      Caused by: org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
      at org.elasticsearch.cluster.block.ClusterBlocks.indexBlockedException(ClusterBlocks.java:182) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.handleBlockExceptions(TransportReplicationAction.java:813) ~[elasticsearch-6.2.2.jar:6.2.2]

      [2020-09-26T09:03:43,980][WARN ][o.e.x.m.e.l.LocalExporter] unexpected error while indexing monitoring document
      org.elasticsearch.xpack.monitoring.exporter.ExportException: ClusterBlockException[blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]
      at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:140) ~[?:?]
      at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_161]
      at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) ~[?:1.8.0_161]
      at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:1.8.0_161]
      at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_161]
      at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_161]
      at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) ~[?:1.8.0_161]
      at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) ~[?:1.8.0_161]
      at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_161]
      at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) ~[?:1.8.0_161]
      at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:141) ~[?:?]
      at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:123) ~[?:?]
      at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:85) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:81) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.bulk.TransportBulkAction$BulkRequestModifier.lambda$wrapActionListenerIfNeeded$0(TransportBulkAction.java:571) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:380) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:375) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:91) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:909) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.handleBlockException(TransportReplicationAction.java:827) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.handleBlockExceptions(TransportReplicationAction.java:815) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:713) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:170) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:98) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:133) ~[?:?]
      at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:165) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:81) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.doRun(TransportBulkAction.java:350) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(TransportBulkAction.java:462) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:175) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.bulk.TransportBulkAction.lambda$processBulkIndexIngestRequest$4(TransportBulkAction.java:514) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.ingest.PipelineExecutionService$2.doRun(PipelineExecutionService.java:103) [elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) [elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.2.jar:6.2.2]
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
      at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
      Caused by: org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
      at org.elasticsearch.cluster.block.ClusterBlocks.indexBlockedException(ClusterBlocks.java:182) ~[elasticsearch-6.2.2.jar:6.2.2]
      at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.handleBlockExceptions(TransportReplicationAction.java:813) ~[elasticsearch-6.2.2.jar:6.2.2]
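The FORBIDDEN/12 "index read-only / allow delete" block above is typically Elasticsearch protecting itself after the disk flood-stage watermark was exceeded: it sets index.blocks.read_only_allow_delete on the affected indices. After freeing disk space, the flag must be reset via a settings PUT. A hedged sketch that builds that request (it assumes the local Docker instance on localhost:9200 and only sends when run directly):

```python
import json
import urllib.request

def clear_read_only_request(base_url="http://localhost:9200", index="_all"):
    # Setting the flag to null restores the default (writes allowed).
    body = json.dumps({"index.blocks.read_only_allow_delete": None}).encode()
    return urllib.request.Request(
        f"{base_url}/{index}/_settings",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

if __name__ == "__main__":
    # Requires a reachable Elasticsearch; not executed on import.
    with urllib.request.urlopen(clear_read_only_request()) as resp:
        print(resp.read().decode())
```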

docker images
REPOSITORY                                      TAG                IMAGE ID       CREATED         SIZE
<none>                                          <none>             233dc2c81588   14 hours ago    423MB
<none>                                          <none>             0aa94d9613a1   14 hours ago    615MB
oracle/serverjre                                8                  9e58399899c9   14 hours ago    359MB
docker.elastic.co/kibana/kibana                 7.9.2              ba296c26886a   3 days ago      1.18GB
docker.elastic.co/elasticsearch/elasticsearch   7.9.2              caa7a21ca06e   3 days ago      763MB
alpine                                          latest             a24bb4013296   3 months ago    5.57MB
oracle/weblogic                                 12.2.1.4-slim      9f08a7ac6c5a   11 months ago   817MB
<none>                                          <none>             fdf24d1eabc2   11 months ago   1.2GB
12214-weblogic-domain-generic                   latest             93dc22395c42   11 months ago   1.26GB
oracle/weblogic                                 12.2.1.4-generic   b67e91a22473   11 months ago   1.26GB
<none>                                          <none>             ba227fc7cf16   11 months ago   2.99GB
oraclelinux                                     7-slim             874477adb545   13 months ago   118MB
hello-world                                     latest             fce289e99eb9   21 months ago   1.84kB
docker.elastic.co/kibana/kibana                 6.2.2              87fb54648d86   2 years ago     985MB
docker.elastic.co/elasticsearch/elasticsearch   6.2.2              576825ff4f5a   2 years ago     574MB
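
The `<none>:<none>` rows above are dangling images, i.e. untagged layers left behind by rebuilds. They can be reclaimed safely; a sketch (layers still referenced by a tagged image or a container are not touched by prune):

```shell
# List dangling (untagged) images, then reclaim their disk space.
DANGLING_FILTER="dangling=true"
docker images -f "$DANGLING_FILTER" 2>/dev/null || echo "docker daemon not reachable"
# -f skips the interactive confirmation prompt.
docker image prune -f 2>/dev/null || true
```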

Create WebLogic 12.2.1.4 Docker image with Oracle JDK 8


       Clone Oracle Docker image repo

      https://github.com/oracle/docker-images/tree/master/OracleWebLogic

       


      Build Oracle JDK 8 Docker image

Build WebLogic Server image

./buildDockerImage.sh -h
Usage: buildDockerImage.sh -v [version] [-d | -g | -m ] [-j] [-s] [-c]
Builds a Docker Image for Oracle WebLogic.
Parameters:
   -v: version to build. Required.
       Choose one of: 12.1.3 12.2.1.3 12.2.1.4 14.1.1.0
   -d: creates image based on 'developer' distribution
   -g: creates image based on 'generic' distribution
   -j: choose '8' to create a 14.1.1.0 image with JDK 8 or '11' to create a 14.1.1.0 image with JDK 11.
   -m: creates image based on 'slim' distribution
   -c: enables Docker image layer cache during build
   -s: skips the MD5 check of packages

* select one distribution only: -d, -g, or -m

LICENSE UPL 1.0
Copyright (c) 2014, 2020, Oracle and/or its affiliates.

       

      ./buildDockerImage.sh -v 12.2.1.4 -g
      Set- WebLogic's Version 12.2.1.4
      Set- Distribution:Generic
      Version= 12.2.1.4 Distribution= generic
      Checking if required packages are present and valid...
      fmw_12.2.1.4.0_wls_lite_Disk1_1of1.zip: OK
      md5sum: WARNING: 1 line is improperly formatted
      =====================
      Proxy settings were found and will be used during build.
      Building image 'oracle/weblogic:12.2.1.4-generic' ...
      Building image using Dockerfile.'generic'
      Sending build context to Docker daemon 607.2MB
      Step 1/19 : FROM oracle/serverjre:8 as builder
      ---> b12cce0cdc47
      Step 2/19 : LABEL "provider"="Oracle""maintainer"="Monica Riccelli <monica.riccelli@oracle.com>""issues"="https://github.com/oracle/docker-images/issues""port.admin.listen"="7001""port.administration"="9002"
      ---> Running in f8d99a88a5ad
      Removing intermediate container f8d99a88a5ad
      ---> d736736d476b
      Step 3/19 : ENV ORACLE_HOME=/u01/oracle USER_MEM_ARGS="-Djava.security.egd=file:/dev/./urandom" PATH=$PATH:${JAVA_HOME}/bin:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin
      ---> Running in 3f2ececaf11a
      Removing intermediate container 3f2ececaf11a
      ---> 146b8fcd883b
      Step 4/19 : RUN mkdir /u01 && useradd -b /u01 -d /u01/oracle -m -s /bin/bash oracle && chown oracle:root -R /u01 && chmod -R 775 /u01
      ---> Running in b0d432006afd
      Removing intermediate container b0d432006afd
      ---> cf3b48c68fd8
      Step 5/19 : ENV FMW_PKG=fmw_12.2.1.4.0_wls_lite_Disk1_1of1.zip FMW_JAR=fmw_12.2.1.4.0_wls_lite_generic.jar
      ---> Running in cef80193cca1
      Removing intermediate container cef80193cca1
      ---> 61d8ae7b497c
      Step 6/19 : COPY --chown=oracle:root $FMW_PKG install.file oraInst.loc /u01/
      ---> 54d99177fc67
      Step 7/19 : USER oracle
      ---> Running in 93ef6bacad54
      Removing intermediate container 93ef6bacad54
      ---> 97b07f62eaf0
      Step 8/19 : RUN cd /u01 && ${JAVA_HOME}/bin/jar xf /u01/$FMW_PKG && cd - && ${JAVA_HOME}/bin/java -jar /u01/$FMW_JAR -silent -responseFile /u01/install.file -invPtrLoc /u01/oraInst.loc -jreLoc $JAVA_HOME -ignoreSysPrereqs -force -novalidation ORACLE_HOME=$ORACLE_HOME INSTALL_TYPE="WebLogic Server"&& rm /u01/$FMW_JAR /u01/$FMW_PKG /u01/install.file && rm -rf /u01/oracle/cfgtoollogs
      ---> Running in 2b8c2e927247
      /
      Launcher log file is /tmp/OraInstall2020-10-06_04-32-43PM/launcher2020-10-06_04-32-43PM.log.
      Extracting the installer . . . . . . Done
      Checking if CPU speed is above 300 MHz. Actual 2299.892 MHz Passed
      Checking swap space: must be greater than 512 MB. Actual 0 MB Failed <<<<
      Checking if this platform requires a 64-bit JVM. Actual 64 Passed (64-bit not required)
Checking temp space: must be greater than 300 MB. Actual 35308 MB Passed
>>> Ignoring failure(s) of required prerequisite checks and continuing.
      Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2020-10-06_04-32-43PM
      Log: /tmp/OraInstall2020-10-06_04-32-43PM/install2020-10-06_04-32-43PM.log
      Setting ORACLE_HOME...
      Setting INSTALL_TYPE...
      Copyright (c) 1996, 2019, Oracle and/or its affiliates. All rights reserved.
      Reading response file..
      Skipping Software Updates
      Validations are disabled for this session.
      Verifying data
      Copying Files
      Percent Complete : 10
      Percent Complete : 20
      Percent Complete : 30
      Percent Complete : 40
      Percent Complete : 50
      Percent Complete : 60
      Percent Complete : 70
      Percent Complete : 80
      Percent Complete : 90
Percent Complete : 100
The installation of Oracle Fusion Middleware 12c WebLogic Server and Coherence 12.2.1.4.0 completed successfully.
      Logs successfully copied to /u01/oracle/.inventory/logs.
      Removing intermediate container 2b8c2e927247
      ---> d7a4e153f899
      Step 9/19 : FROM oracle/serverjre:8
      ---> b12cce0cdc47
      Step 10/19 : ENV ORACLE_HOME=/u01/oracle USER_MEM_ARGS="-Djava.security.egd=file:/dev/./urandom" SCRIPT_FILE=/u01/oracle/createAndStartEmptyDomain.sh HEALTH_SCRIPT_FILE=/u01/oracle/get_healthcheck_url.sh PATH=$PATH:${JAVA_HOME}/bin:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin
      ---> Running in 431e04896ee9
      Removing intermediate container 431e04896ee9
      ---> eb2f7f4b28c6
      Step 11/19 : ENV DOMAIN_NAME="${DOMAIN_NAME:-base_domain}" ADMIN_LISTEN_PORT="${ADMIN_LISTEN_PORT:-7001}" ADMIN_NAME="${ADMIN_NAME:-AdminServer}" ADMINISTRATION_PORT_ENABLED="${ADMINISTRATION_PORT_ENABLED:-true}" ADMINISTRATION_PORT="${ADMINISTRATION_PORT:-9002}"
      ---> Running in 63039f801c73
      Removing intermediate container 63039f801c73
      ---> 81f866830975
      Step 12/19 : RUN mkdir -p /u01 && chmod 775 /u01 && useradd -b /u01 -d /u01/oracle -m -s /bin/bash oracle && chown oracle:root /u01
      ---> Running in 0d343b7ba4a1
      Removing intermediate container 0d343b7ba4a1
      ---> 313342ac76f6
      Step 13/19 : COPY --from=builder --chown=oracle:root /u01 /u01
      ---> 78f69917697e
      Step 14/19 : COPY container-scripts/createAndStartEmptyDomain.sh container-scripts/create-wls-domain.py container-scripts/get_healthcheck_url.sh /u01/oracle/
      ---> 50da8dbcf889
      Step 15/19 : RUN chmod +xr $SCRIPT_FILE $HEALTH_SCRIPT_FILE && chown oracle:root $SCRIPT_FILE /u01/oracle/create-wls-domain.py $HEALTH_SCRIPT_FILE
      ---> Running in d99c49d10e81
      Removing intermediate container d99c49d10e81
      ---> d92ddd8d7375
      Step 16/19 : USER oracle
      ---> Running in 453e1dfee2ef
      Removing intermediate container 453e1dfee2ef
      ---> 1d89d8a52ac8
      Step 17/19 : HEALTHCHECK --start-period=10s --timeout=30s --retries=3 CMD curl -k -s --fail `$HEALTH_SCRIPT_FILE` || exit 1
      ---> Running in 57809c715f24
      Removing intermediate container 57809c715f24
      ---> 209494bd4b6c
      Step 18/19 : WORKDIR ${ORACLE_HOME}
      ---> Running in a82dc96b8ddc
      Removing intermediate container a82dc96b8ddc
      ---> 8ab075250e2f
      Step 19/19 : CMD ["/u01/oracle/createAndStartEmptyDomain.sh"]
      ---> Running in 3a8437f5ed11
      Removing intermediate container 3a8437f5ed11
      ---> 9dd72f2c0bc0
      Successfully built 9dd72f2c0bc0
Successfully tagged oracle/weblogic:12.2.1.4-generic
WebLogic Docker Image for 'generic' version 12.2.1.4 is ready to be extended: --> oracle/weblogic:12.2.1.4-generic
Build completed in 121 seconds.

       

Build domain
      [centos@dave 12213-domain]$ more ./build.sh
      #! /bin/bash
      #
      #Copyright (c) 2019 Oracle and/or its affiliates. All rights reserved.
      #
      #Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.
      #
#Build WebLogic Domain image persisted to volume

docker build -f Dockerfile -t 12213-weblogic-domain-in-volume .
      [centos@dave 12213-domain]$ vi Dockerfile
      [centos@dave 12213-domain]$ ./build.sh
      Sending build context to Docker daemon 37.89kB
      Step 1/11 : FROM oracle/weblogic:12.2.1.4-generic
      ---> 4ce73a139125
      Step 2/11 : LABEL "provider"="Oracle""maintainer"="Monica Riccelli <monica.riccelli@oracle.com>""issues"="https://github.com/oracle/docker-images/issues""port.admin.listen"="7001""port.administration"="9002""port.managed.server"="8001"
      ---> Running in c835f39122df
      Removing intermediate container c835f39122df
      ---> b9c89cbdf500
      Step 3/11 : ENV DOMAIN_ROOT="/u01/oracle/user_projects/domains" ADMIN_HOST="${ADMIN_HOST:-AdminContainer}" MANAGED_SERVER_PORT="${MANAGED_SERVER_PORT:-8001}" MANAGED_SERVER_NAME_BASE="${MANAGED_SERVER_NAME_BASE:-MS}" MANAGED_SERVER_CONTAINER="${MANAGED_SERVER_CONTAINER:-false}" CONFIGURED_MANAGED_SERVER_COUNT="${CONFIGURED_MANAGED_SERVER_COUNT:-2}" MANAGED_NAME="${MANAGED_NAME:-MS1}" CLUSTER_NAME="${CLUSTER_NAME:-cluster1}" CLUSTER_TYPE="${CLUSTER_TYPE:-DYNAMIC}" PROPERTIES_FILE_DIR="/u01/oracle/properties" JAVA_OPTIONS="-Doracle.jdbc.fanEnabled=false -Dweblogic.StdoutDebugEnabled=false" PATH="$PATH:${JAVA_HOME}/bin:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle/container-scripts"
      ---> Running in 84b5fc5ce774
      Removing intermediate container 84b5fc5ce774
      ---> 522568b73d0f
      Step 4/11 : COPY --chown=oracle:oracle container-scripts/* /u01/oracle/container-scripts/
      ---> 46366a86c0a3
      Step 5/11 : COPY --chown=oracle:oracle container-scripts/get_healthcheck_url.sh /u01/oracle/get_healthcheck_url.sh
      ---> de84286612b9
      Step 6/11 : USER root
      ---> Running in 378af82336af
      Removing intermediate container 378af82336af
      ---> 1e46b9768728
      Step 7/11 : RUN mkdir -p $DOMAIN_ROOT && chown -R oracle:oracle $DOMAIN_ROOT/.. && chmod -R a+xwr $DOMAIN_ROOT/.. && mkdir -p $ORACLE_HOME/properties && chmod -R a+r $ORACLE_HOME/properties && chmod +x /u01/oracle/container-scripts/*
      ---> Running in 9eaa3b52ebc4
      Removing intermediate container 9eaa3b52ebc4
      ---> 1547b3da749e
      Step 8/11 : VOLUME $DOMAIN_ROOT
      ---> Running in 23f2f75bf138
      Removing intermediate container 23f2f75bf138
      ---> 0ba76fec1693
      Step 9/11 : USER oracle
      ---> Running in dcb7192cd277
      Removing intermediate container dcb7192cd277
      ---> 7b9bdaa230d9
      Step 10/11 : WORKDIR $ORACLE_HOME
      ---> Running in 1fdfd3c2a584
      Removing intermediate container 1fdfd3c2a584
      ---> 5dd674f9a68b
      Step 11/11 : CMD ["/u01/oracle/container-scripts/createWLSDomain.sh"]
      ---> Running in f03451d26803
      Removing intermediate container f03451d26803
      ---> 29cc97f25353
      Successfully built 29cc97f25353
      Successfully tagged 12213-weblogic-domain-in-volume:latest


      Start Admin server

       
       
      ./run_admin_server.sh
      Context for docker build is /git/docker-images/OracleWebLogic/samples/12213-domain
      Export environment variables from the /git/docker-images/OracleWebLogic/samples/12213-domain/properties/domain.properties properties file
      env_arg: DOMAIN_NAME=myDomain
      env_arg: ADMIN_NAME=myadmin
      env_arg: ADMIN_LISTEN_PORT=7001
      env_arg: ADMIN_HOST=AdminContainer
      env_arg: ADMINISTRATION_PORT_ENABLED=false
      env_arg: ADMINISTRATION_PORT=9002
      env_arg: MANAGED_SERVER_PORT=8001
      env_arg: MANAGED_SERVER_NAME_BASE=MS
      env_arg: CONFIGURED_MANAGED_SERVER_COUNT=2
      env_arg: CLUSTER_NAME=cluster1
      env_arg: CLUSTER_TYPE=DYNAMIC
      env_arg: PRODUCTION_MODE=dev
      env_arg: DOMAIN_HOST_VOLUME=/app/domains
      The domain configuration will get persisted in the host volume: /app/domains
      administrationport=9002
      docker run -d -p 9001:7001 -p 9002:9002 --name AdminContainer --hostname AdminContainer -v /git/docker-images/OracleWebLogic/samples/12213-domain/properties:/u01/oracle/properties -v /app/domains:/u01/oracle/user_projects/domains -e DOMAIN_NAME=myDomain -e ADMIN_NAME=myadmin -e ADMIN_LISTEN_PORT=7001 -e ADMIN_HOST=AdminContainer -e ADMINISTRATION_PORT_ENABLED=false -e ADMINISTRATION_PORT=9002 -e MANAGED_SERVER_PORT=8001 -e MANAGED_SERVER_NAME_BASE=MS -e CONFIGURED_MANAGED_SERVER_COUNT=2 -e CLUSTER_NAME=cluster1 -e CLUSTER_TYPE=DYNAMIC -e PRODUCTION_MODE=dev 12213-weblogic-domain-in-volume
      bb8e16947737ca93b174397ef47fb2427b4ea349ed0dad93c609172006ad842a
      [centos@dave 12213-domain]$ docker ps
CONTAINER ID   IMAGE                             COMMAND                  CREATED         STATUS                            PORTS                                            NAMES
bb8e16947737   12213-weblogic-domain-in-volume   "/u01/oracle/contain…"   5 seconds ago   Up 4 seconds (health: starting)   0.0.0.0:9002->9002/tcp, 0.0.0.0:9001->7001/tcp   AdminContainer




      Create Azure VM using Terraform


       Install Terraform on Fedora

      https://learn.hashicorp.com/tutorials/terraform/install-cli

       

      sudo dnf install -y dnf-plugins-core

      sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo

      sudo dnf -y install terraform


      Install Azure CLI

      https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-yum


      Authenticate to Azure using service principal variables

      https://docs.microsoft.com/en-us/azure/developer/terraform/get-started-cloud-shell#authenticate-via-azure-service-principal

       

      Set subscription to use with Terraform

      https://docs.microsoft.com/en-us/azure/developer/terraform/get-started-cloud-shell#set-the-current-azure-subscription


      az account set --subscription="<subscription_id>"

Export Azure service principal secret variables

       https://www.terraform.io/docs/providers/azurerm/guides/service_principal_client_secret.html

      $ export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
      $ export ARM_CLIENT_SECRET="00000000-0000-0000-0000-000000000000"
      $ export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
      $ export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
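
One way to keep these secrets out of the shell history is to put the exports in a mode-600 file and source it once per session. A sketch; the filename is arbitrary and the values below are the same placeholders as above, to be replaced with your own:

```shell
# Store the service-principal exports in a private file instead of
# typing them interactively (values are placeholders).
cat > ./azure-sp.env <<'EOF'
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="00000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
EOF
# Restrict the file to the owner, then load it into the current shell.
chmod 600 ./azure-sp.env
. ./azure-sp.env
```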


      Create VM

      https://docs.microsoft.com/en-us/azure/developer/terraform/create-linux-virtual-machine-with-infrastructure

       

# Configure the Microsoft Azure Provider
provider "azurerm" {
  # The "features" block is required for AzureRM provider 2.x.
  # If you're using version 1.x, the "features" block is not allowed.
  version = "~>2.0"
  features {}
}

# Create a resource group if it doesn't exist
resource "azurerm_resource_group" "myterraformgroup" {
  name     = "myResourceGroup"
  location = "eastus"

  tags = {
    environment = "Terraform Demo"
  }
}

# Create virtual network
resource "azurerm_virtual_network" "myterraformnetwork" {
  name                = "myVnet"
  address_space       = ["10.0.0.0/16"]
  location            = "eastus"
  resource_group_name = azurerm_resource_group.myterraformgroup.name

  tags = {
    environment = "Terraform Demo"
  }
}

# Create subnet
resource "azurerm_subnet" "myterraformsubnet" {
  name                 = "mySubnet"
  resource_group_name  = azurerm_resource_group.myterraformgroup.name
  virtual_network_name = azurerm_virtual_network.myterraformnetwork.name
  address_prefixes     = ["10.0.1.0/24"]
}

# Create public IPs
resource "azurerm_public_ip" "myterraformpublicip" {
  name                = "myPublicIP"
  location            = "eastus"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  allocation_method   = "Dynamic"

  tags = {
    environment = "Terraform Demo"
  }
}

# Create Network Security Group and rule
resource "azurerm_network_security_group" "myterraformnsg" {
  name                = "myNetworkSecurityGroup"
  location            = "eastus"
  resource_group_name = azurerm_resource_group.myterraformgroup.name

  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  tags = {
    environment = "Terraform Demo"
  }
}

# Create network interface
resource "azurerm_network_interface" "myterraformnic" {
  name                = "myNIC"
  location            = "eastus"
  resource_group_name = azurerm_resource_group.myterraformgroup.name

  ip_configuration {
    name                          = "myNicConfiguration"
    subnet_id                     = azurerm_subnet.myterraformsubnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.myterraformpublicip.id
  }

  tags = {
    environment = "Terraform Demo"
  }
}

# Connect the security group to the network interface
resource "azurerm_network_interface_security_group_association" "example" {
  network_interface_id      = azurerm_network_interface.myterraformnic.id
  network_security_group_id = azurerm_network_security_group.myterraformnsg.id
}

# Generate random text for a unique storage account name
resource "random_id" "randomId" {
  keepers = {
    # Generate a new ID only when a new resource group is defined
    resource_group = azurerm_resource_group.myterraformgroup.name
  }

  byte_length = 8
}

# Create storage account for boot diagnostics
resource "azurerm_storage_account" "mystorageaccount" {
  name                     = "diag${random_id.randomId.hex}"
  resource_group_name      = azurerm_resource_group.myterraformgroup.name
  location                 = "eastus"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    environment = "Terraform Demo"
  }
}

# Create (and display) an SSH key
resource "tls_private_key" "example_ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}
output "tls_private_key" { value = tls_private_key.example_ssh.private_key_pem }

# Create virtual machine
resource "azurerm_linux_virtual_machine" "myterraformvm" {
  name                  = "myVM"
  location              = "eastus"
  resource_group_name   = azurerm_resource_group.myterraformgroup.name
  network_interface_ids = [azurerm_network_interface.myterraformnic.id]
  size                  = "Standard_DS1_v2"

  os_disk {
    name                 = "myOsDisk"
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  computer_name                   = "myvm"
  admin_username                  = "azureuser"
  disable_password_authentication = true

  admin_ssh_key {
    username   = "azureuser"
    public_key = tls_private_key.example_ssh.public_key_openssh
  }

  boot_diagnostics {
    storage_account_uri = azurerm_storage_account.mystorageaccount.primary_blob_endpoint
  }

  tags = {
    environment = "Terraform Demo"
  }
}
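
To turn the configuration above into a running VM, the usual init/plan/apply cycle applies. A sketch; the plan and key filenames are arbitrary, `terraform output -raw` needs Terraform 0.13 or newer, and since the config only outputs the private key, the public IP is looked up with the Azure CLI afterwards:

```shell
# Run from the directory containing the .tf file above.
terraform init                   # downloads the azurerm provider
terraform plan -out=vm.tfplan    # review what will be created
terraform apply vm.tfplan        # provision the resources
# Save the generated SSH key (declared as the tls_private_key output):
terraform output -raw tls_private_key > myvm.pem
chmod 600 myvm.pem
# The config defines no IP output, so query it via the Azure CLI:
az vm list-ip-addresses -g myResourceGroup -n myVM -o table
```

After that, `ssh -i myvm.pem azureuser@<public-ip>` should log in, since password authentication is disabled in the VM resource.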

       

       



Create WebLogic Docker domain using docker-compose


      Docker compose 

      https://docs.docker.com/compose/

       

      docker-compose.yml file

      https://docs.docker.com/compose/compose-file/

       

WebLogic 12.2.1.4 Docker image

      https://danielveselka.blogspot.com/2020/10/create-weblogic-12214-docker-image-with.html

       

WebLogic Docker

      https://github.com/oracle/docker-images/tree/master/OracleWebLogic

      https://github.com/oracle/docker-images/tree/master/OracleWebLogic/dockerfiles/12.2.1.4

      http://www.oracle.com/us/products/middleware/cloud-app-foundation/weblogic/weblogic-server-on-docker-wp-2742665.pdf

       

Some samples of Docker Compose WebLogic domains

      https://github.com/bzon/docker-oracle-weblogic/blob/master/OracleWeblogic/weblogic-samples/1221-app-jms-domain/docker-compose.yml

      https://wlatricksntips.blogspot.com/2017/07/5-docker-compose-for-weblogic-cluster.html

       

      First attempt

      https://github.com/dveselka/weblogic/blob/master/docker-compose/docker-compose.yml

       

version: "3.8"

networks:
  &network wlsnet:
    driver: bridge

services:
  adminserver:
    container_name: adminserver
    image: oracle/weblogic:12.2.1.4-generic
    restart: "no"
    environment:
      DOMAIN_NAME: myDomain
      ADMIN_NAME: myadmin
      ADMIN_LISTEN_PORT: 7001
      ADMIN_HOST: AdminContainer
      ADMINISTRATION_PORT_ENABLED: "false"
      ADMINISTRATION_PORT: 9002
      MANAGED_SERVER_PORT: 8001
      MANAGED_SERVER_NAME_BASE: MS
      CONFIGURED_MANAGED_SERVER_COUNT: 2
      CLUSTER_NAME: cluster1
      CLUSTER_TYPE: DYNAMIC
      PRODUCTION_MODE: dev
      DOMAIN_HOST_VOLUME: /app/domains
      PROPERTIES_FILE_DIR: "/u01/oracle/properties"
      PROPERTIES_FILE: /u01/oracle/properties/domain.properties
    ports:
      - "7001:7001"
      - "9002:9002"
    networks:
      - *network
    volumes:
      - "/git/weblogic/docker-compose/properties:/u01/oracle/properties"
      - "/app/domains:/u01/oracle/user_projects/domains"

volumes:
  adminserver:


      docker-compose up
      Recreating adminserver ... done
      Attaching to adminserver
      adminserver | Domain Home is: /u01/oracle/user_projects/domains/base_domain
      adminserver | /u01/oracle/properties/domain_security.properties
      adminserver | A properties file with the username and password needs to be supplied.
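
The error above means the create-domain script found no credential file in the directory set by PROPERTIES_FILE_DIR. A minimal sketch of the two files it looks for, written into the host directory that the compose file mounts at /u01/oracle/properties; the key names follow the Oracle sample container-scripts, and the username/password values are placeholders to change:

```shell
# Create the properties files expected by the container-scripts.
PROPS_DIR=./properties    # mounted at /u01/oracle/properties by compose
mkdir -p "$PROPS_DIR"
cat > "$PROPS_DIR/domain.properties" <<'EOF'
DOMAIN_NAME=myDomain
ADMIN_NAME=myadmin
ADMIN_LISTEN_PORT=7001
EOF
# domain_security.properties holds the WebLogic admin credentials.
cat > "$PROPS_DIR/domain_security.properties" <<'EOF'
username=weblogic
password=welcome1
EOF
# Keep the credentials readable only by the owner.
chmod 600 "$PROPS_DIR/domain_security.properties"
```

With the files in place, `docker-compose up` should get past the credential check and start creating the domain.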


      Docker compose with --verbose


      [dave@dave docker-compose]$ docker-compose  --verbose up
      compose.config.config.find: Using configuration files: ./docker-compose.yml
      docker.utils.config.find_config_file: Trying paths: ['/home/dave/.docker/config.json', '/home/dave/.dockercfg']
      docker.utils.config.find_config_file: No config file found
      docker.utils.config.find_config_file: Trying paths: ['/home/dave/.docker/config.json', '/home/dave/.dockercfg']
      docker.utils.config.find_config_file: No config file found
      docker.utils.config.find_config_file: Trying paths: ['/home/dave/.docker/config.json', '/home/dave/.dockercfg']
      docker.utils.config.find_config_file: No config file found
      urllib3.connectionpool._make_request: http://localhost:None "GET /version HTTP/1.1" 200 831
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/version HTTP/1.1" 200 831
      compose.cli.docker_client.get_client: docker-compose version 1.27.4, build 40524192
      docker-py version: 4.3.1
      CPython version: 3.7.7
      OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019
      compose.cli.docker_client.get_client: Docker base_url: http+docker://localhost
      compose.cli.docker_client.get_client: Docker version: Platform={'Name': ''}, Components=[{'Name': 'Engine', 'Version': '19.03.11', 'Details': {'ApiVersion': '1.40', 'Arch': 'amd64', 'BuildTime': '2020-06-07T00:00:00.000000000+00:00', 'Experimental': 'false', 'GitCommit': '42e35e6', 'GoVersion': 'go1.14.3', 'KernelVersion': '5.8.14-200.fc32.x86_64', 'MinAPIVersion': '1.12', 'Os': 'linux'}}, {'Name': 'containerd', 'Version': '1.3.7', 'Details': {'GitCommit': '8fba4e9a7d01810a393d5d25a3621dc101981175'}}, {'Name': 'runc', 'Version': '1.0.0-rc10', 'Details': {'GitCommit': 'dc9208a3303feef5b3839f4323d9beb36df0a9dd'}}, {'Name': 'docker-init', 'Version': '0.18.0', 'Details': {'GitCommit': ''}}], Version=19.03.11, ApiVersion=1.40, MinAPIVersion=1.12, GitCommit=42e35e6, GoVersion=go1.14.3, Os=linux, Arch=amd64, KernelVersion=5.8.14-200.fc32.x86_64, BuildTime=2020-06-07T00:00:00.000000000+00:00
      compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('dockercompose_wlsnet')
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/networks/dockercompose_wlsnet HTTP/1.1" 404 53
      compose.cli.verbose_proxy.proxy_callable: docker info <- ()
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/info HTTP/1.1" 200 None
      compose.cli.verbose_proxy.proxy_callable: docker info -> {'Architecture': 'x86_64',
      'BridgeNfIp6tables': True,
      'BridgeNfIptables': True,
      'CPUSet': True,
      'CPUShares': True,
      'CgroupDriver': 'systemd',
      'ClusterAdvertise': '',
      'ClusterStore': '',
      'ContainerdCommit': {'Expected': '8fba4e9a7d01810a393d5d25a3621dc101981175',
      'ID': '8fba4e9a7d01810a393d5d25a3621dc101981175'},
      ...
      compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('docker-compose_wlsnet')
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/networks/docker-compose_wlsnet HTTP/1.1" 200 570
      compose.cli.verbose_proxy.proxy_callable: docker inspect_network -> {'Attachable': True,
      'ConfigFrom': {'Network': ''},
      'ConfigOnly': False,
      'Containers': {},
      'Created': '2020-10-18T10:11:09.446958986+02:00',
      'Driver': 'bridge',
      'EnableIPv6': False,
      'IPAM': {'Config': [{'Gateway': '172.18.0.1', 'Subnet': '172.18.0.0/16'}],
      'Driver': 'default',
      'Options': None},
      ...
      compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=docker-compose', 'com.docker.compose.oneoff=False']})
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/containers/json?limit=-1&all=0&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Ddocker-compose%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 3
      compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
      compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=dockercompose', 'com.docker.compose.oneoff=False']})
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/containers/json?limit=-1&all=0&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Ddockercompose%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 3
      compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
      compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=docker-compose', 'com.docker.compose.oneoff=False']})
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/containers/json?limit=-1&all=1&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Ddocker-compose%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 1519
      compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 1 items)
      compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07')
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/containers/36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07/json HTTP/1.1" 200 None
      compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '',
      'Args': [],
      'Config': {'AttachStderr': False,
      'AttachStdin': False,
      'AttachStdout': False,
      'Cmd': ['/u01/oracle/container-scripts/createWLSDomain.sh'],
      'Domainname': '',
      'Entrypoint': None,
      'Env': ['DOMAIN_NAME=myDomain',
      'ADMIN_NAME=myadmin',
      ...
      compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=docker-compose', 'com.docker.compose.service=adminserver', 'com.docker.compose.oneoff=False']})
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/containers/json?limit=-1&all=1&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Ddocker-compose%22%2C+%22com.docker.compose.service%3Dadminserver%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 1519
      compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 1 items)
      compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07')
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/containers/36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07/json HTTP/1.1" 200 None
      compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '',
      'Args': [],
      'Config': {'AttachStderr': False,
      'AttachStdin': False,
      'AttachStdout': False,
      'Cmd': ['/u01/oracle/container-scripts/createWLSDomain.sh'],
      'Domainname': '',
      'Entrypoint': None,
      'Env': ['DOMAIN_NAME=myDomain',
      'ADMIN_NAME=myadmin',
      ...
      compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('12214-weblogic-domain-generic')
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/images/12214-weblogic-domain-generic/json HTTP/1.1" 200 None
      compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64',
      'Author': 'Monica Riccelli <monica.riccelli@oracle.com>',
      'Comment': '',
      'Config': {'AttachStderr': False,
      'AttachStdin': False,
      'AttachStdout': False,
      'Cmd': ['/u01/oracle/container-scripts/createWLSDomain.sh'],
      'Domainname': '',
      'Entrypoint': None,
      'Env': ['PATH=/usr/java/jdk-8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/java/jdk-8/bin:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin',
      ...
      compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=docker-compose', 'com.docker.compose.service=adminserver', 'com.docker.compose.oneoff=False']})
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/containers/json?limit=-1&all=1&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Ddocker-compose%22%2C+%22com.docker.compose.service%3Dadminserver%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 1519
      compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 1 items)
      compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('12214-weblogic-domain-generic')
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/images/12214-weblogic-domain-generic/json HTTP/1.1" 200 None
      compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64',
      'Author': 'Monica Riccelli <monica.riccelli@oracle.com>',
      'Comment': '',
      'Config': {'AttachStderr': False,
      'AttachStdin': False,
      'AttachStdout': False,
      'Cmd': ['/u01/oracle/container-scripts/createWLSDomain.sh'],
      'Domainname': '',
      'Entrypoint': None,
      'Env': ['PATH=/usr/java/jdk-8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/java/jdk-8/bin:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin',
      ...
      compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07')
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/containers/36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07/json HTTP/1.1" 200 None
      compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '',
      'Args': [],
      'Config': {'AttachStderr': False,
      'AttachStdin': False,
      'AttachStdout': False,
      'Cmd': ['/u01/oracle/container-scripts/createWLSDomain.sh'],
      'Domainname': '',
      'Entrypoint': None,
      'Env': ['DOMAIN_NAME=myDomain',
      'ADMIN_NAME=myadmin',
      ...
      compose.parallel.feed_queue: Pending: {<Service: adminserver>}
      compose.parallel.feed_queue: Starting producer thread for <Service: adminserver>
      Starting adminserver ...
      compose.parallel.feed_queue: Pending: {<Container: adminserver (36c57a)>}
      compose.parallel.feed_queue: Starting producer thread for <Container: adminserver (36c57a)>
      compose.cli.verbose_proxy.proxy_callable: docker attach <- ('36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07', stdout=True, stderr=True, stream=True)
      urllib3.connectionpool._make_request: http://localhost:None "POST /v1.40/containers/36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07/attach?logs=0&stdout=1&stderr=1&stream=1 HTTP/1.1" 101 0
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/containers/36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07/json HTTP/1.1" 200 None
      compose.cli.verbose_proxy.proxy_callable: docker attach -> <docker.types.daemon.CancellableStream object at 0x7fa40bb7f7d0>
      compose.cli.verbose_proxy.proxy_callable: docker start <- ('36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07')
      compose.parallel.feed_queue: Pending: set()
      compose.parallel.feed_queue: Pending: set()
      compose.parallel.feed_queue: Pending: set()
      compose.parallel.feed_queue: Pending: set()
      compose.parallel.feed_queue: Pending: set()
      compose.parallel.feed_queue: Pending: set()
      urllib3.connectionpool._make_request: http://localhost:None "POST /v1.40/containers/36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07/start HTTP/1.1" 204 0
      compose.cli.verbose_proxy.proxy_callable: docker start -> None
      Starting adminserver ... done
      compose.parallel.feed_queue: Pending: set()
      compose.parallel.parallel_execute_iter: Finished processing: <Service: adminserver>
      compose.parallel.feed_queue: Pending: set()
      Attaching to adminserver
      compose.cli.verbose_proxy.proxy_callable: docker events <- (filters={'label': ['com.docker.compose.project=docker-compose', 'com.docker.compose.oneoff=False']}, decode=True)
      adminserver | Domain Home is: /u01/oracle/user_projects/domains/base_domain
      adminserver | /u01/oracle/properties/domain_security.properties
      adminserver | A properties file with the username and password needs to be supplied.
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/events?filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Ddocker-compose%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 None
      compose.cli.verbose_proxy.proxy_callable: docker events -> <docker.types.daemon.CancellableStream object at 0x7fa40bb3e510>
      compose.cli.verbose_proxy.proxy_callable: docker wait <- ('36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07')
      compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07')
      urllib3.connectionpool._make_request: http://localhost:None "POST /v1.40/containers/36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07/wait HTTP/1.1" 200 None
      urllib3.connectionpool._make_request: http://localhost:None "GET /v1.40/containers/36c57a8bd9d6f6a1da699dc047755c741439da108868d0b28c950ccd50381f07/json HTTP/1.1" 200 None
      compose.cli.verbose_proxy.proxy_callable: docker wait -> {'Error': None, 'StatusCode': 0}
      adminserver exited with code 0
      compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '',
      'Args': [],
      'Config': {'AttachStderr': False,
      'AttachStdin': False,
      'AttachStdout': False,
      'Cmd': ['/u01/oracle/container-scripts/createWLSDomain.sh'],
      'Domainname': '',
      'Entrypoint': None,
      'Env': ['DOMAIN_NAME=myDomain',
      'ADMIN_NAME=myadmin',
      ...


      Install kubectl and minikube on Fedora 32


       Install kubectl

      In your terminal run the following:

      curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

      [dave@dave tmp]$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100 41.0M  100 41.0M    0     0  13.7M      0  0:00:02  0:00:02 --:--:-- 13.7M
      [dave@dave tmp]$ ls -l ./kubectl
      -rw-rw-r--. 1 dave dave 43003904 Nov 3 22:15 ./kubectl

      chmod +x ./kubectl

      sudo mv ./kubectl /usr/local/bin/kubectl


      Check your Installation:

      kubectl version

      [dave@dave tmp]$ sudo mv ./kubectl /usr/local/bin/kubectl
      [dave@dave tmp]$ kubectl version
      Client Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.8-beta.0", GitCommit:"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4", GitTreeState:"archive", BuildDate:"2020-01-29T00:00:00Z", GoVersion:"go1.14beta1", Compiler:"gc", Platform:"linux/amd64"}
      The connection to the server localhost:8080 was refused - did you specify the right host or port?
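      The "connection refused" line is expected at this point: no cluster is running yet, so only the client version matters. A small sketch for pulling just the version string out of that output; the sample line is inlined so it runs without kubectl, but in practice you would pipe `kubectl version --client` in instead.

      ```shell
      # Extract the GitVersion field from `kubectl version` output.
      # The sample line is inlined so the sketch runs without kubectl installed.
      sample='Client Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.8-beta.0"}'
      version=$(printf '%s\n' "$sample" | sed -n 's/.*GitVersion:"\([^"]*\)".*/\1/p')
      echo "$version"   # v1.15.8-beta.0
      ```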

      See also official docs:
      https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux

       

      Install minikube

      https://minikube.sigs.k8s.io/docs/start/

      curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
      sudo rpm -ivh minikube-latest.x86_64.rpm

      First start

       

      [dave@dave simple-k8s]$ minikube delete
      🔥 Deleting "minikube" in kvm2 ...
      💀 Removed all traces of the "minikube" cluster.
      [dave@dave simple-k8s]$ minikube start
      😄 minikube v1.14.2 on Fedora 32
      ✨ Using the kvm2 driver based on user configuration
      💿 Downloading VM boot image ...
      > minikube-v1.14.0.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
      > minikube-v1.14.0.iso: 178.27 MiB / 178.27 MiB [ 100.00% 16.39 MiB p/s 11s
      👍 Starting control plane node minikube in cluster minikube
      💾 Downloading Kubernetes v1.19.2 preload ...
      > preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 486.33 MiB
      🔥 Creating kvm2 VM (CPUs=2, Memory=3900MB, Disk=20000MB) ...
      🔥 Deleting "minikube" in kvm2 ...
      🤦 StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: ensuring active networks: starting network default: virError(Code=89, Domain=47, Message='error from service: changeZoneOfInterface: COMMAND_FAILED: 'python-nftables' failed:
      JSON blob:
      {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"add": {"rule": {"family": "inet", "table": "firewalld", "chain": "filter_IN_libvirt_allow", "expr": [{"match": {"left": {"payload": {"protocol": "udp", "field": "dport"}}, "op": "==", "right": 67}}, {"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["new", "untracked"]}}}, {"accept": null}]}}}, {"add": {"rule": {"family": "inet", "table": "firewalld", "chain": "filter_IN_libvirt_allow", "expr": [{"match": {"left": {"payload": {"protocol": "udp", "field": "dport"}}, "op": "==", "right": 547}}, {"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["new", "untracked"]}}}, {"accept": null}]}}}, {"add": {"rule": {"family": "inet", "table": "firewalld", "chain": "filter_IN_libvirt_allow", "expr": [{"match": {"left": {"payload": {"protocol": "tcp", "field": "dport"}}, "op": "==", "right": 53}}, {"match": {"left": {"ct": {"key": "state"}}')
      🔥 Creating kvm2 VM (CPUs=2, Memory=3900MB, Disk=20000MB) ...
      😿 Failed to start kvm2 VM. Running "minikube delete" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: ensuring active networks: starting network default: virError(Code=89, Domain=47, Message='error from service: changeZoneOfInterface: COMMAND_FAILED: 'python-nftables' failed:
      JSON blob:
      {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"add": {"rule": {"family": "inet", "table": "firewalld", "chain": "filter_IN_libvirt_allow", "expr": [{"match": {"left": {"payload": {"protocol": "udp", "field": "dport"}}, "op": "==", "right": 67}}, {"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["new", "untracked"]}}}, {"accept": null}]}}}, {"add": {"rule": {"family": "inet", "table": "firewalld", "chain": "filter_IN_libvirt_allow", "expr": [{"match": {"left": {"payload": {"protocol": "udp", "field": "dport"}}, "op": "==", "right": 547}}, {"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["new", "untracked"]}}}, {"accept": null}]}}}, {"add": {"rule": {"family": "inet", "table": "firewalld", "chain": "filter_IN_libvirt_allow", "expr": [{"match": {"left": {"payload": {"protocol": "tcp", "field": "dport"}}, "op": "==", "right": 53}}, {"match": {"left": {"ct": {"key": "state"}}')

      ❌ Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: ensuring active networks: starting network default: virError(Code=89, Domain=47, Message='error from service: changeZoneOfInterface: COMMAND_FAILED: 'python-nftables' failed:
      JSON blob:
      {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"add": {"rule": {"family": "inet", "table": "firewalld", "chain": "filter_IN_libvirt_allow", "expr": [{"match": {"left": {"payload": {"protocol": "udp", "field": "dport"}}, "op": "==", "right": 67}}, {"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["new", "untracked"]}}}, {"accept": null}]}}}, {"add": {"rule": {"family": "inet", "table": "firewalld", "chain": "filter_IN_libvirt_allow", "expr": [{"match": {"left": {"payload": {"protocol": "udp", "field": "dport"}}, "op": "==", "right": 547}}, {"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["new", "untracked"]}}}, {"accept": null}]}}}, {"add": {"rule": {"family": "inet", "table": "firewalld", "chain": "filter_IN_libvirt_allow", "expr": [{"match": {"left": {"payload": {"protocol": "tcp", "field": "dport"}}, "op": "==", "right": 53}}, {"match": {"left": {"ct": {"key": "state"}}')

      😿 If the above advice does not help, please let us know:
      👉 https://github.com/kubernetes/minikube/issues/new/choose
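      The virError above is a known Fedora 31+ problem: firewalld switched to the nftables backend, which libvirt's default network trips over. A hedged workaround sketch, switching firewalld back to the iptables backend; it operates on a temp copy of the config so it runs unprivileged (the `FirewallBackend` key name is taken from the firewalld docs, and the real fix needs root).

      ```shell
      # Workaround sketch: flip firewalld's backend from nftables to iptables.
      # On a real host you would instead run:
      #   sudo sed -i 's/^FirewallBackend=nftables/FirewallBackend=iptables/' /etc/firewalld/firewalld.conf
      #   sudo systemctl restart firewalld libvirtd
      conf=$(mktemp)
      printf 'FirewallBackend=nftables\n' > "$conf"   # stand-in for the real config file
      sed -i 's/^FirewallBackend=nftables/FirewallBackend=iptables/' "$conf"
      backend=$(grep '^FirewallBackend=' "$conf")
      echo "$backend"   # FirewallBackend=iptables
      rm -f "$conf"
      ```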

       Trying with driver=docker

      [dave@dave simple-k8s]$ minikube start --driver=docker
      😄 minikube v1.14.2 on Fedora 32
      ❗ Both driver=docker and vm-driver=kvm2 have been set.

      Since vm-driver is deprecated, minikube will default to driver=docker.

      If vm-driver is set in the global config, please run "minikube config unset vm-driver" to resolve this warning.

      ✨ Using the docker driver based on user configuration
      👍 Starting control plane node minikube in cluster minikube
      🚜 Pulling base image ...
      🔥 Creating docker container (CPUs=2, Memory=3900MB) ...

      🧯 Docker is nearly out of disk space, which may cause deployments to fail! (94% of capacity)
      💡 Suggestion:

      Try at least one of the following to free up space on the device:

      1. Run "docker system prune" to remove unused docker data
      2. Increase the amount of memory allocated to Docker for Desktop via
      Docker icon > Preferences > Resources > Disk Image Size
      3. Run "minikube ssh -- docker system prune" if using the docker container runtime
      🍿 Related issue: https://github.com/kubernetes/minikube/issues/9024

      🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
      🔎 Verifying Kubernetes components...
      🌟 Enabled addons: storage-provisioner, default-storageclass

      ❗ /usr/bin/kubectl is version 1.15.8-beta.0, which may have incompatibilites with Kubernetes 1.19.2.
      💡 Want kubectl v1.19.2? Try 'minikube kubectl -- get pods -A'
      🏄 Done! kubectl is now configured to use "minikube" by default

       List pods on k8s cluster

      [dave@dave simple-k8s]$ kubectl get po -A
      NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
      default       client-pod                         1/1     Running   0          62s
      kube-system   coredns-f9fd979d6-t8ktc            1/1     Running   0          2m28s
      kube-system   etcd-minikube                      1/1     Running   0          2m26s
      kube-system   kube-apiserver-minikube            1/1     Running   0          2m27s
      kube-system   kube-controller-manager-minikube   1/1     Running   0          2m27s
      kube-system   kube-proxy-tg8zg                   1/1     Running   0          2m28s
      kube-system   kube-scheduler-minikube            1/1     Running   0          2m27s
      kube-system   storage-provisioner                1/1     Running   1          2m31s
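      A quick health-check sketch: print any pod that is not Running with all containers ready. A sample of the `kubectl get po -A` output above is inlined so it runs anywhere; on a live cluster pipe `kubectl get po -A --no-headers` into the awk instead.

      ```shell
      # Print pods whose STATUS is not Running or whose READY count x/y has x != y.
      not_ready=$(awk '{split($3, r, "/"); if ($4 != "Running" || r[1] != r[2]) print $2}' <<'EOF'
      default     client-pod               1/1   Running   0   62s
      kube-system coredns-f9fd979d6-t8ktc  0/1   Pending   0   2m28s
      EOF
      )
      echo "$not_ready"   # coredns-f9fd979d6-t8ktc
      ```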


      K8s Ingress on minikube



      HOWTO

      https://www.udemy.com/course/docker-and-kubernetes-the-complete-guide/

      https://kubernetes.io/docs/concepts/services-networking/ingress/

      https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/

      https://github.com/kubernetes/ingress-nginx

      https://www.joyfulbikeshedding.com/blog/2018-03-26-studying-the-kubernetes-ingress-system.html

      https://minikube.sigs.k8s.io/docs/drivers/docker/#known-issues

       

      GitHub repos

      https://github.com/dveselka/devops-k8s/tree/main/complex

      https://github.com/StephenGrider/multi-k8s

       

      Start a cluster using the docker driver:

      minikube start --driver=docker

      To make docker the default driver:

      minikube config set driver docker
       
      [dave@dave complex]$ minikube config set driver docker
      ❗ These changes will take effect upon a minikube delete and then a minikube start
      [dave@dave complex]$ minikube delete
      🔥 Deleting "minikube" in docker ...
      🔥 Deleting container "minikube" ...
      🔥 Removing /home/dave/.minikube/machines/minikube ...
      💀 Removed all traces of the "minikube" cluster.
      [dave@dave complex]$ minikube start
      😄 minikube v1.14.2 on Fedora 32
      ✨ Using the docker driver based on user configuration
      👍 Starting control plane node minikube in cluster minikube
      🔥 Creating docker container (CPUs=2, Memory=3900MB) ...
       🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
      🔎 Verifying Kubernetes components...
      🌟 Enabled addons: storage-provisioner, default-storageclass

      ❗ /usr/bin/kubectl is version 1.15.8-beta.0, which may have incompatibilites with Kubernetes 1.19.2.
      💡 Want kubectl v1.19.2? Try 'minikube kubectl -- get pods -A'
      🏄 Done! kubectl is now configured to use "minikube" by default
       

      Check the container OS type

      [dave@dave complex]$ docker info --format '{{.OSType}}'
      linux


      Provider specific steps

      https://kubernetes.github.io/ingress-nginx/deploy/#provider-specific-steps

      Google Cloud https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.0/deploy/static/provider/cloud/deploy.yaml


      Minikube

      [dave@dave complex]$ minikube addons enable ingress
      🔎 Verifying ingress addon...
      🌟 The 'ingress' addon is enabled

       

      Apply config files

      kubectl apply -f k8s/
      service/client-cluster-ip-service created
      deployment.apps/client-deployment created
      persistentvolumeclaim/database-persistent-volume-claim created
      ingress.networking.k8s.io/ingress-service created
      service/postgres-cluster-ip-service created
      deployment.apps/postgres-deployment created
      service/redis-cluster-ip-service created
      deployment.apps/redis-deployment created
      service/server-cluster-ip-service created
      deployment.apps/server-deployment created
      deployment.apps/worker-deployment created
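      The apply output above can be summarized by resource kind, which is a quick sanity check that every manifest was picked up. A sketch; the heredoc inlines a few sample lines, but in practice you would pipe the real `kubectl apply` output in.

      ```shell
      # Count created resources per kind from `kubectl apply` output lines
      # of the form "kind/name created".
      kinds=$(awk -F'[/ ]' '$NF == "created" {print $1}' <<'EOF'
      service/client-cluster-ip-service created
      deployment.apps/client-deployment created
      service/redis-cluster-ip-service created
      EOF
      )
      printf '%s\n' "$kinds" | sort | uniq -c
      ```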

       Check HTTP connection

      [dave@dave complex]$ minikube ip
      192.168.49.2
      [dave@dave complex]$ wget 192.168.49.2
      --2020-11-09 14:13:35-- http://192.168.49.2/
      Connecting to 192.168.49.2:80... connected.
      HTTP request sent, awaiting response... 200 OK

       Add Postgres password via k8s secret

      [dave@dave complex]$ kubectl create secret generic pgpassword --from-literal PGPASSWORD=password123
      secret/pgpassword created
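      The secret only takes effect once a deployment references it. A sketch of the `env:` stanza for the server and postgres containers; the field names follow the standard Kubernetes `secretKeyRef` API, but the surrounding deployment spec is omitted here.

      ```shell
      # Sketch: expose the pgpassword secret to a container as $PGPASSWORD.
      # Only the env: fragment of a deployment's container spec is shown.
      cat <<'EOF' > /tmp/pgpassword-env.yaml
      env:
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: pgpassword
              key: PGPASSWORD
      EOF
      grep -c 'secretKeyRef' /tmp/pgpassword-env.yaml   # 1
      ```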

       Check app in k8s dashboard

      [dave@dave complex]$ minikube dashboard
      🔌 Enabling dashboard ...
      🤔 Verifying dashboard health ...
      🚀 Launching proxy ...
      🤔 Verifying proxy health ...
      🎉 Opening http://127.0.0.1:42189/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...  

      k8s dashboard 



      Install Docker application into Google Cloud k8s


       HOWTO

      https://www.udemy.com/course/docker-and-kubernetes-the-complete-guide/

      GitHub repo

      https://github.com/StephenGrider/multi-k8s/blob/master/.travis.yml

      https://github.com/dveselka/devops-k8s/blob/main/.travis.yml

       

      Install Google Cloud SDK CLI

      [dave@dave git]$ sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
      > [google-cloud-sdk]
      > name=Google Cloud SDK
      > baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
      > enabled=1
      > gpgcheck=1
      > repo_gpgcheck=1
      > gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
      > https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
      > EOM
      [sudo] password for dave:
      [google-cloud-sdk]
      name=Google Cloud SDK
      baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
      enabled=1
      gpgcheck=1
      repo_gpgcheck=1
      gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
      https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
      [dave@dave git]$ sudo dnf install google-cloud-sdk
      Google Cloud SDK 364 B/s | 454 B 00:01
      Google Cloud SDK 15 kB/s | 1.8 kB 00:00
      Importing GPG key 0xA7317B0F:
      Userid : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
      Fingerprint: D0BC 747F D8CA F711 7500 D6FA 3746 C208 A731 7B0F
      From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
      Is this ok [y/N]: y
      Importing GPG key 0xBA07F4FB:
      Userid : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
      Fingerprint: 54A6 47F9 048D 5688 D7DA 2ABE 6A03 0B21 BA07 F4FB
      From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
      Is this ok [y/N]: y
      Google Cloud SDK 6.7 kB/s | 975 B 00:00
      Importing GPG key 0x3E1BA8D5:
      Userid : "Google Cloud Packages RPM Signing Key <gc-team@google.com>"
      Fingerprint: 3749 E1BA 95A8 6CE0 5454 6ED2 F09C 394C 3E1B A8D5
      From : https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
      Is this ok [y/N]: y
      Google Cloud SDK 7.0 MB/s | 22 MB 00:03
      Last metadata expiration check: 0:00:06 ago on Mon 09 Nov 2020 07:32:03 PM CET.
      Dependencies resolved.
      ===================================================================================================================================================================================================
      Package                     Architecture        Version           Repository                Size
      ===================================================================================================================================================================================================
      Installing:
       google-cloud-sdk           x86_64              317.0.0-1         google-cloud-sdk          72 M

      Transaction Summary
      ===================================================================================================================================================================================================
      Install 1 Package

      Total download size: 72 M
      Installed size: 359 M
      Is this ok [y/N]:


       

      Init Google Cloud CLI

      [dave@dave git]$ gcloud init
      Welcome! This command will take you through the configuration of gcloud.

      Your current configuration has been set to: [default]

      You can skip diagnostics next time by using the following flag:
      gcloud init --skip-diagnostics

      Network diagnostic detects and fixes local network connection issues.
      Checking network connection...done.
      Reachability Check passed.
      Network diagnostic passed (1/1 checks passed).

      You must log in to continue. Would you like to log in (Y/n)?


      Install Travis CLI

           $ sudo dnf install ruby 

      $ ruby --version
      ruby 2.7.2p137 (2020-10-01 revision 5445e04352) [x86_64-linux]

      git clone https://github.com/travis-ci/travis.rb
      Cloning into 'travis.rb'...

      Install travis

      $ gem install travis  
      Fetching multipart-post-2.1.1.gem
      Fetching ruby2_keywords-0.0.2.gem
      Fetching faraday-1.1.0.gem
      Fetching faraday_middleware-1.0.0.gem
      Fetching highline-2.0.3.gem
      Fetching concurrent-ruby-1.1.7.gem
      Fetching i18n-1.8.5.gem
      Fetching thread_safe-0.3.6.gem
      Fetching tzinfo-1.2.8.gem
      Fetching minitest-5.14.2.gem
      Fetching activesupport-5.2.4.4.gem
      Fetching multi_json-1.15.0.gem
      Fetching public_suffix-4.0.6.gem
      Fetching addressable-2.7.0.gem
      Fetching net-http-persistent-2.9.4.gem
      Fetching net-http-pipeline-1.0.1.gem
      Fetching travis-1.10.0.gem
      Fetching gh-0.18.0.gem
      Fetching launchy-2.4.3.gem
      Fetching json_pure-2.3.1.gem
      Fetching websocket-1.2.8.gem
      Fetching pusher-client-0.6.2.gem
      Successfully installed multipart-post-2.1.1
      Successfully installed ruby2_keywords-0.0.2
      Successfully installed faraday-1.1.0
      Successfully installed faraday_middleware-1.0.0
      Successfully installed highline-2.0.3
      Successfully installed concurrent-ruby-1.1.7

      HEADS UP! i18n 1.1 changed fallbacks to exclude default locale.
      But that may break your application.

      If you are upgrading your Rails application from an older version of Rails:

      Please check your Rails app for 'config.i18n.fallbacks = true'.
      If you're using I18n (>= 1.1.0) and Rails (< 5.2.2), this should be
      'config.i18n.fallbacks = [I18n.default_locale]'.
      If not, fallbacks will be broken in your app by I18n 1.1.x.

      If you are starting a NEW Rails application, you can ignore this notice.

      For more info see:
      https://github.com/svenfuchs/i18n/releases/tag/v1.1.0

      Successfully installed i18n-1.8.5
      Successfully installed thread_safe-0.3.6
      Successfully installed tzinfo-1.2.8
      Successfully installed minitest-5.14.2
      Successfully installed activesupport-5.2.4.4
      Successfully installed multi_json-1.15.0
      Successfully installed public_suffix-4.0.6
      Successfully installed addressable-2.7.0
      Successfully installed net-http-persistent-2.9.4
      Successfully installed net-http-pipeline-1.0.1
      Successfully installed gh-0.18.0
      Successfully installed launchy-2.4.3
      Successfully installed json_pure-2.3.1
      Successfully installed websocket-1.2.8
      Successfully installed pusher-client-0.6.2
      Successfully installed travis-1.10.0
      Parsing documentation for multipart-post-2.1.1
      Installing ri documentation for multipart-post-2.1.1
      Parsing documentation for ruby2_keywords-0.0.2
      Installing ri documentation for ruby2_keywords-0.0.2
      Parsing documentation for faraday-1.1.0
      Installing ri documentation for faraday-1.1.0
      Parsing documentation for faraday_middleware-1.0.0
      Installing ri documentation for faraday_middleware-1.0.0
      Parsing documentation for highline-2.0.3
      Installing ri documentation for highline-2.0.3
      Parsing documentation for concurrent-ruby-1.1.7
      Installing ri documentation for concurrent-ruby-1.1.7
      Parsing documentation for i18n-1.8.5
      Installing ri documentation for i18n-1.8.5
      Parsing documentation for thread_safe-0.3.6
      Installing ri documentation for thread_safe-0.3.6
      Parsing documentation for tzinfo-1.2.8
      Installing ri documentation for tzinfo-1.2.8
      Parsing documentation for minitest-5.14.2
      Installing ri documentation for minitest-5.14.2
      Parsing documentation for activesupport-5.2.4.4
      Installing ri documentation for activesupport-5.2.4.4
      Parsing documentation for multi_json-1.15.0
      Installing ri documentation for multi_json-1.15.0
      Parsing documentation for public_suffix-4.0.6
      Installing ri documentation for public_suffix-4.0.6
      Parsing documentation for addressable-2.7.0
      Installing ri documentation for addressable-2.7.0
      Parsing documentation for net-http-persistent-2.9.4
      Installing ri documentation for net-http-persistent-2.9.4
      Parsing documentation for net-http-pipeline-1.0.1
      Installing ri documentation for net-http-pipeline-1.0.1
      Parsing documentation for gh-0.18.0
      Installing ri documentation for gh-0.18.0
      Parsing documentation for launchy-2.4.3
      Installing ri documentation for launchy-2.4.3
      Parsing documentation for json_pure-2.3.1
      Installing ri documentation for json_pure-2.3.1
      Parsing documentation for websocket-1.2.8
      Installing ri documentation for websocket-1.2.8
      Parsing documentation for pusher-client-0.6.2
      Installing ri documentation for pusher-client-0.6.2
      Parsing documentation for travis-1.10.0
      Installing ri documentation for travis-1.10.0
      Done installing documentation for multipart-post, ruby2_keywords, faraday, faraday_middleware, highline, concurrent-ruby, i18n, thread_safe, tzinfo, minitest, activesupport, multi_json, public_suffix, addressable, net-http-persistent, net-http-pipeline, gh, launchy, json_pure, websocket, pusher-client, travis after 15 seconds
      22 gems installed

       
      Login to travis

      [dave@dave bin]$ travis login
      Shell completion not installed. Would you like to install it now? |y| y
      We need your GitHub login to identify you.
      This information will not be sent to Travis CI, only to api.github.com.
      The password will not be displayed.

      Try running with --github-token or --auto if you don't want to enter your password anyway.

      Username: dveselka
      Password for dveselka: ********
      Successfully logged in as dveselka!

      Encrypt service account for Travis
      [dave@dave complex]$ travis encrypt-file service-account.json -r dveselka/devops-k8s
      encrypting service-account.json for dveselka/devops-k8s
      storing result as service-account.json.enc
      storing secure env variables for decryption

      Please add the following to your build script (before_install stage in your .travis.yml, for instance):

      openssl aes-256-cbc -K $encrypted_9f3b5599b056_key -iv $encrypted_9f3b5599b056_iv -in service-account.json.enc -out service-account.json -d

      Pro Tip: You can add it automatically by running with --add.

      Make sure to add service-account.json.enc to the git repository.
      Make sure not to add service-account.json to the git repository.
      Commit all changes to your .travis.yml.
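      Putting the pieces together, the decrypt line typically lands in .travis.yml roughly like this. This is a sketch in the spirit of the linked multi-k8s config, not a verbatim copy; the SDK install lines are assumptions:

      ```yaml
      before_install:
        # Decrypt the GCP service account (variable names from the travis output above)
        - openssl aes-256-cbc -K $encrypted_9f3b5599b056_key -iv $encrypted_9f3b5599b056_iv -in service-account.json.enc -out service-account.json -d
        # Install and authenticate the Google Cloud SDK
        - curl https://sdk.cloud.google.com | bash > /dev/null
        - source $HOME/google-cloud-sdk/path.bash.inc
        - gcloud auth activate-service-account --key-file service-account.json
      ```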

      Configure GCP project names and zone

       https://cloud.google.com/compute/docs/regions-zones 

       

      Create account on https://hub.docker.com/

       

       Check Travis builds  

      https://travis-ci.org/github/dveselka/devops-k8s/builds

       Add k8s secret using GCP shell

       

        Set Postgres password

      daniel_veselka@cloudshell:~ (genial-acronym-295114)$  kubectl create secret generic pgpassword --from-literal POSTGRESS_PASSWORD=password123
      secret/pgpassword created

      Install Ingress using Helm on Google cloud

      $
      0
      0

       HOWTO

       https://helm.sh/docs/intro/install/#from-script

       https://helm.sh/docs/intro/using_helm/

       https://github.com/kubernetes/ingress-nginx

      https://kubernetes.github.io/ingress-nginx/deploy/#using-helm


      Install Helm

      daniel_veselka@cloudshell:~ (genial-acronym-295114)$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
      daniel_veselka@cloudshell:~ (genial-acronym-295114)$ chmod 700 get_helm.sh
      daniel_veselka@cloudshell:~ (genial-acronym-295114)$ ./get_helm.sh
      Helm v3.4.0 is available. Changing from version v3.2.1.
      Downloading https://get.helm.sh/helm-v3.4.0-linux-amd64.tar.gz
      Verifying checksum... Done.
      Preparing to install helm into /usr/local/bin
      helm installed into /usr/local/bin/helm
      daniel_veselka@cloudshell:~ (genial-acronym-295114)$ helm version
      version.BuildInfo{Version:"v3.4.0", GitCommit:"7090a89efc8a18f3d8178bf47d2462450349a004", GitTreeState:"clean", GoVersion:"go1.14.10"}

      Install Ingress

      helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
      helm install my-release ingress-nginx/ingress-nginx

      daniel_veselka@cloudshell:~ (genial-acronym-295114)$ POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
      daniel_veselka@cloudshell:~ (genial-acronym-295114)$ kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version
      -------------------------------------------------------------------------------
      NGINX Ingress controller
      Release: v0.41.0
      Build: f3a6b809bd4bb6608266b35cf5b8423bf107d7bc
      Repository: https://github.com/kubernetes/ingress-nginx
      nginx version: nginx/1.19.4

      -------------------------------------------------------------------------------
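      Once the release is up, the controller's LoadBalancer EXTERNAL-IP is what a DNS record should point at. A sketch of pulling it out of `kubectl get svc` output; the sample line is inlined so it runs without a cluster (service name derived from the `my-release` release name above, IPs invented).

      ```shell
      # Field 4 of `kubectl get svc --no-headers` is EXTERNAL-IP for a
      # LoadBalancer service. Sample line inlined; IPs are placeholders.
      svc_line='my-release-ingress-nginx-controller LoadBalancer 10.3.250.11 35.205.111.23 80:31026/TCP,443:30399/TCP 5m'
      external_ip=$(printf '%s\n' "$svc_line" | awk '{print $4}')
      echo "$external_ip"   # 35.205.111.23
      ```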

      Configure user access - service accounts and roles

      k8s namespaces on GCP

      daniel_veselka@cloudshell:~ (genial-acronym-295114)$ kubectl get namespaces
      NAME              STATUS   AGE
      default           Active   15h
      kube-node-lease   Active   15h
      kube-public       Active   15h
      kube-system       Active   15h

      Configure roles (not required with Helm 3, since Tiller was removed)

      daniel_veselka@cloudshell:~ (genial-acronym-295114)$ kubectl create serviceaccount --namespace kube-system tiller
      serviceaccount/tiller created
      daniel_veselka@cloudshell:~ (genial-acronym-295114)$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
      clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created


      HTTPS configuration

      • https://github.com/StephenGrider/multi-k8s/blob/master/k8s/certificate.yaml
      • https://github.com/StephenGrider/multi-k8s/blob/master/k8s/issuer.yaml

      Deploy application on Google Cloud k8s using Travis


      Google Cloud code IDE extension


       https://cloud.google.com/code/docs/vscode

       

      Install VS Code https://code.visualstudio.com/docs/setup/linux#_installation

      rpm --import https://packages.microsoft.com/keys/microsoft.asc
      sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc"> /etc/yum.repos.d/vscode.repo'
      dnf check-update
      dnf install code

      Install Google Code extension into VS Code

      Open k8s project 

       

      Install Jenkins via Ansible


      HOWTO 


      • Install Jenkins using Galaxy role  https://training.galaxyproject.org/training-material/topics/admin/tutorials/jenkins/tutorial.html
      • Configure nodes  https://www.howtoforge.com/tutorial/ubuntu-jenkins-master-slave/



       Clone the repository with the playbook

      https://github.com/dveselka/devops-ansible/tree/master/jenkins

      [dave@dave ~]$ git clone https://github.com/dveselka/devops-ansible

      Install Ansible Galaxy roles
      [dave@dave ~]$ cd devops-ansible/
      [dave@dave devops-ansible]$ cd jenkins/
      [dave@dave jenkins]$ ls
      create_jenkins.yml dev_vars.yml README.md requirements.yml
      [dave@dave jenkins]$ ansible-galaxy install -p roles -r requirements.yml
      - downloading role 'java', owned by geerlingguy
      - downloading role from https://github.com/geerlingguy/ansible-role-java/archive/1.10.0.tar.gz
      - extracting geerlingguy.java to /home/dave/devops-ansible/jenkins/roles/geerlingguy.java
      - geerlingguy.java (1.10.0) was installed successfully
      - downloading role 'jenkins', owned by geerlingguy
      - downloading role from https://github.com/geerlingguy/ansible-role-jenkins/archive/4.3.0.tar.gz
      - extracting geerlingguy.jenkins to /home/dave/devops-ansible/jenkins/roles/geerlingguy.jenkins
      - geerlingguy.jenkins (4.3.0) was installed successfully
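      Based on the versions pulled above, the requirements.yml in the repo presumably pins the two roles like this (a sketch; the file in the repo is authoritative):

      ```yaml
      # requirements.yml sketch matching the role versions installed above
      - src: geerlingguy.java
        version: 1.10.0
      - src: geerlingguy.jenkins
        version: 4.3.0
      ```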


      Check Java version
      [dave@dave jenkins]$ java -version
      java version "11.0.9" 2020-10-20 LTS
      Java(TM) SE Runtime Environment 18.9 (build 11.0.9+7-LTS)
      Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.9+7-LTS, mixed mode)



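      The play name and task output imply a playbook along these lines (a sketch; the variable values mirror the init-file changes in the transcript, but the repo's create_jenkins.yml is authoritative):

      ```yaml
      # Sketch of create_jenkins.yml: install Java, then Jenkins, on localhost.
      # jenkins_java_options and jenkins_http_port are documented variables of
      # the geerlingguy.jenkins role; the values here are assumptions.
      - name: Install Jenkins on localhost
        hosts: localhost
        become: true
        vars:
          java_packages:
            - java-11-openjdk
          jenkins_http_port: 8080
          jenkins_java_options: "-Xmx4096M"
        roles:
          - geerlingguy.java
          - geerlingguy.jenkins
      ```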
      Run Ansible playbook
      [dave@dave jenkins]$ ansible-playbook -K  create_jenkins.yml 
      BECOME password:
      [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

      PLAY [Install Jenkins on localhost] *****************************************************************************************************************************************************************************

      TASK [Gathering Facts] ******************************************************************************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.java : Include OS-specific variables for Fedora or FreeBSD.] **********************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.java : Include version-specific variables for CentOS/RHEL.] ***********************************************************************************************************************************
      skipping: [localhost]

      TASK [geerlingguy.java : Include version-specific variables for Ubuntu.] ****************************************************************************************************************************************
      skipping: [localhost]

      TASK [geerlingguy.java : Include version-specific variables for Debian.] ****************************************************************************************************************************************
      skipping: [localhost]

      TASK [geerlingguy.java : Define java_packages.] *****************************************************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.java : include_tasks] *************************************************************************************************************************************************************************
      included: /home/dave/devops-ansible/jenkins/roles/geerlingguy.java/tasks/setup-RedHat.yml for localhost

      TASK [geerlingguy.java : Ensure Java is installed.] *************************************************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.java : include_tasks] *************************************************************************************************************************************************************************
      skipping: [localhost]

      TASK [geerlingguy.java : include_tasks] *************************************************************************************************************************************************************************
      skipping: [localhost]

      TASK [geerlingguy.java : Set JAVA_HOME if configured.] **********************************************************************************************************************************************************
      changed: [localhost]

      TASK [geerlingguy.jenkins : Include OS-Specific variables] ******************************************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.jenkins : Define jenkins_repo_url] ************************************************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.jenkins : Define jenkins_repo_key_url] ********************************************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.jenkins : Define jenkins_pkg_url] *************************************************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.jenkins : include_tasks] **********************************************************************************************************************************************************************
      included: /home/dave/devops-ansible/jenkins/roles/geerlingguy.jenkins/tasks/setup-RedHat.yml for localhost

      TASK [geerlingguy.jenkins : Ensure dependencies are installed.] *************************************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.jenkins : Ensure Jenkins repo is installed.] **************************************************************************************************************************************************
      changed: [localhost]

      TASK [geerlingguy.jenkins : Add Jenkins repo GPG key.] **********************************************************************************************************************************************************
      changed: [localhost]

      TASK [geerlingguy.jenkins : Download specific Jenkins version.] *************************************************************************************************************************************************
      skipping: [localhost]

      TASK [geerlingguy.jenkins : Check if we downloaded a specific version of Jenkins.] ******************************************************************************************************************************
      skipping: [localhost]

      TASK [geerlingguy.jenkins : Install our specific version of Jenkins.] *******************************************************************************************************************************************
      skipping: [localhost]

      TASK [geerlingguy.jenkins : Ensure Jenkins is installed.] *******************************************************************************************************************************************************
      changed: [localhost]

      TASK [geerlingguy.jenkins : include_tasks] **********************************************************************************************************************************************************************
      skipping: [localhost]

      TASK [geerlingguy.jenkins : include_tasks] **********************************************************************************************************************************************************************
      included: /home/dave/devops-ansible/jenkins/roles/geerlingguy.jenkins/tasks/settings.yml for localhost

      TASK [geerlingguy.jenkins : Check if jenkins_init_file exists.] *************************************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.jenkins : Ensure jenkins_init_file exists.] ***************************************************************************************************************************************************
      skipping: [localhost]

      TASK [geerlingguy.jenkins : Modify variables in init file.] *****************************************************************************************************************************************************
      changed: [localhost] => (item={'option': 'JENKINS_ARGS', 'value': '--prefix='})
      changed: [localhost] => (item={'option': 'JENKINS_JAVA_OPTIONS', 'value': '-Xmx4096M'})

      TASK [geerlingguy.jenkins : Ensure jenkins_home /var/lib/jenkins exists.] ***************************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.jenkins : Set the Jenkins home directory.] ****************************************************************************************************************************************************
      changed: [localhost]

      TASK [geerlingguy.jenkins : Immediately restart Jenkins on init config changes.] ********************************************************************************************************************************
      changed: [localhost]

      TASK [geerlingguy.jenkins : Set HTTP port in Jenkins config.] ***************************************************************************************************************************************************
      changed: [localhost]

      TASK [geerlingguy.jenkins : Create custom init scripts directory.] **********************************************************************************************************************************************
      changed: [localhost]

      TASK [geerlingguy.jenkins : Configure proxy config for Jenkins] *************************************************************************************************************************************************
      skipping: [localhost]

      RUNNING HANDLER [geerlingguy.jenkins : configure default users] *************************************************************************************************************************************************
      changed: [localhost]

      TASK [geerlingguy.jenkins : Immediately restart Jenkins on http or user changes.] *******************************************************************************************************************************
      changed: [localhost]

      TASK [geerlingguy.jenkins : Ensure Jenkins is started and runs on startup.] *************************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.jenkins : Wait for Jenkins to start up before proceeding.] ************************************************************************************************************************************
      FAILED - RETRYING: Wait for Jenkins to start up before proceeding. (60 retries left).
      FAILED - RETRYING: Wait for Jenkins to start up before proceeding. (59 retries left).
      FAILED - RETRYING: Wait for Jenkins to start up before proceeding. (58 retries left).
      ok: [localhost]

      TASK [geerlingguy.jenkins : Get the jenkins-cli jarfile from the Jenkins server.] *******************************************************************************************************************************
      changed: [localhost]

      TASK [geerlingguy.jenkins : Remove Jenkins security init scripts after first startup.] **************************************************************************************************************************
      changed: [localhost]

      TASK [geerlingguy.jenkins : include_tasks] **********************************************************************************************************************************************************************
      included: /home/dave/devops-ansible/jenkins/roles/geerlingguy.jenkins/tasks/plugins.yml for localhost

      TASK [geerlingguy.jenkins : Get Jenkins admin password from file.] **********************************************************************************************************************************************
      skipping: [localhost]

      TASK [geerlingguy.jenkins : Set Jenkins admin password fact.] ***************************************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.jenkins : Create Jenkins updates directory.] **************************************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.jenkins : Download current plugin updates from Jenkins update site.] **************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.jenkins : Remove first and last line from json file.] *****************************************************************************************************************************************
      ok: [localhost]

      TASK [geerlingguy.jenkins : Install Jenkins plugins using password.] ********************************************************************************************************************************************

      PLAY RECAP ******************************************************************************************************************************************************************************************************
      localhost : ok=34 changed=13 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0


      Configure nodes 

      https://www.howtoforge.com/tutorial/ubuntu-jenkins-master-slave/


