After installing OpenShift Container Platform, you can further expand and configure your cluster to your needs, including preparing it for users. An OAuth server is integrated into the control plane of OpenShift Container Platform. To authenticate themselves to the API, administrators and developers obtain OAuth access tokens.
After installing your cluster, you can configure OAuth as an administrator to specify an identity provider. By default, only the kubeadmin user exists on the cluster. To specify an identity provider, you must add a custom resource (CR) to the cluster that describes that identity provider.
Identity Providers
You can configure the following types of identity providers:
- htpasswd: Configure the htpasswd identity provider to validate usernames and passwords against a flat file generated using htpasswd.
- Keystone: Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database.
- LDAP: Configure the ldap identity provider to validate usernames and passwords against an LDAPv3 server, using simple bind authentication.
- Basic authentication: Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic backend integration mechanism.
- Request header: Configure a request-header identity provider to identify users from request header values, such as X-Remote-User. It is typically used in combination with an authenticating proxy, which sets the request header value.
- GitHub: Configure a GitHub identity provider to validate usernames and passwords against GitHub or GitHub Enterprise’s OAuth authentication server.
- GitLab: Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider.
- Google: Configure a google identity provider using Google’s OpenID Connect integration.
- OpenID Connect: Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow.
Parameters
name: The provider name is prefixed to provider usernames to form an identity name.
mappingMethod: Defines how new identities are mapped to users when they log in. Enter one of the following values:
- claim: The default value. Provisions a user with the identity's preferred username. Fails if a user with that username is already mapped to another identity.
- lookup: Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up users and identities manually, or by using an external process. When using this method, you must provision users manually.
- add: Provisions a user with the identity's preferred username. If a user with that username already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. This is required when multiple identity providers are configured that identify the same set of users and map to the same usernames. When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add.
Identity provider CR
The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_identity_provider # This provider name is prefixed to provider usernames to form an identity name.
    mappingMethod: claim # Controls how mappings are established between this provider's identities and User objects.
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret # An existing secret containing a file generated using htpasswd.
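To put this CR into effect for the htpasswd provider, a typical flow is to generate the flat file with the htpasswd utility, store it as a secret in the openshift-config namespace, and then apply the CR. A minimal sketch, where users.htpasswd, user1, MyPassword!, and oauth.yaml are assumed names for this example:
$ htpasswd -c -B -b users.htpasswd user1 MyPassword!
$ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config
$ oc apply -f oauth.yaml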
Role-based Access Control
Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project.
Cluster administrators can use the cluster roles and bindings to control access to OpenShift Container Platform itself and to all projects.
Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is about determining the identity of who is taking the action.
- Cluster RBAC: Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles.
- Local RBAC: Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles.
A cluster role binding exists at the cluster level; a role binding exists at the project level. For a user to be able to view a project, the cluster role view must be bound to the user by using a local role binding. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation.
This two-level hierarchy enables reuse across multiple projects through the cluster roles, while allowing customization inside individual projects through local roles.
During evaluation, both the cluster role bindings and the local role bindings are used: cluster-wide allow rules are checked first, then locally bound allow rules; any action not allowed by either is denied by default.
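Because evaluation combines both levels, you can check the effective result for the current user with oc auth can-i; a minimal sketch, where joe-project is an assumed project name:
$ oc auth can-i create pods -n joe-project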
In Red Hat OpenShift, there are several default cluster roles that you can bind to users and groups either cluster-wide or locally. Here are some of the key default cluster roles:
- admin: A project manager role that allows viewing and modifying any resource within a project, except for quotas.
- basic-user: A role that grants basic information access about projects and users.
- cluster-admin: A super-user role that can perform any action on any resource across all projects.
- cluster-status: A role that allows viewing basic cluster status information.
- edit: A role that allows making changes to most resources within a project.
- view: A role that allows viewing most resources within a project without making changes.
These default roles cover the common access patterns within the OpenShift environment. For the detailed set of permissions associated with each role, refer to the OpenShift documentation.
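You can also inspect a role's exact rules directly from the cluster; a minimal sketch using the default edit role as an example:
$ oc describe clusterrole.rbac edit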
Be mindful of the difference between local and cluster bindings. For example, if you bind the cluster-admin role to a user by using a local role binding, it might appear that this user has the privileges of a cluster administrator. This is not the case. Binding the cluster-admin to a user in a project grants super administrator privileges for only that project to the user. That user has the permissions of the cluster role admin, plus a few additional permissions like the ability to edit rate limits, for that project. This binding can be confusing via the web console UI, which does not list cluster role bindings that are bound to true cluster administrators. However, it does list local role bindings that you can use to locally bind cluster-admin.
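To make the local-versus-cluster distinction concrete, the following sketch contrasts the two ways of binding the same role; the user susan and the project my-project are assumed names for this example:
$ oc adm policy add-cluster-role-to-user cluster-admin susan
$ oc adm policy add-role-to-user cluster-admin susan -n my-project
The first command makes susan a true cluster administrator; the second grants cluster-admin permissions only inside my-project.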
View cluster roles and bindings
To view the cluster roles and their associated rule sets:
$ oc describe clusterrole.rbac
Name: admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
.packages.apps.redhat.com [] [] [* create update patch delete get list watch]
imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch]
imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch]
secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update]
buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch]
buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch]
buildlogs [] [] [create delete deletecollection get list patch update watch get list watch]
deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch]
deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch]
imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch]
imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch]
imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch]
processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch]
routes [] [] [create delete deletecollection get list patch update watch get list watch]
templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch]
templateinstances [] [] [create delete deletecollection get list patch update watch get list watch]
templates [] [] [create delete deletecollection get list patch update watch get list watch]
deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch]
deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch]
buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch]
imagestreams/secrets [] [] [create delete deletecollection get list patch update watch]
rolebindings [] [] [create delete deletecollection get list patch update watch]
roles [] [] [create delete deletecollection get list patch update watch]
rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch]
roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch]
imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch]
rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch]
roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch]
networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch]
networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch]
configmaps [] [] [create delete deletecollection patch update get list watch]
endpoints [] [] [create delete deletecollection patch update get list watch]
persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch]
pods [] [] [create delete deletecollection patch update get list watch]
replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch]
replicationcontrollers [] [] [create delete deletecollection patch update get list watch]
services [] [] [create delete deletecollection patch update get list watch]
daemonsets.apps [] [] [create delete deletecollection patch update get list watch]
deployments.apps/scale [] [] [create delete deletecollection patch update get list watch]
deployments.apps [] [] [create delete deletecollection patch update get list watch]
replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch]
replicasets.apps [] [] [create delete deletecollection patch update get list watch]
statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch]
statefulsets.apps [] [] [create delete deletecollection patch update get list watch]
horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch]
cronjobs.batch [] [] [create delete deletecollection patch update get list watch]
jobs.batch [] [] [create delete deletecollection patch update get list watch]
daemonsets.extensions [] [] [create delete deletecollection patch update get list watch]
deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch]
deployments.extensions [] [] [create delete deletecollection patch update get list watch]
ingresses.extensions [] [] [create delete deletecollection patch update get list watch]
replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch]
replicasets.extensions [] [] [create delete deletecollection patch update get list watch]
replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch]
poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch]
deployments.apps/rollback [] [] [create delete deletecollection patch update]
deployments.extensions/rollback [] [] [create delete deletecollection patch update]
catalogsources.operators.coreos.com [] [] [create update patch delete get list watch]
clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch]
installplans.operators.coreos.com [] [] [create update patch delete get list watch]
packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch]
subscriptions.operators.coreos.com [] [] [create update patch delete get list watch]
buildconfigs/instantiate [] [] [create]
buildconfigs/instantiatebinary [] [] [create]
builds/clone [] [] [create]
deploymentconfigrollbacks [] [] [create]
deploymentconfigs/instantiate [] [] [create]
deploymentconfigs/rollback [] [] [create]
imagestreamimports [] [] [create]
localresourceaccessreviews [] [] [create]
localsubjectaccessreviews [] [] [create]
podsecuritypolicyreviews [] [] [create]
podsecuritypolicyselfsubjectreviews [] [] [create]
podsecuritypolicysubjectreviews [] [] [create]
resourceaccessreviews [] [] [create]
routes/custom-host [] [] [create]
subjectaccessreviews [] [] [create]
subjectrulesreviews [] [] [create]
deploymentconfigrollbacks.apps.openshift.io [] [] [create]
deploymentconfigs.apps.openshift.io/instantiate [] [] [create]
deploymentconfigs.apps.openshift.io/rollback [] [] [create]
localsubjectaccessreviews.authorization.k8s.io [] [] [create]
localresourceaccessreviews.authorization.openshift.io [] [] [create]
localsubjectaccessreviews.authorization.openshift.io [] [] [create]
resourceaccessreviews.authorization.openshift.io [] [] [create]
subjectaccessreviews.authorization.openshift.io [] [] [create]
subjectrulesreviews.authorization.openshift.io [] [] [create]
buildconfigs.build.openshift.io/instantiate [] [] [create]
buildconfigs.build.openshift.io/instantiatebinary [] [] [create]
builds.build.openshift.io/clone [] [] [create]
imagestreamimports.image.openshift.io [] [] [create]
routes.route.openshift.io/custom-host [] [] [create]
podsecuritypolicyreviews.security.openshift.io [] [] [create]
podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create]
podsecuritypolicysubjectreviews.security.openshift.io [] [] [create]
jenkins.build.openshift.io [] [] [edit view view admin edit view]
builds [] [] [get create delete deletecollection get list patch update watch get list watch]
builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch]
projects [] [] [get delete get delete get patch update]
projects.project.openshift.io [] [] [get delete get delete get patch update]
namespaces [] [] [get get list watch]
pods/attach [] [] [get list watch create delete deletecollection patch update]
pods/exec [] [] [get list watch create delete deletecollection patch update]
pods/portforward [] [] [get list watch create delete deletecollection patch update]
pods/proxy [] [] [get list watch create delete deletecollection patch update]
services/proxy [] [] [get list watch create delete deletecollection patch update]
routes/status [] [] [get list watch update]
routes.route.openshift.io/status [] [] [get list watch update]
appliedclusterresourcequotas [] [] [get list watch]
bindings [] [] [get list watch]
builds/log [] [] [get list watch]
deploymentconfigs/log [] [] [get list watch]
deploymentconfigs/status [] [] [get list watch]
events [] [] [get list watch]
imagestreams/status [] [] [get list watch]
limitranges [] [] [get list watch]
namespaces/status [] [] [get list watch]
pods/log [] [] [get list watch]
pods/status [] [] [get list watch]
replicationcontrollers/status [] [] [get list watch]
resourcequotas/status [] [] [get list watch]
resourcequotas [] [] [get list watch]
resourcequotausages [] [] [get list watch]
rolebindingrestrictions [] [] [get list watch]
deploymentconfigs.apps.openshift.io/log [] [] [get list watch]
deploymentconfigs.apps.openshift.io/status [] [] [get list watch]
controllerrevisions.apps [] [] [get list watch]
rolebindingrestrictions.authorization.openshift.io [] [] [get list watch]
builds.build.openshift.io/log [] [] [get list watch]
imagestreams.image.openshift.io/status [] [] [get list watch]
appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch]
imagestreams/layers [] [] [get update get]
imagestreams.image.openshift.io/layers [] [] [get update get]
builds/details [] [] [update]
builds.build.openshift.io/details [] [] [update]
Name: basic-user
Labels: <none>
Annotations: openshift.io/description: A user that can get basic information about projects.
rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
selfsubjectrulesreviews [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.openshift.io [] [] [create]
clusterroles.rbac.authorization.k8s.io [] [] [get list watch]
clusterroles [] [] [get list]
clusterroles.authorization.openshift.io [] [] [get list]
storageclasses.storage.k8s.io [] [] [get list]
users [] [~] [get]
users.user.openshift.io [] [~] [get]
projects [] [] [list watch]
projects.project.openshift.io [] [] [list watch]
projectrequests [] [] [list]
projectrequests.project.openshift.io [] [] [list]
Name: cluster-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
*.* [] [] [*]
[*] [] [*]
...
To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles:
$ oc describe clusterrolebinding.rbac
Name: alertmanager-main
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: alertmanager-main
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount alertmanager-main openshift-monitoring
Name: basic-users
Labels: <none>
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
Role:
Kind: ClusterRole
Name: basic-user
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:authenticated
Name: cloud-credential-operator-rolebinding
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: cloud-credential-operator-role
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount default openshift-cloud-credential-operator
Name: cluster-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
Role:
Kind: ClusterRole
Name: cluster-admin
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:masters
Name: cluster-admins
Labels: <none>
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
Role:
Kind: ClusterRole
Name: cluster-admin
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:cluster-admins
User system:admin
Name: cluster-api-manager-rolebinding
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: cluster-api-manager-role
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount default openshift-machine-api
...
View local roles and bindings
To view the local role bindings for the current project:
$ oc describe rolebinding.rbac
To view the local role bindings for a different project, add the -n flag to the command:
$ oc describe rolebinding.rbac -n joe-project
Name: admin
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: admin
Subjects:
Kind Name Namespace
---- ---- ---------
User kube:admin
Name: system:deployers
Labels: <none>
Annotations: openshift.io/description:
Allows deploymentconfigs in this namespace to rollout pods in
this namespace. It is auto-managed by a controller; remove
subjects to disa...
Role:
Kind: ClusterRole
Name: system:deployer
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount deployer joe-project
Name: system:image-builders
Labels: <none>
Annotations: openshift.io/description:
Allows builds in this namespace to push images to this
namespace. It is auto-managed by a controller; remove subjects
to disable.
Role:
Kind: ClusterRole
Name: system:image-builder
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount builder joe-project
Name: system:image-pullers
Labels: <none>
Annotations: openshift.io/description:
Allows all pods in this namespace to pull images from this
namespace. It is auto-managed by a controller; remove subjects
to disable.
Role:
Kind: ClusterRole
Name: system:image-puller
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:serviceaccounts:joe-project
Add roles to users
To manage roles and bindings, use the oc adm administrator command-line interface. Binding, or adding, a role to a user or group grants the user or group the permissions associated with that role. You can add and remove roles to and from users and groups by using the oc adm policy commands. You can bind any of the default cluster roles to local users or groups in your project.
To add a role to a user in a specific project:
$ oc adm policy add-role-to-user <role> <user> -n <project>
For example, to add the admin role to the alice user in the joe project:
$ oc adm policy add-role-to-user admin alice -n joe
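The oc adm policy commands also support the reverse operation; for example, to undo the binding added above (same assumed user and project):
$ oc adm policy remove-role-from-user admin alice -n joe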
You can alternatively apply the following YAML to add the role to the user:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin-0
  namespace: joe
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice
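If you save this RoleBinding to a file, you can apply it with oc apply; the file name rolebinding.yaml is an assumed name for this example:
$ oc apply -f rolebinding.yaml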
View the local role bindings and verify the addition in the output:
$ oc describe rolebinding.rbac -n <project>
To view the local role bindings for the joe project:
$ oc describe rolebinding.rbac -n joe
Name: admin
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: admin
Subjects:
Kind Name Namespace
---- ---- ---------
User kube:admin
Name: admin-0
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: admin
Subjects:
Kind Name Namespace
---- ---- ---------
User alice
Name: system:deployers
Labels: <none>
Annotations: openshift.io/description:
Allows deploymentconfigs in this namespace to rollout pods in
this namespace. It is auto-managed by a controller; remove
subjects to disa...
Role:
Kind: ClusterRole
Name: system:deployer
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount deployer joe
Name: system:image-builders
Labels: <none>
Annotations: openshift.io/description:
Allows builds in this namespace to push images to this
namespace. It is auto-managed by a controller; remove subjects
to disable.
Role:
Kind: ClusterRole
Name: system:image-builder
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount builder joe
Name: system:image-pullers
Labels: <none>
Annotations: openshift.io/description:
Allows all pods in this namespace to pull images from this
namespace. It is auto-managed by a controller; remove subjects
to disable.
Role:
Kind: ClusterRole
Name: system:image-puller
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:serviceaccounts:joe
The output shows that the alice user has been added through the new admin-0 role binding.
Create a local role
Creating a local role in Red Hat OpenShift involves using the oc command-line tool. Here's a step-by-step guide to help you set up a custom local role:
Open your terminal and make sure you are logged in to your OpenShift cluster.
Create the role by using the oc create role command with the following syntax:
$ oc create role <role_name> --verb=<verb> --resource=<resource> -n <project_name>
- Replace <role_name> with the name you want to give your role.
- Replace <verb> with the action you want to allow (e.g., get, list, create, update, delete).
- Replace <resource> with the resource type (e.g., pods, services).
- Replace <project_name> with the name of the project where you want to create the role.
For example, to create a role named view-pods that allows viewing pods in the myproject namespace, you would use:
$ oc create role view-pods --verb=get --verb=list --resource=pods -n myproject
Bind the role to a user or group by using the oc create rolebinding command:
$ oc create rolebinding <rolebinding_name> --role=<role_name> --user=<username> -n <project_name>
- Replace <rolebinding_name> with a name for the role binding.
- Replace <role_name> with the name of the role you created.
- Replace <username> with the user you want to bind the role to.
- Replace <project_name> with the name of the project.
For example, to bind the view-pods role to a user named johndoe in the myproject namespace, you would use:
$ oc create rolebinding view-pods-binding --role=view-pods --user=johndoe -n myproject
This creates a custom local role and binds it to a specific user in your OpenShift project.
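To verify the result, you can describe the new role and test access as the bound user; this sketch reuses the assumed example names from above (view-pods, johndoe, myproject):
$ oc describe role view-pods -n myproject
$ oc auth can-i list pods -n myproject --as=johndoe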