CL210 Red Hat OpenStack Platform Architecture -- Introducing the undercloud
Date: 2023-08-05 03:37:00
About the author: Hi, I'm 金鱼哥, a CSDN blogger and Huawei Cloud / Alibaba Cloud community expert.
Red Hat OpenStack Platform (RHOSP) director is a standalone, all-in-one OpenStack installation that provides a complete toolset for installing and managing an OpenStack infrastructure environment. It is based primarily on the TripleO project, whose name is short for "OpenStack On OpenStack" and which develops the OpenStack deployment components. This dedicated all-in-one installation (the undercloud) uses core OpenStack components, plus additional ones, to locate, provision, and deploy bare-metal systems as the controller, compute, networking, and storage nodes of an operational OpenStack cloud (the overcloud).
Red Hat OpenStack Platform director is itself an OpenStack cloud deployed on infrastructure, and its cloud workloads are the overcloud systems themselves: controller nodes, compute nodes, and storage nodes. The undercloud can be called a bare-metal cloud, because infrastructure nodes are normally built directly on physical hardware systems. However, as you will experience in this course, for learning, testing, and specific use cases the undercloud can deploy the infrastructure onto virtual systems. Similarly, the overcloud deploys virtual machines and containers almost exclusively, but by incorporating bare-metal drivers and methods it can deploy tenant workloads directly onto special-purpose physical systems such as blade servers or enterprise rack systems. "Bare-metal cloud" and "tenant workload cloud" are therefore just a convenient frame of reference.
📜 The undercloud's seven main modules

- Identity Service (keystone)
- Image Service (glance)
- Compute Service (nova)
- Bare Metal Service (ironic)
- Orchestration Service (heat)
- Object Store (swift)
- OpenStack Networking Service (neutron)
The undercloud consists of Red Hat OpenStack Platform director itself, plus the networking and resources needed to provision and manage the overcloud's OpenStack nodes. When RHOSP director provisions the overcloud, the nodes configured as controller, compute, networking, and storage systems are treated as workloads of the undercloud. After deployment and configuration complete, these nodes reboot as the overcloud.
📜 Viewing the undercloud's Identity service
When the undercloud installation completes, the stackrc identity environment file containing the admin credentials is created in the stack user's home directory. The passwords required for all undercloud service user accounts are stored in the undercloud-passwords.conf file. You must source the stackrc file before using the OpenStack CLI client to interact with undercloud services; run source stackrc to load the relevant variables. In the output below, the stackrc file has been sourced on the undercloud and the openstack service list command is executed.
(undercloud) [stack@director ~]$ openstack service list
+----------------------------------+------------------+-------------------------+
| ID                               | Name             | Type                    |
+----------------------------------+------------------+-------------------------+
| 0175f01b32e34e048e65480466ca0df1 | placement        | placement               |
| 19e6c0e055fb4f77b5edc9db4c34941b | heat-cfn         | cloudformation          |
| 261998639f5b464fafaadcc0ff4d85d7 | zaqar-websocket  | messaging-websocket     |
| 32f1900fc5104ec0a032eb2f0bbe63b6 | ironic           | baremetal               |
| 70564b321fa349678465e01cc57e69f7 | keystone         | identity                |
| 7193f91efeb44525ae2780420f752c0f | glance           | image                   |
| 8c4172704dba426a83a3c3633553df65 | ironic-inspector | baremetal-introspection |
| 941164cdcd574ef294fc39b776141855 | heat             | orchestration           |
| 9df43eebd74740c9a5d6ad026b53146b | neutron          | network                 |
| d1f5fa857c7e40db88e48b713d28debd | swift            | object-store            |
| e8e21db59b7c47879cfa9c51c780777f | mistral          | workflowv2              |
| eade3c9269134a528a15f598ca70421b | nova             | compute                 |
| f0a5a41b6bbe4c229f13964e47f754d6 | zaqar            | messaging               |
+----------------------------------+------------------+-------------------------+
In the classroom environment, the OS_AUTH_URL environment variable in the stackrc file is set to the public endpoint of the Identity service running on the undercloud. The classroom environment uses SSL for all public endpoints of the OpenStack services on the undercloud.
(undercloud) [stack@director ~]$ openstack endpoint list -c 'Service Type' -c Interface -c URL
+-------------------------+-----------+----------------------------------------------------+
| Service Type | Interface | URL |
+-------------------------+-----------+----------------------------------------------------+
| cloudformation | internal | http://172.25.249.202:8000/v1/%(tenant_id)s |
| orchestration | admin | http://172.25.249.202:8004/v1/%(tenant_id)s |
| image | internal | http://172.25.249.202:9292 |
| baremetal | public | https://172.25.249.201:13385 |
| cloudformation | public | https://172.25.249.201:13800/v1/%(tenant_id)s |
| identity | public | https://172.25.249.201:13000 |
| messaging | internal | http://172.25.249.202:8888 |
| baremetal | internal | http://172.25.249.202:6385 |
| workflowv2 | internal | http://172.25.249.202:8989/v2 |
| orchestration | internal | http://172.25.249.202:8004/v1/%(tenant_id)s |
| identity | admin | http://172.25.249.202:35357 |
| cloudformation | admin | http://172.25.249.202:8000/v1/%(tenant_id)s |
| network | public | https://172.25.249.201:13696 |
| messaging-websocket | public | wss://172.25.249.201:9000 |
| messaging | public | https://172.25.249.201:13888 |
| placement | public | https://172.25.249.201:13778/placement |
| image | admin | http://172.25.249.202:9292 |
| workflowv2 | admin | http://172.25.249.202:8989/v2 |
| workflowv2 | public | https://172.25.249.201:13989/v2 |
| messaging-websocket | admin | ws://172.25.249.202:9000 |
| compute | internal | http://172.25.249.202:8774/v2.1 |
| baremetal-introspection | public | https://172.25.249.201:13050 |
| baremetal-introspection | admin | http://172.25.249.202:5050 |
| messaging | admin | http://172.25.249.202:8888 |
| compute | public | https://172.25.249.201:13774/v2.1 |
| baremetal | admin | http://172.25.249.202:6385 |
| placement | internal | http://172.25.249.202:8778/placement |
| baremetal-introspection | internal | http://172.25.249.202:5050 |
| placement | admin | http://172.25.249.202:8778/placement |
| object-store | internal | http://172.25.249.202:8080/v1/AUTH_%(tenant_id)s |
| compute | admin | http://172.25.249.202:8774/v2.1 |
| object-store | public | https://172.25.249.201:13808/v1/AUTH_%(tenant_id)s |
| network | internal | http://172.25.249.202:9696 |
| orchestration | public | https://172.25.249.201:13004/v1/%(tenant_id)s |
| messaging-websocket | internal | ws://172.25.249.202:9000 |
| identity | internal | http://172.25.249.202:5000 |
| network | admin | http://172.25.249.202:9696 |
| image | public | https://172.25.249.201:13292 |
| object-store | admin | http://172.25.249.202:8080 |
+-------------------------+-----------+----------------------------------------------------+
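As a quick sanity check, the SSL claim above can be verified by scanning the endpoint table: every row whose interface is public should carry a TLS-protected scheme (https or wss). The sketch below runs the check against a few rows copied from the table; on the undercloud you could pipe the live `openstack endpoint list` output through the same awk filter instead.

```shell
# Sketch: confirm that every public endpoint uses a TLS-protected scheme.
# The sample rows are copied from the endpoint table above.
endpoints='| identity            | public    | https://172.25.249.201:13000 |
| network             | public    | https://172.25.249.201:13696 |
| messaging-websocket | public    | wss://172.25.249.201:9000    |'

# Splitting on "|", field 3 is the interface and field 4 the URL.
result=$(echo "$endpoints" | awk -F'|' \
    '$3 ~ /public/ { print ($4 ~ /^ *(https|wss):/) ? "TLS ok" : "plaintext!" }')
echo "$result"
```

Each sampled public endpoint prints `TLS ok`; internal and admin endpoints, by contrast, use plain http or ws on the 172.25.249.202 admin VIP.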
📜 Identifying the undercloud's Networking service
To provision the overcloud nodes, the undercloud is paired with a provisioning network that supplies the DHCP and PXE boot functions needed to set up bare-metal nodes. The provisioning network is a high-capacity, dedicated, isolated network, separate from the normal public network. After the overcloud is deployed, Red Hat OpenStack Platform director continues to manage and update the overcloud over this isolated, secure provisioning network, fully separated from external and internal OpenStack traffic.
📑 Viewing the provisioning network
Review the undercloud.conf file used to build the undercloud to verify the provisioning network configuration. In the output below, the DHCP address range from dhcp_start to dhcp_end is the scope of the OpenStack Networking dnsmasq service that manages the provisioning subnet. Nodes deployed onto the provisioning network are assigned an IP address from this range for their provisioning NIC. The inspection_iprange is the scope of the Bare Metal dnsmasq service, used to temporarily assign addresses to requested nodes during PXE boot when the introspection process starts.
(undercloud) [stack@director ~]$ cat undercloud.conf | egrep -v '(^#.*|^$)'
[DEFAULT]
local_ip = 172.25.249.200/24
undercloud_public_vip = 172.25.249.201
undercloud_admin_vip = 172.25.249.202
undercloud_ntp_servers = 172.25.254.254
dhcp_start = 172.25.249.51
dhcp_end = 172.25.249.59
inspection_iprange = 172.25.249.150,172.25.249.180
masquerade_network = 172.25.249.0/24
undercloud_service_certificate = /etc/pki/tls/certs/undercloud.pem
generate_service_certificate = false
local_interface = eth0
network_cidr = 172.25.249.0/24
network_gateway = 172.25.249.200
hieradata_override = /home/stack/hieradata.txt
undercloud_debug = false
enable_telemetry = false
enable_ui = true
[auth]
undercloud_admin_password = redhat
[ctlplane-subnet]
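One practical point about dhcp_start and dhcp_end is that the pool size bounds how many nodes can hold a provisioning address at once. A minimal sketch, using the values copied from the undercloud.conf above and assuming both addresses share the same /24, computes the pool size with plain shell arithmetic:

```shell
# Sketch: compute how many addresses the provisioning DHCP pool holds.
# Values are copied from the undercloud.conf shown above; the arithmetic
# assumes start and end differ only in the last octet (same /24).
dhcp_start=172.25.249.51
dhcp_end=172.25.249.59
pool_size=$(( ${dhcp_end##*.} - ${dhcp_start##*.} + 1 ))
echo "provisioning DHCP pool: ${pool_size} addresses"
```

Nine addresses comfortably cover the five overcloud nodes registered in this classroom, with headroom for temporary leases.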
📑 Viewing the undercloud's network interfaces
The br-ctlplane bridge carries the 172.25.249.0 provisioning network. The eth1 interface is on the 172.25.250.0 public network.
(undercloud) [stack@director ~]$ ip addr | grep eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 172.25.250.200/24 brd 172.25.250.255 scope global noprefixroute eth1
(undercloud) [stack@director ~]$ ip addr | grep br-ctlplane
6: br-ctlplane: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
inet 172.25.249.200/24 brd 172.25.249.255 scope global br-ctlplane
inet 172.25.249.202/32 scope global br-ctlplane
inet 172.25.249.201/32 scope global br-ctlplane
The provisioning subnet is enabled for DHCP. The Networking service configures a dnsmasq instance to manage the subnet range. When an instance deployed on this subnet queries dnsmasq to obtain an IP address and a default gateway, the configured DNS nameserver value is also delivered as a DHCP option.
(undercloud) [stack@director ~]$ openstack subnet show ctlplane-subnet
+-------------------+------------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------------+
| allocation_pools | 172.25.249.51-172.25.249.59 |
| cidr | 172.25.249.0/24 |
| created_at | 2018-10-23T13:02:21Z |
| description | |
| dns_nameservers | 172.25.250.254 |
| enable_dhcp | True |
| gateway_ip | 172.25.249.200 |
| host_routes | destination='169.254.169.254/32', gateway='172.25.249.200' |
| id | 45dce459-6e9d-40dc-a4d5-ef2e91de6ec7 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | ctlplane-subnet |
| network_id | 2c9cee9a-e797-462e-ba76-efaa564b7b7f |
| project_id | f50fbd0341134b97a5a735cca5d6255c |
| revision_number | 1 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2018-10-23T13:45:37Z |
+-------------------+------------------------------------------------------------+
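The allocation_pools range shown above should always fall inside the subnet's CIDR. A minimal sketch of that consistency check, using values copied from the ctlplane-subnet output and a prefix-match shortcut that works because the subnet is a /24:

```shell
# Sketch: check that the allocation pool endpoints fall inside the subnet CIDR.
# Values copied from the ctlplane-subnet output above; the prefix match is a
# shortcut valid only for a /24 subnet.
cidr_prefix=172.25.249.
pool_start=172.25.249.51
pool_end=172.25.249.59
status=ok
for addr in "$pool_start" "$pool_end"; do
    case "$addr" in
        "$cidr_prefix"*) ;;      # address matches the /24 prefix
        *) status=outside ;;
    esac
done
echo "allocation pool: $status"
```

Note that the pool (51-59) matches the dhcp_start/dhcp_end range from undercloud.conf, confirming that this Neutron subnet was generated from that file.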
📜 Accessing the undercloud's Image service
When bare-metal nodes are first provisioned, the Bare Metal service runs IPMI commands against them. By default, nodes are configured to PXE boot. During a PXE boot, each bare-metal node requests a DHCP address, a kernel, and a ramdisk image to perform an initial network boot. These kernel and ramdisk boot images are served by the undercloud Image service. The following output lists the images required to PXE boot bare-metal nodes for hardware introspection:
(undercloud) [stack@director ~]$ openstack image list
+--------------------------------------+------------------------+--------+
| ID | Name | Status |
+--------------------------------------+------------------------+--------+
| fab32297-d1e2-4598-9e4e-6b02c8982c6f | bm-deploy-kernel | active |
| bc8408a7-3074-4c56-8992-a56637f561e0 | bm-deploy-ramdisk | active |
| ff82c9a3-eead-489f-a862-ca7b2b245a60 | overcloud-full | active |
| 86901767-c252-4711-867e-a06eb52b4bfc | overcloud-full-initrd | active |
| 4b823922-4757-468f-aad4-1e7bf0f585fa | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+
Bare-metal nodes being set up during overcloud deployment use the overcloud-full-initrd and overcloud-full-vmlinuz images to perform a network boot. The overcloud-full image is then copied to the overcloud node's boot disk. The overcloud-full image is a runnable Red Hat Enterprise Linux system with the Red Hat OpenStack Platform and Red Hat Ceph Storage packages installed but not yet configured.
📜 Identifying the undercloud's Bare Metal service
The Bare Metal service registers the nodes used to deploy the overcloud. The Workflow service (mistral) manages this registration task set, allowing multiple tasks and actions to occur simultaneously. In preparation for building a new overcloud, hardware introspection is performed using the bm-deploy-ramdisk and bm-deploy-kernel images. These two images work together to boot the intended bare-metal nodes, gather each node's capabilities, and report them to the undercloud's Bare Metal service, which updates its database and stores the node information using the Object Store service.
(undercloud) [stack@director ~]$ openstack baremetal node list -c Name -c 'Power State' -c 'Provisioning State'
+-------------+-------------+--------------------+
| Name | Power State | Provisioning State |
+-------------+-------------+--------------------+
| controller0 | power on | active |
| compute0 | power on | active |
| computehci0 | power on | active |
| compute1 | power on | active |
| ceph0 | power on | active |
+-------------+-------------+--------------------+
📜 Viewing the undercloud's Orchestration service
Red Hat OpenStack Platform director provides a complete set of working overcloud templates in the /usr/share/openstack-tripleo-heat-templates/ directory, including many optional configuration environment files. The Orchestration service manages the deployment process by communicating resource build instructions and by invoking Puppet scripts that configure the overcloud nodes according to their assigned deployment roles.
(undercloud) [stack@director ~]$ openstack stack list
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| 4e1047aa-4b3f-4c14-9a90-7eab007adaea | overcloud | f50fbd0341134b97a5a735cca5d6255c | CREATE_COMPLETE | 2018-10-23T13:54:59Z | None |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
(undercloud) [stack@director ~]$ openstack stack list -c 'Stack Name' -c 'Stack Status'
+------------+-----------------+
| Stack Name | Stack Status |
+------------+-----------------+
| overcloud | CREATE_COMPLETE |
+------------+-----------------+
📜 Viewing the undercloud's Compute service
To deploy a stack, the Orchestration service makes successive calls to the Compute service running on the undercloud. The Compute service relies on the Bare Metal service which, as described earlier, builds a catalog of introspected hardware. For each request to deploy a new overcloud node with a specific role, the compute scheduler filters the list of available nodes, ensuring the selected node meets the hardware requirements. The following output shows the Compute services running on the undercloud:
(undercloud) [stack@director ~]$ openstack compute service list
+----+----------------+--------------------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+----------------+--------------------------+----------+---------+-------+----------------------------+
| 1 | nova-conductor | director.lab.example.com | internal | enabled | up | 2020-10-13T14:30:55.000000 |
| 2 | nova-scheduler | director.lab.example.com | internal | enabled | up | 2020-10-13T14:31:00.000000 |
| 5 | nova-compute | director.lab.example.com | nova | enabled | up | 2020-10-13T14:30:54.000000 |
+----+----------------+--------------------------+----------+---------+-------+----------------------------+
📜 Power management on the undercloud
The nodes to be deployed are normally bare-metal physical systems, such as blade servers in rack systems, with physical network management interfaces for remote lights-out access and management. Each node is accessed to verify that the system has multiple NICs, and the correct CPU, RAM, and disk capacity for its assigned deployment role. In this course, the nodes are virtual machines with small-scale configurations.
Power management in cloud environments commonly uses IPMI management NICs built into server chassis. Virtual machines, however, have no lights-out management platform interface. Instead, they are controlled by virtualization management software that connects to the hypervisor running the virtual machines to request power management actions and events. In this classroom, a baseboard management controller (BMC) emulator runs on the power virtual machine, with a unique IP address for each virtual machine node. When a valid IPMI request is received on the correct listener, the BMC emulator passes the request to the hypervisor, which performs it on the corresponding virtual machine. The instackenv-initial.json file defines the MAC address, IPMI address, power management user name, and password for each bare-metal node to be registered. The node registration file can be in JSON or YAML format. The following example shows the configuration file in instackenv-initial.json format.
(undercloud) [stack@director ~]$ cat instackenv-initial.json
{
"nodes": [
{
"name": "controller0",
"arch": "x86_64",
"cpu": "2",
"disk": "40",
"memory": "8192",
"mac": [ "52:54:00:00:f9:01" ],
"pm_addr": "172.25.249.101",
"pm_type": "pxe_ipmitool",
"pm_user": "admin",
"pm_password": "password",
"pm_port": "623",
"capabilities": "node:controller0,boot_option:local"
},
{
"name": "compute0",
…………
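Because a malformed registration file causes node import to fail, it is worth validating the JSON before handing it to director. The sketch below writes a minimal, hypothetical single-node file (fields mirror the instackenv-initial.json excerpt above) and checks that it parses cleanly with Python's stdlib JSON tool:

```shell
# Sketch: validate a node-registration file before importing it.
# This minimal single-node example is hypothetical; its fields mirror the
# instackenv-initial.json excerpt above.
cat > /tmp/instackenv-check.json <<'EOF'
{
  "nodes": [
    {
      "name": "controller0",
      "arch": "x86_64",
      "mac": [ "52:54:00:00:f9:01" ],
      "pm_addr": "172.25.249.101",
      "pm_type": "pxe_ipmitool",
      "pm_user": "admin",
      "pm_password": "password",
      "pm_port": "623"
    }
  ]
}
EOF
if python3 -m json.tool /tmp/instackenv-check.json > /dev/null; then
    echo "instackenv JSON parses cleanly"
fi
```

This only checks syntax; director still verifies the pm_* credentials against each BMC during registration.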
📑 Performing IPMI power management
The power virtual machine acts as the IPMI hardware layer for each bare-metal node. Correctly structured IPMI commands sent to a listener are translated by the IPMI emulator into requests against the underlying hypervisor system, which performs the operation on the requested node.
The classroom does not need the full IPMI feature set, only the ability to power-cycle or power on nodes programmatically on demand. The command-line utility used to test the power VM's IPMI emulation uses the following syntax:
(undercloud) [stack@director ~]$ ipmitool -I lanplus -U admin -P password -H <bmc_address> power status|on|off
The -I interface options are compiled into the ipmitool command and can be listed with ipmitool -h. The lanplus option selects the IPMI v2.0 RMCP+ LAN interface.
For example, run the following command to view the power status of the controller0 node:
(undercloud) [stack@director ~]$ ipmitool -I lanplus -U admin -P password -H 172.25.249.101 power status
Chassis Power is on
The OpenStack CLI interacts with the Bare Metal service through the openstack baremetal command, which performs operations on bare-metal nodes registered with the service. Use the openstack baremetal node power on command to power on a registered bare-metal node.
(undercloud) [stack@director ~]$ openstack baremetal node power on compute0
Conversely, the command to power the node off is:
(undercloud) [stack@director ~]$ openstack baremetal node power off compute0
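When checking several nodes, a small loop over the BMC addresses saves typing. The sketch below assumes the classroom's 172.25.249.10X addressing convention (only .101 for controller0 appears in the excerpts above; the other addresses are illustrative) and echoes each command rather than running it, so the loop can be inspected without a live BMC; drop the echo to actually issue the queries.

```shell
# Sketch: build the IPMI status command for several node BMCs at once.
# Addresses beyond 172.25.249.101 are assumed for illustration; `echo`
# prints the commands instead of contacting a BMC.
cmds=$(for bmc in 172.25.249.101 172.25.249.102 172.25.249.103; do
    echo "ipmitool -I lanplus -U admin -P password -H $bmc power status"
done)
echo "$cmds"
```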
📜 Textbook exercise
[student@workstation ~]$ lab architecture-undercloud setup
Setting up workstation for lab exercise work:
. Verifying node reachable: director.......................... SUCCESS
📑 Reviewing the configuration file
(undercloud) [stack@director ~]$ grep '^dhcp' undercloud.conf
dhcp_start = 172.25.249.51
dhcp_end = 172.25.249.59
(undercloud) [stack@director ~]$ grep '^undercloud_.*vip' undercloud.conf
undercloud_public_vip = 172.25.249.201
undercloud_admin_vip = 172.25.249.202
(undercloud) [stack@director ~]$ grep '^undercloud_.*password' undercloud.conf
undercloud_admin_password = redhat
📑 Viewing the endpoint list
(undercloud) [stack@director ~]$ openstack endpoint list -c 'Service Type' -c Interface -c URL
+-------------------------+-----------+----------------------------------------------------+
| Service Type | Interface | URL |
+-------------------------+-----------+----------------------------------------------------+
| cloudformation | internal | http://172.25.249.202:8000/v1/%(tenant_id)s |
| orchestration | admin | http://172.25.249.202:8004/v1/%(tenant_id)s |
| image | internal | http://172.25.249.202:9292 |
| baremetal | public | https://172.25.249.201:13385 |
| cloudformation | public | https://172.25.249.201:13800/v1/%(tenant_id)s |
| identity | public | https://172.25.249.201:13000 |
| messaging | internal | http://172.25.249.202:8888 |
| baremetal | internal | http://172.25.249.202:6385 |
| workflowv2 | internal | http://172.25.249.202:8989/v2 |
| orchestration | internal | http://172.25.249.202:8004/v1/%(tenant_id)s |
| identity | admin | http://172.25.249.202:35357 |
| cloudformation | admin | http://172.25.249.202:8000/v1/%(tenant_id)s |
| network | public | https://172.25.249.201:13696 |
| messaging-websocket | public | wss://172.25.249.201:9000 |
| messaging | public | https://172.25.249.201:13888 |
| placement | public | https://172.25.249.201:13778/placement |
| image | admin | http://172.25.249.202:9292 |
| workflowv2 | admin | http://172.25.249.202:8989/v2 |
| workflowv2 | public | https://172.25.249.201:13989/v2 |
| messaging-websocket | admin | ws://172.25.249.202:9000 |
| compute | internal | http://172.25.249.202:8774/v2.1 |
| baremetal-introspection | public | https://172.25.249.201:13050 |
| baremetal-introspection | admin | http://172.25.249.202:5050 |
| messaging | admin | http://172.25.249.202:8888 |
| compute | public | https://172.25.249.201:13774/v2.1 |
| baremetal | admin | http://172.25.249.202:6385 |
| placement | internal | http://172.25.249.202:8778/placement |
| baremetal-introspection | internal | http://172.25.249.202:5050 |
| placement | admin | http://172.25.249.202:8778/placement |
| object-store | internal | http://172.25.249.202:8080/v1/AUTH_%(tenant_id)s |
| compute | admin | http://172.25.249.202:8774/v2.1 |
| object-store | public | https://172.25.249.201:13808/v1/AUTH_%(tenant_id)s |
| network | internal | http://172.25.249.202:9696 |
| orchestration | public | https://172.25.249.201:13004/v1/%(tenant_id)s |
| messaging-websocket | internal | ws://172.25.249.202:9000 |
| identity | internal | http://172.25.249.202:5000 |
| network | admin | http://172.25.249.202:9696 |
| image | public | https://172.25.249.201:13292 |
| object-store | admin | http://172.25.249.202:8080 |
+-------------------------+-----------+----------------------------------------------------+
📑 Viewing the stack user's environment variables
(undercloud) [stack@director ~]$ env | grep OS_
OS_BAREMETAL_API_VERSION=1.34
OS_USER_DOMAIN_NAME=Default
OS_PROJECT_NAME=admin
OS_IDENTITY_API_VERSION=3
OS_PASSWORD=redhat
OS_AUTH_TYPE=password
PS1=${OS_CLOUDNAME:+($OS_CLOUDNAME)} [\u@\h \W]\$
OS_AUTH_URL=https://172.25.249.201:13000/
OS_USERNAME=admin
OS_NO_CACHE=True
OS_CLOUDNAME=undercloud
OS_PROJECT_DOMAIN_NAME=Default
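A quick follow-up check on these variables is that OS_AUTH_URL points at the SSL public endpoint rather than a plaintext one. The sketch sets the variable from the stackrc output above; after `source stackrc` on the undercloud it would already be in the environment.

```shell
# Sketch: verify that the loaded credentials target the SSL public endpoint.
# OS_AUTH_URL is set here from the stackrc output shown above.
OS_AUTH_URL=https://172.25.249.201:13000/
case "$OS_AUTH_URL" in
    https://*) auth_tls=yes ;;
    *)         auth_tls=no  ;;
esac
echo "OS_AUTH_URL uses TLS: $auth_tls"
```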
📑 Viewing the undercloud's network interfaces
(undercloud) [stack@director ~]$ ip addr | grep -E 'br-ctlplane|eth1'
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 172.25.250.200/24 brd 172.25.250.255 scope global noprefixroute eth1
6: br-ctlplane: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 172.25.249.200/24 brd 172.25.249.255 scope global br-ctlplane
    inet