[Cloud Computing] OpenStack Cloud Platform
Building the OpenStack cloud platform framework
1. Switch the package source first
Switch to the Aliyun mirror:
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
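After switching the repo file, it may help to rebuild the yum metadata cache so the new mirror actually takes effect:
yum clean all
yum makecache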
2. Install the OpenStack Train release repository
yum -y install centos-release-openstack-train
3. Install the OpenStack client
yum -y install python-openstackclient
Installing the client, however, will very likely fail with errors like these:
Cannot find a valid baseurl for repo: centos-ceph-nautilus/7/x86_64
Cannot find a valid baseurl for repo: centos-nfs-ganesha28/7/x86_64
Cannot find a valid baseurl for repo: centos-openstack-train/7/x86_64
Cannot find a valid baseurl for repo: centos-qemu-ev/7/x86_64
Try editing these four repo files:
vim /etc/yum.repos.d/CentOS-Ceph-Nautilus.repo
vim /etc/yum.repos.d/CentOS-NFS-Ganesha-28.repo
vim /etc/yum.repos.d/CentOS-OpenStack-train.repo
vim /etc/yum.repos.d/CentOS-QEMU-EV.repo
The change is the same in all four files:
1. Comment out the mirrorlist line, for example:
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=virt-kvm-common
2. Point the baseurl line at the Aliyun mirror, for example:
baseurl=http://mirrors.aliyun.com/$contentdir/$releasever/virt/$basearch/kvm-common/
In fact only the host needs to change to mirrors.aliyun.com; the rest of the path stays the same.
With the repo errors resolved, the install goes through normally:
yum -y install python-openstackclient
Once it finishes, you can check the OpenStack client version:
openstack --version
4. Install openstack-selinux, which manages the SELinux security policies for OpenStack
First set the local SELinux policy to disabled:
vim /etc/sysconfig/selinux
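For reference, the relevant line in /etc/sysconfig/selinux should end up as shown below; running setenforce 0 in addition turns enforcement off for the current session without a reboot:
SELINUX=disabled
setenforce 0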

Then install the package:
yum -y install openstack-selinux
5. Install the MariaDB database
First install the MariaDB server:
yum -y install mariadb-server
Then install the module that lets OpenStack connect to the database:
yum -y install python2-PyMySQL
5.1 Edit the configuration file
vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.10.130
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Enable the service at boot and start it:
systemctl enable mariadb
systemctl start mariadb
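A quick way to confirm MariaDB is running and listening on the address configured above (a sketch, assuming the ss tool from iproute is available):
ss -tnlp | grep 3306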
5.2 Run the initial hardening script:
mysql_secure_installation
Set root password? [Y/n] y                      (set a new root password)
Remove anonymous users? [Y/n] y                  (remove the anonymous users)
Disallow root login remotely? [Y/n] n            (keep remote root login enabled)
Remove test database and access to it? [Y/n] y   (remove the test database)
Reload privilege tables now? [Y/n] y             (reload the privilege tables)
5.3 Log in to the database
mysql -h<database-host> -u<username> -p<password>
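For example, to log in locally as root (the bare -p makes the client prompt for the password):
mysql -hlocalhost -uroot -p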
6. Install RabbitMQ
First install the server:
yum -y install rabbitmq-server
Enable it at boot and start the service:
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server
6.1 If starting rabbitmq-server fails
Job for rabbitmq-server.service failed because the control process exited with error code. See "systemctl status rabbitmq-server.service" and "journalctl -xe" for details.
The fix is simple:
vim /etc/rabbitmq/rabbitmq-env.conf
Add this line:
NODENAME=rabbit@localhost
Reference: [Job for rabbitmq-server.service failed because the control process exited with error code - 低调的小白 - 博客园](https://www.cnblogs.com/wang-yaz/p/14188233.html)
6.2 User management
Create a user:
rabbitmqctl add_user <username> <password>
Delete a user:
rabbitmqctl delete_user <username>
Change a password:
rabbitmqctl change_password <username> <new-password>
Grant permissions:
rabbitmqctl set_permissions <username> ".*" ".*" ".*"
7. Install the Memcached caching service
First install the memcached service itself:
yum -y install memcached
Then install the Python binding:
yum -y install python-memcached
Once installed, its configuration file can be edited here:
vim /etc/sysconfig/memcached
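The usual change in this file, assuming the stock CentOS defaults, is to make memcached also listen on the controller's hostname so the other OpenStack services can reach it:
OPTIONS="-l 127.0.0.1,::1,controller"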
Then enable and start the service:
systemctl enable memcached
systemctl start memcached
8. etcd, a distributed key-value store
yum -y install etcd
systemctl enable etcd
systemctl start etcd
9. Keystone installation and configuration
yum -y install openstack-keystone
yum -y install httpd
yum -y install mod_wsgi
9.1 Configure MariaDB
mysql -uroot -p<password>
First create a database:
create database keystone;
Then grant the keystone user privileges so it can operate on the new keystone database both locally and remotely:
grant all privileges on keystone.* to 'keystone'@'localhost' identified by '<password>';
grant all privileges on keystone.* to 'keystone'@'%' identified by '<password>';
9.2 Edit the configuration file
vim /etc/keystone/keystone.conf
Roughly two places need changes.
The first is around line 600:
connection = mysql+pymysql://keystone:<password>@<hostname>/<database>
The second is around line 2475, where a commented-out option is uncommented.
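In the Train keystone.conf the commented option in that area is typically the token provider, so after uncommenting it the [token] section should read roughly:
[token]
provider = fernet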

9.3 Initialize the Keystone database
su keystone -s /bin/sh -c "keystone-manage db_sync"
9.4 Initialize Keystone
9.4.1 Initialize the Fernet key repositories
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
9.4.2 Bootstrap the admin identity
keystone-manage bootstrap --bootstrap-password <password> --bootstrap-admin-url http://controller:5000/v3 --bootstrap-internal-url http://controller:5000/v3 --bootstrap-public-url http://controller:5000/v3 --bootstrap-region-id RegionOne
9.4.3 Configure the web service
Link Keystone's WSGI configuration into Apache:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Then edit the Apache configuration:
vi /etc/httpd/conf/httpd.conf
Around line 96, add the following line; controller here is the hostname of this machine:
ServerName controller
Start the httpd service:
systemctl enable httpd
systemctl start httpd
If it fails to start with a status like this:
Invalid command 'WSGIDaemonProcess', perhaps misspelled or defined by a modu
The cause is simply that mod_wsgi is not installed, so the fix is just as simple: install it.
yum -y install mod_wsgi
9.5 Simulate a login to verify
Create a file to hold the admin credentials:
vim ~/admin_login
Put the following in the file:
export OS_USERNAME=admin
export OS_PASSWORD=<password>
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
source admin_login
export -p
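With the variables loaded, a quick way to check that Keystone actually issues tokens:
openstack token issue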
9.6 Test the Keystone service
Create a project named "project" (the same name is used for project_name in the later configuration files):
openstack project create --domain default project
View the existing projects:
openstack project list
Create a role named "user":
openstack role create user
View the role list:
openstack role list
View the domain list:
openstack domain list
View the platform's existing users:
openstack user list
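The commands above only read existing data. To exercise the write path as well, you could, for example, create a throwaway user (the name and password here are placeholders) and give it the "user" role on the project created earlier:
openstack user create --domain default --password <password> demo1
openstack role add --project project --user demo1 user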
10. Glance installation and configuration
10.1 Install Glance
yum -y install openstack-glance
Create a database for Glance:
mysql -uroot -p<password>
create database glance;
grant all privileges on glance.* to 'glance'@'localhost' identified by 'luo';
grant all privileges on glance.* to 'glance'@'%' identified by 'luo';
Then edit Glance's configuration file so it can connect to the database:
vim /etc/glance/glance-api.conf
Around line 2089, set the database connection:
connection = mysql+pymysql://glance:luo@controller/glance
Around line 4888, configure the connection to Keystone:
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
username = glance
password = luo
project_name = project
user_domain_name = Default
project_domain_name = Default
Around line 5529, remove the comment:
flavor = keystone
Around line 3408:
stores = file
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
(A small aside: the images directory did not exist on my machine, and I was not sure whether it needs to be created, owned, and permissioned manually; see the sketch below.)
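If the directory really is missing after the install, a reasonable fix, assuming the service runs as the glance user and group, is to create it and hand it over to that account:
mkdir -p /var/lib/glance/images/
chown glance:glance /var/lib/glance/images/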
Synchronize the database:
su glance -s /bin/sh -c "glance-manage db_sync"
To check whether the sync succeeded, look in the glance database for the newly created tables.
10.2 Initialize Glance
Create a glance user:
openstack user create --domain default --password luo glance
Assign the "admin" role to the glance user:
openstack role add --project project --user glance admin
Create a service named "glance" of type "image":
openstack service create --name glance image
Next, create the image service endpoints.
Create the public endpoint:
openstack endpoint create --region RegionOne glance public http://controller:9292
Create the internal endpoint:
openstack endpoint create --region RegionOne glance internal http://controller:9292
Create the admin endpoint:
openstack endpoint create --region RegionOne glance admin http://controller:9292
Enable the service at boot and start it:
systemctl enable openstack-glance-api
systemctl start openstack-glance-api
10.3 Create an image
First obtain the CirrOS image, a small test image of a dozen or so MB.
It can be downloaded from the Internet and is not provided here; one option is sketched below.
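One commonly used source, assuming the upstream CirrOS mirror still serves this path, is:
curl -L -o cirros-0.5.1-x86_64-disk.img http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img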
Once downloaded, the image can be registered with Glance:
openstack image create --file cirros-0.5.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros
Afterwards, check the image list:
openstack image list
And the image file on disk:
ll /var/lib/glance/images/
11. Placement installation and configuration
11.1 Installation and configuration
First install the package:
yum -y install openstack-placement-api
Create the database and grant privileges:
mysql -uroot -pluo
create database placement;
grant all privileges on placement.* to 'placement'@'localhost' identified by 'luo';
grant all privileges on placement.* to 'placement'@'%' identified by 'luo';
Edit the placement configuration file:
vim /etc/placement/placement.conf
The changes are as follows:
[DEFAULT]
[api]
auth_strategy = keystone
[cors]
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = luo
[oslo_policy]
[placement]
[placement_database]
connection = mysql+pymysql://placement:luo@controller/placement
[profiler]
Now edit the Apache configuration for the Placement API:
vim /etc/httpd/conf.d/00-placement-api.conf
Add a small block so that Apache grants access to the Placement WSGI script:
<Directory /usr/bin>
  Require all granted
</Directory>
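The change only takes effect once Apache is restarted, so restart httpd after saving the file:
systemctl restart httpd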
Initialize the placement database.
First synchronize it:
su placement -s /bin/sh -c "placement-manage db sync"
Check the newly created placement database; if a number of new tables have appeared, the sync succeeded.
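For example, the tables can be listed with the account granted above (the password follows the grant statements):
mysql -uplacement -pluo placement -e "show tables;"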
11.2 Initialize the placement component
Create the placement user:
openstack user create --domain default --password luo placement
Assign it the admin role:
openstack role add --project project --user placement admin
Create a service named "placement", whose type is also "placement":
openstack service create --name placement placement
Create the service endpoints:
1. The public endpoint:
openstack endpoint create --region RegionOne placement public http://controller:8778
2. The internal endpoint:
openstack endpoint create --region RegionOne placement internal http://controller:8778
3. The admin endpoint:
openstack endpoint create --region RegionOne placement admin http://controller:8778
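As a quick sanity check, assuming the endpoints above are in place, the Placement API should answer with a small JSON document listing its supported versions:
curl http://controller:8778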
12. Nova installation and configuration
12.1 Installation and configuration
yum -y install openstack-nova-api
yum -y install openstack-nova-conductor
yum -y install openstack-nova-scheduler
yum -y install openstack-nova-novncproxy
Create the databases and grant privileges; three databases are needed: "nova_api", "nova_cell0", and "nova".
mysql -uroot -pluo
create database nova;
create database nova_cell0;
create database nova_api;
grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'luo';
grant all privileges on nova_api.* to 'nova'@'%' identified by 'luo';
grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by 'luo';
grant all privileges on nova_cell0.* to 'nova'@'%' identified by 'luo';
grant all privileges on nova.* to 'nova'@'localhost' identified by 'luo';
grant all privileges on nova.* to 'nova'@'%' identified by 'luo';
Back up the configuration file first, then strip the blank and comment lines from the working copy:
cp /etc/nova/nova.conf /etc/nova/nova.bak
grep -Ev '^$|#' /etc/nova/nova.bak > /etc/nova/nova.conf
Edit the nova configuration file; roughly nine places need changes:
vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:luo@controller:5672
my_ip = 192.168.10.130
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:luo@controller/nova_api
[barbican]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:luo@controller/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = luo
[libvirt]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = luo
region_name = RegionOne
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
First initialize the nova_api database:
su nova -s /bin/sh -c "nova-manage api_db sync"
Create the "cell1" cell, which uses the nova database:
su nova -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1"
Map nova to the cell0 database so that cell0's table structure stays consistent with nova's:
su nova -s /bin/sh -c "nova-manage cell_v2 map_cell0"
Initialize the nova database; because of the mapping, the same tables are created in "cell0" at the same time:
su nova -s /bin/sh -c "nova-manage db sync"
12.2 Initialize Nova
Create the nova user:
openstack user create --domain default --password luo nova
Assign it the admin role:
openstack role add --project project --user nova admin
Create the service and its endpoints:
openstack service create --name nova compute
openstack endpoint create --region RegionOne nova public http://controller:8774/v2.1
openstack endpoint create --region RegionOne nova internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
Then enable and start the services:
systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
12.3 Install the Nova service on the compute node
yum -y install openstack-nova-compute
If the install fails partway through, it is most likely a package conflict. Removing the conflicting package worked for me; take this only as a reference and analyze your own case:
yum -y remove libvirt-client*
Edit the nova configuration file:
cp /etc/nova/nova.conf /etc/nova/nova.bak
grep -Ev "^$|#" /etc/nova/nova.bak > /etc/nova/nova.conf
vim /etc/nova/nova.conf
Roughly eight places need changes:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:luo@controller:5672
my_ip = 192.168.10.135
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = luo
[libvirt]
virt_type = qemu
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = luo
region_name = RegionOne
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.10.130:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
Start the nova services:
systemctl enable libvirtd openstack-nova-compute
systemctl start libvirtd openstack-nova-compute
If starting nova-compute fails with:
Job for openstack-nova-compute.service failed because the control process exited with error code. See "systemctl status openstack-nova-compute.service" and "journalctl -xe" for details.
In my case the cause was that the rabbitmq account had not been created on the controller node.
Check whether it exists with:
rabbitmqctl list_users
Alternatively, you could edit the transport_url line in the compute node's nova.conf to point at an existing RabbitMQ account.

But I will play it safe and create the user on the controller node:
rabbitmqctl add_user rabbitmq luo
rabbitmqctl set_permissions rabbitmq ".*" ".*" ".*"
The first command creates the user; the second grants it full permissions.
12.4 Controller node settings
Discover the newly added compute node:
su nova -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose"
You can also have the controller automatically scan for new compute nodes at a set interval (60 seconds here); note that an OpenStack deployment may contain multiple compute nodes:
vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval=60
After saving the configuration, restart the service:
systemctl stop openstack-nova-api.service
systemctl start openstack-nova-api.service
Verify the nova service; all of the following run on the controller node.
1. View the compute service list:
openstack compute service list
2. View all OpenStack services and their endpoints:
openstack catalog list
3. Run the nova status check tool:
nova-status upgrade check
