OpenStack Yoga Three-Node Cluster: Complete Installation Guide

Overview

This guide walks through deploying an OpenStack Yoga cloud across three nodes: one controller, one compute node, and one storage node. The architecture suits small to mid-sized private cloud deployments.

Architecture

Node Layout

Node type      Hostname    IP address    Main services
Controller     controller  192.168.1.10  Keystone, Glance, Placement, Nova API, Neutron server, Horizon, MySQL, RabbitMQ
Compute node   compute1    192.168.1.11  nova-compute, Neutron agent, libvirt, KVM
Storage node   storage1    192.168.1.12  Cinder, Swift, NFS/Ceph backend

Network Layout

  • Management network: 192.168.1.0/24 (internal communication)
  • External network: 10.0.0.0/24 (floating IPs and external access)
  • Instance network: 172.16.1.0/24 (tenant networks for virtual machines)
  • Storage network: 172.17.1.0/24 (Cinder/Swift backend traffic)
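
As a reference, the management and external interfaces could be laid out with netplan as below. This is only a sketch; the interface names enp0s3/enp0s8 are placeholders for your actual NICs:

# Hypothetical /etc/netplan/01-openstack.yaml on the controller
sudo tee /etc/netplan/01-openstack.yaml << EOF
network:
  version: 2
  ethernets:
    enp0s3:                      # management network (192.168.1.0/24)
      addresses: [192.168.1.10/24]
    enp0s8:                      # external network, attached to br-ex later
      dhcp4: false
EOF
sudo netplan apply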

Environment Preparation

System Requirements

  • Ubuntu 20.04 LTS or CentOS 8 Stream
  • Controller: 8-core CPU, 32 GB RAM, 200 GB disk
  • Compute node: 8-core CPU, 32 GB RAM, 500 GB disk
  • Storage node: 4-core CPU, 16 GB RAM, 1 TB+ disk

Base Configuration

Run the following base setup on every node:

# 1. Update the system
sudo apt update && sudo apt upgrade -y

# 2. Set the hostname
sudo hostnamectl set-hostname controller  # use compute1 or storage1 on the other nodes

# 3. Configure /etc/hosts
sudo tee -a /etc/hosts << EOF
192.168.1.10 controller
192.168.1.11 compute1
192.168.1.12 storage1
EOF
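
# (Optional sanity check, not in the original guide: confirm that the names
#  resolve and every node is reachable before continuing.)
for h in controller compute1 storage1; do ping -c 1 $h; done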

# 4. Install required packages
sudo apt install chrony vim wget curl net-tools -y

# 5. Configure time synchronization. On Ubuntu the file is
# /etc/chrony/chrony.conf and the service is chrony.service. On the controller
# keep a reachable upstream NTP server (192.168.1.1 here is assumed to be one);
# on compute1 and storage1 use "server controller iburst" instead, so they
# follow the controller.
sudo sed -i 's/^pool/#&/' /etc/chrony/chrony.conf
echo "server 192.168.1.1 iburst" | sudo tee -a /etc/chrony/chrony.conf
sudo systemctl enable chrony.service
sudo systemctl restart chrony.service
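
# Verify that chrony sees its time sources:
chronyc sources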

Controller Node Installation

1. Install and Configure the Database

# Install MariaDB
sudo apt install mariadb-server python3-pymysql -y

# Configure MariaDB
sudo tee /etc/mysql/mariadb.conf.d/99-openstack.cnf << EOF
[mysqld]
bind-address = 192.168.1.10
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF

sudo systemctl enable mysql.service
sudo systemctl start mysql.service

# Secure the installation
sudo mysql_secure_installation

2. Install and Configure the Message Queue

# Install RabbitMQ
sudo apt install rabbitmq-server -y

sudo systemctl enable rabbitmq-server.service
sudo systemctl start rabbitmq-server.service

# Add the OpenStack user (the broker must be running first)
sudo rabbitmqctl add_user openstack RABBIT_PASS
sudo rabbitmqctl set_permissions openstack ".*" ".*" ".*"
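
# Confirm the user and its permissions:
sudo rabbitmqctl list_users
sudo rabbitmqctl list_permissions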

3. Install and Configure Memcached

# Install Memcached
sudo apt install memcached python3-memcache -y

# Listen on the management address instead of localhost
sudo sed -i 's/-l 127.0.0.1/-l 192.168.1.10/g' /etc/memcached.conf

sudo systemctl enable memcached.service
sudo systemctl start memcached.service

4. Install the Keystone Identity Service

# Create the database
mysql -u root -p << EOF
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
FLUSH PRIVILEGES;
EXIT;
EOF

# Install Keystone
sudo apt install apache2 libapache2-mod-wsgi-py3 python3-openstackclient python3-keystone -y

# Configure Keystone. Note that the tee below replaces the packaged file with a
# minimal config; the backup taken first preserves the original defaults.
sudo cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
sudo tee /etc/keystone/keystone.conf << EOF
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
EOF

# Sync the database
sudo su -s /bin/sh -c "keystone-manage db_sync" keystone

# Initialize the Fernet keys
sudo keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
sudo keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

# Bootstrap the identity service
sudo keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

# Configure Apache
echo "ServerName controller" | sudo tee -a /etc/apache2/apache2.conf

# Restart Apache
sudo systemctl restart apache2.service

# Set the admin environment variables
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
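
With the admin credentials loaded, create the service project that Glance, Placement, Nova, Neutron, and Cinder will run under; the role assignments in the following sections depend on it:

openstack project create --domain default --description "Service Project" service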

5. Install the Glance Image Service

# Create the database and user
mysql -u root -p << EOF
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
FLUSH PRIVILEGES;
EXIT;
EOF

# Create the Glance user, grant it the admin role on the service project,
# and register the service
openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

# Install Glance
sudo apt install glance -y

# Configure Glance
sudo tee /etc/glance/glance-api.conf << EOF
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
EOF

# Sync the database
sudo su -s /bin/sh -c "glance-manage db_sync" glance

# Enable and start the service
sudo systemctl enable glance-api.service
sudo systemctl start glance-api.service
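
To confirm Glance works end to end, upload a small test image. A sketch using CirrOS; the URL/version below is the one commonly used in the upstream guides and may need updating:

# Download a CirrOS test image and register it
wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img
openstack image create "cirros" \
  --file cirros-0.5.2-x86_64-disk.img \
  --disk-format qcow2 --container-format bare --public
openstack image list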

6. Install the Placement Service

# Create the database and user
mysql -u root -p << EOF
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';
FLUSH PRIVILEGES;
EXIT;
EOF

# Create the Placement user, grant it the admin role, and register the service
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

# Install Placement
sudo apt install placement-api -y

# Configure Placement
sudo tee /etc/placement/placement.conf << EOF
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
service_token_roles_required = true
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
EOF

# Sync the database
sudo su -s /bin/sh -c "placement-manage db sync" placement

# Restart Apache
sudo systemctl restart apache2
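
Placement ships a self-check that verifies its database and API state:

sudo placement-status upgrade check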

Compute Node Installation

1. Install the Nova Compute Service

# Install Nova on the compute node
sudo apt install nova-compute -y

# Verify hardware virtualization support; the count should be greater than 0
# (vmx = Intel VT-x, svm = AMD-V)
egrep -c '(vmx|svm)' /proc/cpuinfo

# Configure Nova. Note: the compute node carries no [api_database]/[database]
# sections; compute hosts reach the database only through the conductor
# service on the controller.
sudo tee /etc/nova/nova.conf << EOF
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 192.168.1.11

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = 0.0.0.0
# \$my_ip must be escaped in the heredoc so nova, not the shell, expands it
server_proxyclient_address = \$my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
EOF

# If the egrep check above returned 0, the node lacks hardware acceleration
# and libvirt must fall back to QEMU (the packaged default is kvm):
sudo tee -a /etc/nova/nova-compute.conf << EOF
[libvirt]
virt_type = qemu
EOF

# Sync the databases. These commands run on the CONTROLLER node, where the
# nova databases and the nova-api/nova-conductor/nova-scheduler services live
# (this guide assumes those were installed there following the same pattern
# as Glance and Placement):
sudo su -s /bin/sh -c "nova-manage api_db sync" nova
sudo su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
sudo su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
sudo su -s /bin/sh -c "nova-manage db sync" nova

# Enable and start the service (back on the compute node)
sudo systemctl enable nova-compute.service
sudo systemctl start nova-compute.service
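
After nova-compute is running, register the new host in the cell database; per the upstream guide this runs on the controller and can be repeated whenever compute nodes are added:

sudo su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova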

2. Install the Neutron Networking Service

# Install the Neutron Open vSwitch agent on the compute node
sudo apt install neutron-openvswitch-agent -y

# Configure Neutron
sudo tee /etc/neutron/neutron.conf << EOF
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
EOF

# Configure the Open vSwitch agent. The [agent] section is required so the
# compute node builds VXLAN tunnels for the tenant networks defined on the
# controller.
sudo tee /etc/neutron/plugins/ml2/openvswitch_agent.ini << EOF
[ovs]
local_ip = 192.168.1.11
bridge_mappings = provider:br-ex

[agent]
tunnel_types = vxlan
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = openvswitch
EOF

# Enable and start the agent
sudo systemctl enable neutron-openvswitch-agent.service
sudo systemctl start neutron-openvswitch-agent.service

Storage Node Installation

1. Install the Cinder Block Storage Service

# Create the physical volume and volume group on the storage node,
# assuming a spare disk /dev/sdb is dedicated to block storage
sudo apt install lvm2 thin-provisioning-tools -y
sudo pvcreate /dev/sdb
sudo vgcreate cinder-volumes /dev/sdb

# Install cinder-volume together with the iSCSI target daemon it relies on
sudo apt install cinder-volume tgt -y
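
# (Recommended by the upstream guide: restrict LVM device scanning so only the
#  disks that matter are probed. /dev/sda as the OS disk is an assumption;
#  adjust the filter to your disk layout.)
sudo sed -i '/^devices {/a\        filter = [ "a/sda/", "a/sdb/", "r/.*/" ]' /etc/lvm/lvm.conf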

# Configure Cinder
sudo tee /etc/cinder/cinder.conf << EOF
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.1.12
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = tgtadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
EOF

# Enable and start the services. (This assumes the cinder database, user,
# endpoints, and the cinder-api/cinder-scheduler services have been set up on
# the controller, following the same pattern as Glance.)
sudo systemctl enable tgt.service
sudo systemctl start tgt.service
sudo systemctl enable cinder-volume.service
sudo systemctl start cinder-volume.service

Network Configuration

1. Controller Node Networking

# Install the Neutron components on the controller
sudo apt install neutron-server neutron-plugin-ml2 neutron-openvswitch-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent -y

# Configure the Neutron server. (This assumes the neutron database, user, and
# endpoints have been created, following the same pattern as Glance.)
sudo tee /etc/neutron/neutron.conf << EOF
[DEFAULT]
core_plugin = ml2
service_plugins = router
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
EOF

# Configure the ML2 plug-in
sudo tee /etc/neutron/plugins/ml2/ml2_conf.ini << EOF
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vlan]
network_vlan_ranges = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
EOF

# Configure the Open vSwitch agent
sudo tee /etc/neutron/plugins/ml2/openvswitch_agent.ini << EOF
[ovs]
local_ip = 192.168.1.10
bridge_mappings = provider:br-ex

[agent]
tunnel_types = vxlan
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = openvswitch
EOF

# Configure the L3 agent (the old external_network_bridge option has been
# removed from Neutron and is no longer set)
sudo tee /etc/neutron/l3_agent.ini << EOF
[DEFAULT]
interface_driver = openvswitch
EOF

# Configure the DHCP agent
sudo tee /etc/neutron/dhcp_agent.ini << EOF
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
EOF

# Configure the metadata agent
sudo tee /etc/neutron/metadata_agent.ini << EOF
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
EOF

# Create the external bridge; 10.0.0.1/24 is the controller's address on the
# external network. Adjust it and the interface name to your environment.
sudo ovs-vsctl add-br br-ex
sudo ovs-vsctl add-port br-ex enp0s8  # replace with your external interface
sudo ip addr flush dev enp0s8         # replace with your external interface
sudo ip addr add 10.0.0.1/24 dev br-ex
sudo ip link set br-ex up
sudo ip route add default via 10.0.0.254  # your external gateway, not the bridge's own address
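
# (The ip commands above do not survive a reboot. One way to persist the
#  bridge on Ubuntu 20.04 is a netplan openvswitch bridge; a sketch, assuming
#  enp0s8 is the external NIC and 10.0.0.254 the gateway:)
sudo tee /etc/netplan/02-br-ex.yaml << EOF
network:
  version: 2
  ethernets:
    enp0s8: {}
  bridges:
    br-ex:
      interfaces: [enp0s8]
      addresses: [10.0.0.1/24]
      routes:
        - to: 0.0.0.0/0
          via: 10.0.0.254
      openvswitch: {}
EOF
sudo netplan apply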

# Populate the Neutron database (run once, after neutron.conf and the ML2
# config are in place)
sudo su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

# Enable and start the services, including the L3 agent configured above
sudo systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
sudo systemctl start neutron-server.service neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

# Allow forwarding for the tenant network
sudo iptables -I FORWARD -d 172.16.1.0/24 -j ACCEPT
sudo iptables -I FORWARD -s 172.16.1.0/24 -j ACCEPT

Horizon Dashboard Installation

# Install Horizon
sudo apt install openstack-dashboard -y

# Configure Horizon. Do not replace the packaged settings file wholesale; it
# carries required defaults (SECRET_KEY handling among them). Instead, edit
# /etc/openstack-dashboard/local_settings.py and ensure the following
# settings are present:
import os
from django.utils.translation import gettext_lazy as _

DEBUG = False
ALLOWED_HOSTS = ['*', 'controller', '192.168.1.10']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'
LOGIN_URL = WEBROOT + '/auth/login/'
LOGOUT_URL = WEBROOT + '/auth/logout/'

COMPRESS_OFFLINE = True
STATIC_ROOT = '/var/www/dashboard/static'
MEDIA_ROOT = os.path.join(STATIC_ROOT, 'media')

# Restart Apache
sudo systemctl restart apache2
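
With WEBROOT set to /dashboard, the UI should answer at http://192.168.1.10/dashboard. A quick check from the controller:

curl -sI http://controller/dashboard/auth/login/ | head -n 1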

Installation Verification

1. Load the Environment Variables

# Create the admin-openrc file
cat > admin-openrc << EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

# Load the environment variables
source admin-openrc

2. Verify the Services

# Verify Keystone
openstack token issue

# Verify Glance
openstack image list

# Verify Nova
openstack compute service list

# Verify Neutron
openstack network agent list

# Verify Cinder
openstack volume service list

Common Commands

1. Service Management

# Check service status (this deployment uses the services' native systemd
# units, not DevStack's devstack@* units)
sudo systemctl status neutron-server.service   # e.g. on the controller
sudo systemctl status nova-compute.service     # e.g. on the compute node

# Restart an OpenStack service
sudo systemctl restart neutron-server.service

# Follow a service's log
sudo journalctl -u neutron-server.service -f

2. Instance Management

# Launch an instance (prerequisites are sketched after this block)
openstack server create --image cirros --flavor m1.tiny --network private my-instance

# List instances
openstack server list

# Delete an instance
openstack server delete my-instance
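
The launch command above assumes an image named cirros, a flavor m1.tiny, and a network private already exist. A minimal sketch creating the flavor and network (names and ranges are illustrative):

openstack flavor create --ram 512 --disk 1 --vcpus 1 m1.tiny
openstack network create private
openstack subnet create --network private --subnet-range 172.16.1.0/24 private-subnet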

Troubleshooting

1. Common Issues

  • Time drift: make sure all nodes are synchronized via NTP (chrony)
  • Network connectivity: check firewall rules and network configuration
  • Database connection failures: check the database settings and passwords
  • Authentication failures: check the Keystone configuration and environment variables (see the diagnostic sketch after this list)
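
A quick first-pass diagnostic on the controller, assuming the passwords used throughout this guide:

# Is the core infrastructure up?
sudo systemctl --no-pager status mariadb rabbitmq-server memcached apache2

# Can Keystone reach its database?
mysql -u keystone -pKEYSTONE_DBPASS -h controller keystone -e 'SELECT 1;'

# Do the admin credentials work?
source admin-openrc && openstack token issue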

2. Log Locations

  • Keystone: /var/log/keystone/ (plus the Apache logs under /var/log/apache2/)
  • Glance: /var/log/glance/
  • Nova: /var/log/nova/
  • Neutron: /var/log/neutron/
  • Cinder: /var/log/cinder/

Security Hardening

1. TLS Configuration

Enable TLS for OpenStack service traffic:

# Enable HTTPS in Apache (the default-ssl site ships with a self-signed
# "snakeoil" certificate; replace it with a real certificate in production)
sudo a2enmod ssl
sudo a2ensite default-ssl
sudo systemctl restart apache2

2. Firewall Configuration

# Configure the UFW firewall
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow the OpenStack service ports (in production, restrict the database and
# message-queue ports to the management network)
sudo ufw allow 22/tcp      # SSH
sudo ufw allow 80/tcp      # HTTP
sudo ufw allow 443/tcp     # HTTPS
sudo ufw allow 5000/tcp    # Keystone
sudo ufw allow 9292/tcp    # Glance
sudo ufw allow 6080:6082/tcp # VNC
sudo ufw allow 5672/tcp    # RabbitMQ
sudo ufw allow 3306/tcp    # MySQL
sudo ufw allow 9696/tcp    # Neutron
sudo ufw allow 8774/tcp    # Nova
sudo ufw allow 8776/tcp    # Cinder
sudo ufw allow 8778/tcp    # Placement

sudo ufw enable

Summary

This guide covered a three-node deployment of OpenStack Yoga. By following it you should be able to stand up a working OpenStack cloud. For production use, adjust the resource sizing, network layout, and security policies to your actual requirements.

Suggested next steps:

  • Configure a backup strategy
  • Deploy a monitoring solution
  • Set up high availability
  • Plan for capacity growth

Note: this guide targets Ubuntu 20.04 and OpenStack Yoga; other environments may require corresponding adjustments.
