elasticsearch - How to make two EC2 instances (running Elasticsearch installed from an AMI) form a multi-node cluster using a CloudFormation template?
Problem Description
I need to create two EC2 instances from an AMI and join them as a multi-node cluster using a CloudFormation template. The AMI has Elasticsearch installed on it. I need to make one instance a master node and the other a data node.
My CF template script:
AWSTemplateFormatVersion: '2010-09-09'
#Transform: 'AWS::Serverless-2016-10-31'
Description: AWS CloudFormation Template with EC2InstanceWithSecurityGroup
Parameters:
  KeyName:
    Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
    Type: AWS::EC2::KeyPair::KeyName
    ConstraintDescription: must be the name of an existing EC2 KeyPair.
  RemoteAccessLocation:
    Description: The IP address range that can be used to access the EC2 instances
    Type: String
    MinLength: '9'
    MaxLength: '18'
    Default: 0.0.0.0/0
    AllowedPattern: (\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})
    ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.
Resources:
  ES1EC2Instance:
    Type: AWS::EC2::Instance
    #DependsOn: ES2EC2Instance
    Properties:
      InstanceType: t2.2xlarge
      SecurityGroups:
        - !Ref 'InstanceSecurityGroup'
      KeyName: !Ref 'KeyName'
      ImageId: ami-xxxxxxxxxxxxxxxx
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -ex
          cat > /etc/elasticsearch/elasticsearch.yml <<EOF1
          network.host: "${EC2_PRIVATE_IP}"
          http.port: 9200
          http.max_content_length: 1gb
          node.name: node-1
          node.roles: [ master, data, ingest ]
          transport.port: 9300-9400
          discovery.seed_hosts: ["${ES1EC2Instance.PrivateIp}", "${ES2EC2Instance.PrivateIp}"]
          cluster.initial_master_nodes: ["node-1"]
          gateway.recover_after_nodes: 2
          EOF1
          ## Restart Elasticsearch
          sudo systemctl restart elasticsearch
  ES2EC2Instance:
    Type: AWS::EC2::Instance
    DependsOn: ES1EC2Instance
    Properties:
      InstanceType: t2.2xlarge
      SecurityGroups:
        - !Ref 'InstanceSecurityGroup'
      KeyName: !Ref 'KeyName'
      ImageId: ami-xxxxxxxxxxxxxxxx
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -ex
          cat > /etc/elasticsearch/elasticsearch.yml <<EOF1
          network.host: "${ES2EC2Instance.PrivateIp}"
          http.port: 9200
          http.max_content_length: 1gb
          node.name: node-2
          node.roles: [ data, ingest ]
          transport.port: 9300-9400
          discovery.seed_hosts: ["${ES1EC2Instance.PrivateIp}", "${ES2EC2Instance.PrivateIp}"]
          cluster.initial_master_nodes: ["node-1"]
          gateway.recover_after_nodes: 2
          EOF1
          ## Restart Elasticsearch
          sudo systemctl restart elasticsearch
  InstanceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Enable SSH (22), HTTP (8080) and Elasticsearch (9200)
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: !Ref 'RemoteAccessLocation'
        - IpProtocol: tcp
          FromPort: '8080'
          ToPort: '8080'
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: '9200'
          ToPort: '9200'
          CidrIp: !Ref 'RemoteAccessLocation'
Outputs:
  AZ:
    Description: Availability Zone of the newly created EC2 instance for ES
    Value: !GetAtt 'ES1EC2Instance.AvailabilityZone'
  PublicDNS:
    Description: Public DNSName of the newly created EC2 instance for ES
    Value: !GetAtt 'ES1EC2Instance.PublicDnsName'
  PublicIP:
    Description: Public IP address of the newly created EC2 instance for ES
    Value: !GetAtt 'ES1EC2Instance.PublicIp'
How do I update elasticsearch.yml through the CloudFormation template so that the instances form a multi-node cluster?
Solution
Try adding a macro to your CloudFormation template. Below is an example of a Lambda that can be invoked by such a macro. The function uses SSM Run Command to send bash commands to your instances; in this case the instances are filtered by Auto Scaling group, but you can filter your instances by tag or any other attribute. You also need to attach an IAM role with the AmazonEC2RoleforSSM policy to the instances.
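The role attachment mentioned above could look like the following CloudFormation fragment. This is a sketch; the resource names SSMInstanceRole and SSMInstanceProfile are illustrative, and each instance would reference the profile via its IamInstanceProfile property:

```yaml
  SSMInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
  SSMInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref SSMInstanceRole
```

With this in place, each AWS::EC2::Instance would add `IamInstanceProfile: !Ref SSMInstanceProfile` under its Properties so SSM Run Command can reach it.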
import json
import time

import boto3

ssm = boto3.client('ssm')
ec2 = boto3.client('ec2')
autoscaling = boto3.client('autoscaling')


def lambda_handler(event, context):
    env = event['environment']
    kafka = event['kafka']
    print(env, kafka)
    if check_autoscaling_group(env):
        add_IP(env, kafka)
        return {'body': json.dumps('Function executed, please see the logs!')}
    else:
        return {'body': json.dumps('The function was not executed, please see the logs!')}


def check_autoscaling_group(env):
    # The cluster is considered stable only when every instance in the
    # masters Auto Scaling group is InService.
    response = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=['elasticsearch-master-' + str(env)]
    )
    stable = True
    for instance in response['AutoScalingGroups'][0]['Instances']:
        if instance['LifecycleState'] != 'InService':
            stable = False
            print('At least 1 master instance is not InService; '
                  'please wait until all the master nodes are stable.')
            break
    return stable


def add_IP(env, kafka):
    response = ec2.describe_instances(
        Filters=[
            {'Name': 'tag:Name', 'Values': ['elasticsearch-masters-server-' + str(env)]},
            {'Name': 'instance-state-name', 'Values': ['running']},
            {'Name': 'tag:role', 'Values': ['master']},
        ]
    )
    for instances in response['Reservations']:
        id = instances['Instances'][0]['InstanceId']
        ip = instances['Instances'][0]['PrivateIpAddress']
        # Add the ES server IP to the Kibana config
        command_to_execute = (
            r'sed -i "s/ELASTICSEARCH_HOSTS: http:\/\/.*/ELASTICSEARCH_HOSTS: http:\/\/'
            + str(ip) + ':9200/g" /home/ubuntu/Kibana/docker-compose.yml'
        )
        execute_in_master(command_to_execute, id, env)
        # Add the Kafka server IP to the Logstash config
        kafka = '\\"' + str(kafka) + ':9092\\"'
        command_to_execute = (
            'sed -i "s/bootstrap_servers => .*/bootstrap_servers => [' + str(kafka)
            + ']/g" /home/ubuntu/Logstash/config/logstash/pipeline/my_pipeline.conf'
        )
        execute_in_master(command_to_execute, id, env)
        # Add the ES server IP to the Logstash config
        ip = '\\"' + str(ip) + ':9200\\"'
        command_to_execute = (
            'sed -i "s/hosts => .*/hosts => ' + str(ip)
            + '/g" /home/ubuntu/Logstash/config/logstash/pipeline/my_pipeline.conf'
        )
        execute_in_master(command_to_execute, id, env)
        # Restart the services
        execute_in_master('cd /home/ubuntu/Kibana/ && docker-compose up -d', id, env)
        execute_in_master('cd /home/ubuntu/Logstash/ && docker-compose up -d', id, env)
        # Just select one master instance
        break


def execute_in_master(command_to_execute, id, env):
    print(command_to_execute)
    response = ec2.describe_instances(
        Filters=[
            {'Name': 'tag:Name', 'Values': ['elasticsearch-masters-server-' + str(env)]},
            {'Name': 'instance-state-name', 'Values': ['running']},
            {'Name': 'tag:role', 'Values': ['master']},
        ]
    )
    for instances in response['Reservations']:
        instance_id = instances['Instances'][0]['InstanceId']
        if instance_id == id:
            # Run the bash command on the selected master via SSM
            ssm.send_command(
                InstanceIds=[instance_id],
                DocumentName='AWS-RunShellScript',
                Parameters={'commands': [command_to_execute]},
            )
            time.sleep(1)
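The inline string splicing used to build the sed commands above is easy to get wrong, because the replacement values (IPs, URLs) contain the `/` delimiter. One way to sketch a safer builder — a hypothetical helper, not part of the original function — is to use `|` as the sed delimiter so nothing needs escaping:

```python
def build_sed_command(key: str, value: str, path: str) -> str:
    """Build a `sed -i` command that replaces the rest of any line
    starting with `key` by `value`. Uses `|` as the sed delimiter so
    IPs and URLs containing `/` need no escaping."""
    pattern = f"s|{key} .*|{key} {value}|g"
    return f'sed -i "{pattern}" {path}'

# Example: point Kibana at the master's private IP (illustrative values)
cmd = build_sed_command(
    "ELASTICSEARCH_HOSTS:",
    "http://10.0.0.5:9200",
    "/home/ubuntu/Kibana/docker-compose.yml",
)
```

The resulting command string can be passed to `execute_in_master` exactly like the hand-spliced versions above.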
To invoke the macro from your CloudFormation template, you need to add the following:
  ModifyInstances:
    Fn::Transform:
      Name: MacroSetUpCluster
      Parameters:
        env: !Ref MyEnv
        kafka: !Ref MyKafkaIP
The MacroSetUpCluster stack, containing the Lambda function, needs to be deployed beforehand.
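That macro stack would contain a resource along these lines — a sketch, where the logical ID SetUpClusterLambda stands in for whatever the Lambda function above is named in your stack:

```yaml
Resources:
  MacroSetUpCluster:
    Type: AWS::CloudFormation::Macro
    Properties:
      Name: MacroSetUpCluster
      FunctionName: !GetAtt SetUpClusterLambda.Arn
```

CloudFormation resolves `Fn::Transform` against the macro's `Name`, so the macro stack must be deployed in the same account and region before the template that invokes it.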