If you still want more you can use Helmfile. Take care of your PMs 😁
I understand your point. Anyway, if your devs are using Helm they can still use Sops with the helm-secrets plugin. Just create a separate values file (it can be named secrets.yaml) containing all the sensitive values and encrypt it with Sops.
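A minimal sketch of that workflow (file names are just a convention, and the sops/helm commands are shown commented because they need sops, the helm-secrets plugin, and a configured encryption key):

```shell
# Plain values stay in values.yaml; sensitive ones go in a separate file
# (secrets.yaml here, but the name is arbitrary):
cat > secrets.yaml <<'EOF'
database:
  password: s3cr3t
EOF

# Encrypt it in place with Sops (requires sops and a key, e.g. age or KMS):
# sops --encrypt --in-place secrets.yaml

# helm-secrets then decrypts it transparently at deploy time:
# helm secrets upgrade --install my-app ./chart -f values.yaml -f secrets.yaml
ls secrets.yaml
```

Only the encrypted secrets.yaml gets committed to the repo; values.yaml stays plain.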
What do you think about storing your encrypted secrets in your repos using Sops?
If an entire region goes down, the Terraform state file stored there will not be useful at all: it only records information about the resources you deployed in that particular region, and those resources will be down as well.
Replicating the state file to another region will not help either, because it will still only describe the resources that are down in your region.
The state file inventories all the resources you have deployed to your cloud provider. Basically, Terraform uses it to know which resources are managed by the current Terraform code and to stay idempotent.
If you want to set up another region for disaster recovery (active-passive), you can use the same Terraform code with a different configuration (i.e. different tfvars files) to deploy the resources to a different region (not necessarily a different account). Just make sure that all your data is replicated into the passive region.
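As a sketch of that pattern (the region names and the environments/ directory layout are assumptions), the same code base is applied once per region, each time with its own variable file; the commands are printed rather than executed because a real run needs provider credentials:

```shell
# One tfvars file per region; the Terraform code itself never changes.
plan_cmds="$(for region in eu-west-1 eu-central-1; do
  echo "terraform apply -var-file=environments/${region}.tfvars"
done)"
echo "$plan_cmds"
```

Each region would also use its own state backend, so the two deployments stay independent.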
My apologies if I’m saying something stupid, but I see that this is built on top of Drone, which stopped being open source several years ago. Does this mean that Drone, as part of Gitness, has become open source again?
That makes much more sense than I thought.
I totally agree with that. But I suppose that with this rebranding they are trying to distance it from the original project as much as possible.
I’m not sure this move makes much sense right now, because by the time the project is finally released it will still be a fork of the original Terraform, but they may change that in the near future.
Hi! I’m afraid there is no single solution that covers all the functionality you are looking for. Anyway, these are the AWS services I use for most of the requirements you described. Take into account that most of them are paid AWS services, so your company will be charged for most of them.
Default blocking for certain CIDRs.
Exceptions for certain IP/Host and port combos within those CIDRs.
Use Security Groups (free of charge): https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html
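For example (the group ID, port and CIDR below are made up): security groups deny all inbound traffic unless a rule allows it, so the "default blocking" is implicit and you only add the explicit exceptions. The command is printed rather than executed since a real call needs AWS credentials:

```shell
# Hypothetical IDs: allow Postgres (tcp/5432) from one CIDR only.
sg_cmd="aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 5432 \
  --cidr 10.0.42.0/24"
echo "$sg_cmd"
```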
Authentication and authorisation to use said exceptions (i.e. user tracking).
You can implement user authentication using Amazon Cognito: https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html.
Additionally, you can delegate user authentication to an Application Load Balancer combined with Cognito. See: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-authenticate-users.html
Detailed logging on connections; source, dest, request and response sizes, ports, protocols, whatever we can get out hands on.
All of the above for all (?) kinds of TCP connections (HTTPS, Postgres, Oracle DB, MongoDB, as examples).
For connections through the load balancer I suggest enabling access logs (this requires an S3 bucket and will generate additional charges). For the rest of the connections you may want to check this, but I have never tried it.
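Access logs are enabled through load balancer attributes; a sketch (the ARN and bucket name are placeholders, and the command is printed rather than executed because a real call needs credentials and an S3 bucket with the right bucket policy):

```shell
# Placeholder ARN/bucket; access_logs.s3.* are the real attribute keys.
alb_cmd="aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/app/my-alb/ID \
  --attributes Key=access_logs.s3.enabled,Value=true Key=access_logs.s3.bucket,Value=my-log-bucket"
echo "$alb_cmd"
```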
Hi! After more than 6 years using Ansible I have not found a way to print the standard output of a program running under the command module, so I’m afraid the only way to achieve this is exactly what you suggest: using a debug task, something that has always seemed terribly ugly to me.
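For reference, the pattern I mean looks like this (task and variable names are just examples): register the result of the command task, then print it with a separate debug task.

```yaml
- name: Run the program
  command: /usr/local/bin/my-program   # hypothetical path
  register: my_program_out

- name: Print its stdout
  debug:
    msg: "{{ my_program_out.stdout }}"
```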
This is a very interesting approach that we are starting to fully adopt in our organization for our Kubernetes deployments.
We switched from Helm (using Helmfile) to ArgoCD to deploy applications into our clusters.
The main challenge here is how to design a good repository structure to organize the ArgoCD applications, because there is no official guidance on the best approach to follow.
Finally we decided to use ApplicationSets to deploy umbrella charts that are defined in the repo. The Chart.yaml of each umbrella lists the charts we actually want to deploy (such as Ingress Nginx) as dependencies, together with their chart versions, and the values.yaml contains the values for a particular cluster.
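As an illustration (the chart name and version numbers are examples; the dependency’s repository URL is the real ingress-nginx one):

```yaml
# Umbrella Chart.yaml: the charts we actually deploy are its dependencies.
apiVersion: v2
name: cluster-addons
version: 0.1.0
dependencies:
  - name: ingress-nginx
    version: "4.10.0"          # pinned example version
    repository: https://kubernetes.github.io/ingress-nginx
```

The values.yaml sitting next to it then carries the per-cluster overrides for those dependencies.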
Another interesting issue is how we manage secrets. We were using Sops along with the helm-secrets plugin to automatically decrypt secrets when running helmfile apply. Fortunately, the helm-secrets plugin can be installed as an add-on on ArgoCD via an init script or by building a custom ArgoCD image.
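For reference, helm-secrets is an ordinary Helm plugin (the URL below is its upstream repo); how you bake it into ArgoCD — init script versus custom image — is deployment-specific. The command is printed rather than executed since it needs Helm and network access:

```shell
plugin_cmd="helm plugin install https://github.com/jkroepke/helm-secrets"
echo "$plugin_cmd"
```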
I’m using Librewolf (a Firefox fork) and have the same issue.
Just check that the Firefox native messaging folder exists in your home directory:
ls -l ~/.mozilla/native-messaging-hosts
In my case, I needed to create a symlink to make it work with my browser:
ln -s ~/.mozilla/native-messaging-hosts ~/.librewolf/native-messaging-hosts
Maybe you can apply a similar workaround. Hope this helps!