Implementation Engineer
Implementation Engineer Duties and Responsibilities
- Understand requirements from internal consumers about program functionality.
- Perform UAT on applications using test cases, prepare the corresponding documentation, coordinate with the team to resolve all issues within the required timeframe, and inform management of any delays.
- Collaborate with the development team to design new programs for client implementation activities, manage communication with the department to resolve issues, and assist the implementation analyst in managing production data.
- Research client issues, document all findings, and track technical activities in JIRA.
- Assist internal teams in monitoring the software implementation lifecycle and in tracking appropriate software customizations for clients.
- Train technical staff on OS and software issues, identify issues in processes, and provide solutions for them. Train other team members on processes, procedures, API functionality, and development specifications.
- Supervise and support cross-functional teams through design, testing, and deployment to achieve on-time project completion.
- Implement, configure, and debug MySQL, Java, Redis, PHP, Node.js, and ActiveMQ setups.
- Monitor and troubleshoot infrastructure using syslog, SNMP, and other monitoring software.
- Install, configure, monitor, and upgrade applications during installation/upgrade activities.
- Assist the team in identifying network issues and help with their resolution.
- Use JIRA for issue reporting, status updates, activity planning, and tracking and updating project defects and tasks (a minimal REST sketch follows this list).
- Manage JIRA, track tickets to closure, and follow up with team members.
- Troubleshoot software issues.
- Provide on-call support as necessary.
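As a rough illustration of the JIRA-driven tracking mentioned above, the hedged Python sketch below queries open issues through JIRA's REST API using the requests library; the base URL, project key, credentials, and JQL filter are placeholders, not the employer's actual setup.

    import os
    import requests

    # Placeholder values -- adjust for the actual JIRA instance and project.
    JIRA_URL = os.environ.get("JIRA_URL", "https://example.atlassian.net")
    AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"])  # basic auth with an API token

    def open_issues(project_key="OPS", max_results=20):
        """Yield (key, summary, status) for unresolved issues in a project."""
        resp = requests.get(
            f"{JIRA_URL}/rest/api/2/search",
            params={
                "jql": f"project = {project_key} AND resolution = Unresolved ORDER BY priority DESC",
                "maxResults": max_results,
                "fields": "summary,status",
            },
            auth=AUTH,
            timeout=10,
        )
        resp.raise_for_status()
        for issue in resp.json()["issues"]:
            yield issue["key"], issue["fields"]["summary"], issue["fields"]["status"]["name"]

    if __name__ == "__main__":
        for key, summary, status in open_issues():
            print(f"{key} [{status}] {summary}")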
Implementation Engineer Requirements and Qualifications
- Bachelor’s degree in computer science, software engineering, or a related field
- Experience working with:
  - Linux and Windows operating systems
  - Shell and batch (.bat) scripts
  - SIP/ISUP-based solutions
  - Deploying and debugging Java and C++ based solutions
  - MySQL: installing, backing up, updating, and retrieving data (a minimal backup sketch follows this list)
  - Front-end or back-end software development for Linux
  - Database management and security (a plus)
- Very good debugging and analytical skills
- Good Communication skills
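To make the MySQL item above concrete, here is a minimal, hedged Python sketch of a nightly logical backup; it assumes mysqldump is installed and that the credentials, database name, and backup directory (all placeholders here) are supplied via environment variables.

    import datetime
    import os
    import pathlib
    import subprocess

    # Placeholder configuration -- supplied via environment variables in practice.
    DB_NAME = os.environ.get("DB_NAME", "appdb")
    DB_USER = os.environ.get("DB_USER", "backup")
    DB_PASSWORD = os.environ["DB_PASSWORD"]
    BACKUP_DIR = pathlib.Path(os.environ.get("BACKUP_DIR", "/var/backups/mysql"))

    def dump_database():
        """Write a timestamped mysqldump of one database and return the file path."""
        BACKUP_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        outfile = BACKUP_DIR / f"{DB_NAME}-{stamp}.sql"
        with open(outfile, "w") as fh:
            subprocess.run(
                ["mysqldump", f"--user={DB_USER}", f"--password={DB_PASSWORD}",
                 "--single-transaction", DB_NAME],
                stdout=fh,
                check=True,  # raise if mysqldump exits non-zero
            )
        return outfile

    if __name__ == "__main__":
        print(f"Backup written to {dump_database()}")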
Bito is a startup that is using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, performing more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and incredibly difficult!
We are building this company with a fully remote approach, with our main teams for time zone management in the US and in India. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for cloud deployments and identify opportunities for efficiency and cost reduction (see the sketch after this list)
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
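As one way to approach the cost-reduction analysis mentioned above (a hedged sketch under stated assumptions, not the team's actual tooling), the Python/boto3 snippet below flags running EC2 instances whose average CPU utilization over the last week is low; the region, threshold, and lookback window are arbitrary placeholders.

    import datetime
    import boto3

    REGION = "us-east-1"          # placeholder region
    CPU_THRESHOLD = 5.0           # flag instances averaging below 5% CPU
    LOOKBACK = datetime.timedelta(days=7)

    ec2 = boto3.client("ec2", region_name=REGION)
    cloudwatch = boto3.client("cloudwatch", region_name=REGION)
    end = datetime.datetime.utcnow()
    start = end - LOOKBACK

    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]

    for reservation in reservations:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            datapoints = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=start,
                EndTime=end,
                Period=3600,
                Statistics=["Average"],
            )["Datapoints"]
            if datapoints:
                avg = sum(p["Average"] for p in datapoints) / len(datapoints)
                if avg < CPU_THRESHOLD:
                    print(f"{instance_id}: avg CPU {avg:.1f}% -- candidate for rightsizing")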
Requirements:
- Minimum of 4 years of relevant work experience in a DevOps role
- At least 3 years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of core cloud services, managed big data services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx (a short Docker SDK sketch follows this list)
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery in configuration automation tool sets such as Ansible, Chef, etc
- Proficiency with Jira, Confluence, and Git toolset
- Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
- Proven ability to manage and prioritize multiple diverse projects simultaneously
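As a small illustration of the Docker experience listed above (a hedged sketch assuming the Docker SDK for Python and a local Docker daemon; the image, container name, and port mapping are arbitrary examples), the snippet below starts an Nginx container and checks its state.

    import docker  # Docker SDK for Python (pip install docker)

    client = docker.from_env()  # talks to the local Docker daemon

    # Run a throwaway Nginx container, mapping container port 80 to host port 8080.
    container = client.containers.run(
        "nginx:alpine",
        detach=True,
        ports={"80/tcp": 8080},
        name="demo-nginx",
    )

    container.reload()  # refresh attributes from the daemon
    print(container.name, container.status)

    # Clean up once finished.
    container.stop()
    container.remove()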
What we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
· Work from anywhere
· Flexible work timings
· Competitive compensation, including stock options
· A chance to work in the exciting generative AI space
· Quarterly team offsite events
Position Summary
The Cloud Engineer helps to design solutions for, enable, migrate, and onboard clients onto a secure cloud platform, which offloads the heavy lifting from clients so that they can focus on their own business value creation.
Job Description
- Assess existing customer systems and/or cloud environments to determine the best migration approach and the supporting tools to use
- Build a secure and compliant cloud environment, with a proven enterprise operating model, on-going cost optimisation, and day-to-day infrastructure management
- Provide and implement cloud solutions that reduce operational overhead and risk, automate common activities such as change requests, monitoring, patch management, security, and backup services (a snapshot-automation sketch follows this list), and provide full-lifecycle services to provision, run, and support client infrastructure
- Collaborate with internal service teams to meet the clients’ needs for their infrastructure and application deployments
- Troubleshoot complex infrastructure deployments, recreate customer issues, and build proof-of-concept environments that follow cloud best practices and well-architected frameworks
- Apply advanced troubleshooting techniques to provide unique solutions to our customers’ individual needs
- Work on critical, highly complex customer problems that will span across multiple cloud platforms and services
- Identify and drive improvements on process- and technology-related issues. Act as an escalation point of contact for clients
- Drive client meetings and communication during reviews
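As a hedged example of the backup automation mentioned in the list above (illustrative only, not the employer's tooling), this Python/boto3 sketch snapshots every EBS volume carrying a hypothetical Backup=true tag; the tag name and region are placeholders.

    import datetime
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-southeast-1")  # placeholder region

    # Find volumes opted in to backups via a (hypothetical) Backup=true tag.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
    )["Volumes"]

    stamp = datetime.datetime.utcnow().strftime("%Y-%m-%d")
    for volume in volumes:
        snapshot = ec2.create_snapshot(
            VolumeId=volume["VolumeId"],
            Description=f"automated backup {stamp}",
        )
        print(f"Started snapshot {snapshot['SnapshotId']} for {volume['VolumeId']}")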
Requirements:
- Degree in computer science or a similar field.
- At least 2 years of experience in the field of cloud computing.
- Experience with CI/CD systems.
- Strong knowledge of cloud services
- Exposure to AWS/GCP and other cloud-based infrastructure platforms
- Experience with AWS configuration and management – EC2, S3, EBS, ELB, IAM, VPC, RDS, CloudFront etc
- Exposure to architecting, designing, developing, and implementing cloud solutions on AWS or other cloud platforms such as Azure and Google Cloud
- Proficient in the use and administration of all versions of MS Windows Server
- Experience with Linux and Windows system administration and web server configuration and monitoring
- Solid programming skills in Python, Java, Perl
- Good understanding of software design principles and best practices
- Good knowledge of REST APIs (a minimal health-check sketch follows this list)
- Hands-on experience with at least one deployment orchestration tool (Jenkins, Urbancode, Bamboo, etc.)
- Experience with Docker, Kubernetes, and Helm charts
- Hands on experience in Ansible and Git repositories
- Knowledge in Maven / Gradle
- Azure, AWS, and GCP certifications are preferred.
- Troubleshooting and analytical skills.
- Good communication and collaboration skills.
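Tying together the REST API and deployment orchestration items above, here is a minimal, hedged post-deployment smoke check in Python that a pipeline stage could call; the endpoint URL and the expected JSON field are hypothetical.

    import sys
    import requests

    HEALTH_URL = "https://app.example.com/api/health"  # hypothetical endpoint

    def check_health(url=HEALTH_URL, timeout=5):
        """Return True if the service responds 200 and reports status 'ok'."""
        try:
            resp = requests.get(url, timeout=timeout)
            resp.raise_for_status()
            return resp.json().get("status") == "ok"
        except (requests.RequestException, ValueError):
            return False

    if __name__ == "__main__":
        # A non-zero exit code fails the calling pipeline stage.
        sys.exit(0 if check_health() else 1)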
Come Dive In
The DevOps Engineer will execute the tools and processes to enable DevOps.
You will engage in and improve the whole lifecycle of services, from inception and design through deployment, operation, and refinement, to efficiently deliver high-quality solutions. The candidate should bridge the gap between the Development and Operations teams, working with development teams to meet acceptance criteria and to gather and document requirements. Candidates should be able to work in fast-paced, multi-disciplinary environments.
As a DevOps Engineer, You Will
● Work in a dynamic, agile team environment developing excellent new applications.
● Participate in design decisions, including new technology research and prototyping
● Collaborate closely with other AWS engineers and architects, cloud engineers, support teams, and other stakeholders
● Promote great Kubernetes and AWS platform design and quality
● Innovate new ideas to evolve our applications and processes
● Continuously analyze and evaluate our systems, products, and methods for potential improvements.
Mandatory Skills:
● Experience with Linux-based infrastructure
● Experience with Amazon ECS
● Hands-on experience with containerized services
● Must know AWS CI/CD pipelines.
● Must know DevOps concepts and Agile principles
● Knowledge of Git, Docker, and Jenkins
● Knowledge of Infrastructure as Code.
● Experience in using Automation Tools
● Must have experience setting up a Test-Driven Development environment (a minimal pytest sketch follows this list).
● Working knowledge of Docker and Kubernetes
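To illustrate the test-driven development point above (a generic, hedged sketch rather than this team's actual setup), a pytest workflow starts from failing tests and then adds just enough code to pass; the slugify function here is invented for the example.

    # test_slugify.py -- a single-file pytest example: the tests were written first
    # (red), then the minimal slugify() implementation was added to pass them (green).
    import pytest

    def slugify(text: str) -> str:
        """Minimal implementation added after the tests below were written."""
        if not text:
            raise ValueError("text must be non-empty")
        return "-".join(text.lower().split())

    def test_spaces_become_hyphens():
        assert slugify("DevOps Engineer") == "devops-engineer"

    def test_empty_string_rejected():
        with pytest.raises(ValueError):
            slugify("")

    # Run with: pytest -q test_slugify.py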
We recognize that asking you to give 100% of yourself daily requires us to show you the love.
PERKS: what can we offer you?
● Bi-Yearly performance audits and appraisals
● The flexibility of working days/hours
● 5 working days/week (Mon to Fri) and added payout for working Saturday
● Recognition and Appreciation
● A plethora of industry exposure and self-growth opportunities
Visit our site: www.cedcoss.com
Technical Skills
- Preferred: experience in development with Kafka or big data technologies, including an understanding of essential Kafka components such as ZooKeeper and brokers, and optimization of Kafka client applications, i.e., producers and consumers (a minimal producer/consumer sketch follows this list)
- Experience with automation of infrastructure, testing, DB deployment automation, and logging/monitoring/alerting
- AWS services experience with CloudFormation, ECS, Elastic Container Registry, pipelines, CloudWatch, Glue, and other related services
- AWS Elastic Kubernetes Service (EKS): managing and auto-scaling Kubernetes and containers
- Good knowledge of and hands-on experience with various AWS services such as EC2, RDS, EKS, S3, Lambda, API, CloudWatch, etc.
- Good and quick with log analysis to perform Root Cause Analysis (RCA) on production deployments and container errors in CloudWatch
- Working on ways to automate and improve deployment and release processes
- High understanding of the serverless architecture concept
- Good with deployment automation tools and with investigating and resolving technical issues
- Sound knowledge of APIs, databases, and container-based ETL jobs
- Planning out projects and being involved in project management decisions
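As promised in the Kafka item above, here is a minimal, hedged producer/consumer sketch using the kafka-python client; the broker address, topic name, and consumer group are placeholders.

    import json
    from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

    BROKERS = "localhost:9092"   # placeholder broker list
    TOPIC = "orders"             # placeholder topic

    # Producer: send a JSON-encoded event and wait for the broker to acknowledge it.
    producer = KafkaProducer(
        bootstrap_servers=BROKERS,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        acks="all",              # trade latency for durability
    )
    producer.send(TOPIC, {"order_id": 42, "status": "created"})
    producer.flush()

    # Consumer: read events from the beginning of the topic in a consumer group.
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKERS,
        group_id="demo-group",
        auto_offset_reset="earliest",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        consumer_timeout_ms=5000,  # stop iterating if no messages arrive for 5s
    )
    for message in consumer:
        print(message.topic, message.offset, message.value)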
Soft Skills
- Adaptability
- Collaboration with different teams
- Good communication skills
- Team player attitude
Position: DevOps Engineer
Job Description
The candidate should have the following Skills:
- Hands-on experience with DevOps and CI/CD open-source tools (e.g., Jenkins), including AWS DevOps services (CodePipeline, CloudFormation, etc.)
- Experience in building and deploying with Java/Python/Node.js on cloud infrastructure (Docker or Kubernetes containers, or Lambda)
- Exposure to Cloud operations, releases, and configuration management
- Experience in implementing Non-functional requirements for microservices, including performance, security, compliance, HA and Disaster Recovery.
- Good soft skills, great attitude, and passion for working in a product startup environment
Total experience of 2-5 years post BE, BTech, or MCA in Computer Science/Engineering.
Roles & Responsibilities:
- Design, implement and maintain all AWS infrastructure and services within a managed service environment
- Should be able to work 24x7 shifts to support the infrastructure.
- Design, Deploy and maintain enterprise class security, network and systems management applications within an AWS environment
- Design and implement availability, scalability, and performance plans for the AWS managed service environment
- Continual re-evaluation of existing stack and infrastructure to maintain optimal performance, availability and security
- Manage the production deployment and deployment automation
- Implement process and quality improvements through task automation
- Institute infrastructure as code, security automation, and automation of routine maintenance tasks
- Experience with containerization and orchestration tools like Docker and Kubernetes
- Build, deploy, and manage Kubernetes clusters through automation (a short client-library sketch follows this list)
- Create and deliver knowledge sharing presentations and documentation for support teams
- Learn on the job and explore new technologies with little supervision
- Work effectively with onsite/offshore teams
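Related to the Kubernetes cluster management items above, the hedged Python sketch below uses the official Kubernetes client to list pods that are not in the Running phase; it assumes a kubeconfig is already in place (for example, one generated for an EKS cluster) and is illustrative only.

    from kubernetes import client, config  # pip install kubernetes

    # Load credentials from the local kubeconfig.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Report any pod across all namespaces that is not currently Running or Succeeded.
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")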
Qualifications:
- Must have Bachelor's degree in Computer Science or related field and 4+ years of experience in IT
- Experience in designing, implementing, and maintaining all AWS infrastructure and services
- Design and implement availability, scalability, and performance plans for the AWS managed service environment
- Continual re-evaluation of existing stack and infrastructure to maintain optimal performance, availability, and security
- Hands-on technical expertise in Security Architecture, automation, integration, and deployment
- Familiarity with compliance & security standards across the enterprise IT landscape
- Extensive experience with Kubernetes and AWS (IAM, Route53, SSM, S3, EFS, EBS, ELB, Lambda, CloudWatch, CloudTrail, SQS, SNS, RDS, CloudFormation, DynamoDB)
- Solid understanding of AWS IAM Roles and Policies
- Solid Linux experience with a focus on web (Apache Tomcat/Nginx)
- Experience with automation/configuration management using Terraform, Chef, Ansible, or similar.
- Understanding of protocols/technologies like Microservices, HTTP/HTTPS, SSL/TLS, LDAP, JDBC, SQL, HTML
- Experience in managing and working with the offshore teams
- Familiarity with CI/CD systems such as Jenkins, GitLab CI
- Scripting experience (Python, Bash, etc.)
- AWS, Kubernetes Certification is preferred
- Ability to work with and influence Engineering teams
The mission of R&D IT Design Infrastructure is to offer a state-of-the-art design environment
for the chip hardware designers. The R&D IT design environment is a complex landscape of EDA applications, high-performance compute, and storage environments, consolidated in five regional datacenters. Over 7,000 chip hardware designers, spread across 40+ locations around the world, use this state-of-the-art design environment to design new chips and drive the company's innovation. The following figures give an idea of the scale: the landscape has 75,000+ cores and 30+ PBytes of data, and serves 2,000+ CAD applications and versions. The operational service teams are globally organized to provide 24/7 support to the chip hardware design and software design projects.
Since the landscape is really too complex to manage the traditional way, it is our strategy to transform our R&D IT design infrastructure into “software-defined datacenters”. This transformation entails a different way of work and a different mind-set (DevOps, Site Reliability Engineering) to ensure that our IT services are reliable. That’s why we are looking for a DevOps Linux Engineer to strengthen the team that is building a new on-premise software defined virtualization and containerization platform (PaaS) for our IT landscape, so that we can manage it with best practices from software engineering and offer the IT service reliability which is required by our chip hardware design community.
It will be your role to develop and maintain the base Linux OS images that are offered via automation to the customers of the internal (on-premise) cloud platforms.
Your responsibilities as DevOps Linux Engineer:
• Develop and maintain the base RedHat Linux operating system images
• Develop and maintain code to configure and test the base OS image
• Provide input to support the team in designing, developing, and maintaining automation products with playbooks (YAML) and modules (Python/PowerShell) in tools like Ansible Tower and ServiceNow (a minimal custom-module sketch follows this list)
• Test and verify the code produced by the team (including your own) to continuously
improve and refactor
• Troubleshoot and solve incidents on the RedHat Linux operating system
• Work actively with other teams to align on the architecture of the PaaS solution
• Keep the base OS image up to date via patches, or make sure patches are available to the virtual machine owners
• Train team members and others with your extensive automation knowledge
• Work together with ServiceNow developers in your team to provide the best intuitive
end-user experience possible for the virtual machine OS deployments
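To give a flavor of the playbook-and-module work described in the responsibilities above (a hedged sketch; the module purpose and its argument are invented for illustration), a custom Ansible module in Python typically wraps its logic in AnsibleModule like this:

    #!/usr/bin/python
    # Hypothetical module: reports whether a path exists on the managed host.
    import os
    from ansible.module_utils.basic import AnsibleModule

    def main():
        module = AnsibleModule(
            argument_spec=dict(
                path=dict(type="str", required=True),
            ),
            supports_check_mode=True,  # the module only reads state, so check mode is safe
        )
        path = module.params["path"]
        module.exit_json(changed=False, exists=os.path.exists(path))

    if __name__ == "__main__":
        main()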
We are looking for a DevOps engineer/consultant with the following characteristics:
• Master or Bachelor degree
• You are a technical, creative, analytical, and open-minded engineer who is eager to learn and not afraid to take initiative.
• Your favorite t-shirt has “Linux” or “RedHat” printed on it at least once.
• Linux guru: you have great knowledge of Linux servers (RedHat), RedHat Satellite 6, and other RedHat products
• Experience in Infrastructure services, e.g., Networking, DNS, LDAP, SMTP
• DevOps mindset: You are a team-player that is eager to develop and maintain cool
products to automate/optimize processes in a complex IT infrastructure and are able
to build and maintain productive working relationships
• You have great English communication skills, both verbally and in writing.
• Willingness to work outside business hours to support the platform for critical R&D applications
Other competences we value, but are not strictly mandatory:
• Experience with agile development methods, like Scrum, and conviction of their power to deliver products with immense (business) value.
• “Security” is your middle name, and you are always challenging yourself and your colleagues to design and develop new solutions that are as secure as possible.
• Mastery of automation and orchestration with tools like Ansible Tower (or comparable), and comfort developing new modules in Python or PowerShell.
• It would be awesome if you are already a true Yoda when it comes to code version
control and branching strategies with Git, and preferably have worked with GitLab
before.
• Experience with automated testing in a CI/CD pipeline with Ansible, Python, and tools like Selenium (a minimal Selenium sketch follows at the end of this list).
• An enthusiast on cloud platforms like Azure & AWS.
• Background in and affinity with R&D
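For the automated-testing point above, a minimal, hedged Selenium smoke test in Python might look like the following; the URL and expected page title are placeholders, and a headless Chrome browser is assumed to be available on the test runner.

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.add_argument("--headless=new")  # run without a display, e.g., inside a CI job
    driver = webdriver.Chrome(options=options)

    try:
        driver.get("https://portal.example.com/login")   # placeholder URL
        assert "Login" in driver.title, f"unexpected title: {driver.title}"
        print("Smoke test passed")
    finally:
        driver.quit()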