13+ Docker Jobs in Jaipur | Docker Job openings in Jaipur
Key Responsibilities
• As a part of the DevOps team, you will be responsible for the configuration, optimization, documentation, and support of the CI/CD components.
• Creating and managing build and release pipelines with Azure DevOps and Jenkins.
• Assist in planning and reviewing application architecture and design to promote an efficient deployment process.
• Troubleshoot server performance issues & handle the continuous integration system.
• Automate infrastructure provisioning using ARM Templates and Terraform (see the sketch after this list).
• Monitor and Support deployment, Cloud-based and On-premises Infrastructure.
• Diagnose and develop root-cause solutions for failures and performance issues in the production environment.
• Deploy and manage Infrastructure for production applications
• Configure security best practices for application and infrastructure
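As an illustration of the Terraform bullet above, here is a minimal, hedged sketch of wrapping the Terraform CLI in a Python helper for a pipeline step; it assumes Terraform is installed on the agent and that the configuration lives in a hypothetical infra/ directory.

    import subprocess
    import sys

    def run_terraform(workdir: str = "infra") -> None:
        """Run a basic init/plan/apply cycle in the given Terraform directory."""
        for args in (
            ["terraform", "init", "-input=false"],
            ["terraform", "plan", "-input=false", "-out=tfplan"],
            ["terraform", "apply", "-input=false", "tfplan"],
        ):
            # check=True aborts the pipeline step as soon as a Terraform command fails
            subprocess.run(args, cwd=workdir, check=True)

    if __name__ == "__main__":
        try:
            run_terraform()
        except subprocess.CalledProcessError as exc:
            sys.exit(f"Terraform step failed: {exc}")

The same wrapper pattern would apply to ARM template deployments driven through the Azure CLI.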
Essential Requirements
• Good hands-on experience with cloud platforms like Azure, AWS, and GCP (preferably Azure).
• Strong knowledge of CI/CD principles.
• Strong work experience with CI/CD implementation tools like Azure DevOps, TeamCity, Octopus Deploy, AWS CodeDeploy, and Jenkins.
• Experience writing automation scripts with PowerShell, Bash, Python, etc. (see the sketch after this list).
• Experience with GitHub, JIRA, Confluence, and Continuous Integration (CI) systems.
• Understanding of secure DevOps practices
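As one example of the automation scripting mentioned above, here is a small, stdlib-only Python sketch that prunes build artifacts older than a configurable number of days; the artifacts/ path and the 14-day default are illustrative assumptions.

    import argparse
    import time
    from pathlib import Path

    def prune_artifacts(root: str, max_age_days: int) -> int:
        """Delete files under `root` older than `max_age_days`; return how many were removed."""
        cutoff = time.time() - max_age_days * 86400
        removed = 0
        for path in Path(root).rglob("*"):
            if path.is_file() and path.stat().st_mtime < cutoff:
                path.unlink()
                removed += 1
        return removed

    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="Prune old build artifacts")
        parser.add_argument("--root", default="artifacts")
        parser.add_argument("--max-age-days", type=int, default=14)
        args = parser.parse_args()
        print(f"Removed {prune_artifacts(args.root, args.max_age_days)} file(s)")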
Good to Have -
• Knowledge of scripting languages such as PowerShell, Bash
• Experience with project management and workflow methodologies and tools such as Agile (Scrum/Kanban) and Jira.
• Experience with Build technologies and cloud services. (Jenkins, TeamCity, Azure DevOps, Bamboo, AWS Code Deploy)
• Strong communication skills and the ability to explain protocols and processes to the team and management.
• Must be able to handle multiple tasks and adapt to a constantly changing environment.
• Must have a good understanding of SDLC.
• Knowledge of Linux, Windows server, Monitoring tools, and Shell scripting.
• Self-motivated, with a demonstrated ability to pick up new technologies with minimal supervision.
• Organized, flexible, and analytical, with the ability to solve problems creatively.
at Altimetrik
Location: Chennai, Pune, Bangalore, Jaipur | Experience: 5 to 8 years
- Implement best practices for the engineering team across code hygiene, overall architecture design, testing, and deployment activities
- Drive technical decisions for building data pipelines, data lakes, and analyst access.
- Act as a leader within the engineering team, providing support and mentorship for teammates across functions
- Bachelor’s Degree in Computer Science or equivalent job experience
- Experienced developer in large data environments
- Experience using Git productively in a team environment
- Experience with Docker
- Experience with Amazon Web Services
- Ability to sit with business or technical SMEs to listen, learn and propose technical solutions to business problems
- Experience using and adapting to new technologies
- Take and understand business requirements and goals
- Work collaboratively with project managers and stakeholders to make sure that all aspects of the project are delivered as planned
- Strong SQL skills with MySQL or PostgreSQL
- Experience with non-relational databases and their role in web architectures desired
Knowledge and Experience:
- Good experience with Elixir and functional programming is a plus
- Several years of Python experience
- Excellent analytical and problem-solving skills
- Excellent organizational skills
- Proven verbal and written cross-department and customer communication skills
at Altimetrik
DevOps Architect
Experience: 10-12+ years of relevant experience in DevOps
Locations: Bangalore, Chennai, Pune, Hyderabad, Jaipur.
Qualification:
• Bachelor's or advanced degree in Computer Science, Software Engineering, or equivalent is required.
• Certifications in specific areas are desired
Technical Skillset (Skill - Proficiency Level):
- Build tools (Ant or Maven) - Expert
- CI/CD tool (Jenkins or GitHub CI/CD) - Expert
- Cloud DevOps (AWS CodeBuild, CodeDeploy, CodePipeline, etc.) or Azure DevOps - Expert
- Infrastructure As Code (Terraform, Helm charts etc.) - Expert
- Containerization (Docker, Docker Registry) - Expert
- Scripting (Linux) - Expert
- Cluster deployment (Kubernetes) & maintenance - Expert (see the sketch after this skillset list)
- Programming (Java) - Intermediate
- Application types for DevOps (streaming such as Spark and Kafka, big data such as Hadoop, etc.) - Expert
- Artifactory (JFrog) - Expert
- Monitoring & Reporting (Prometheus, Grafana, PagerDuty etc.) - Expert
- Ansible, MySQL, PostgreSQL - Intermediate
• Source Control (like Git, Bitbucket, SVN, VSTS, etc.)
• Continuous Integration (like Jenkins, Bamboo, VSTS)
• Infrastructure Automation (like Puppet, Chef, Ansible)
• Deployment Automation & Orchestration (like Jenkins, VSTS, Octopus Deploy)
• Container Concepts (Docker)
• Orchestration (Kubernetes, Mesos, Swarm)
• Cloud (like AWS, Azure, Google Cloud, OpenStack)
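Touching on the Kubernetes item in the skillset above, here is a hedged sketch using the official Python client (the kubernetes package); it assumes a kubeconfig is already configured locally and simply reports deployment replica health in one namespace.

    from kubernetes import client, config

    def report_deployments(namespace: str = "default") -> None:
        """Print ready vs. desired replicas for each Deployment in the namespace."""
        config.load_kube_config()  # assumes ~/.kube/config points at the target cluster
        apps = client.AppsV1Api()
        for dep in apps.list_namespaced_deployment(namespace).items:
            desired = dep.spec.replicas or 0
            ready = dep.status.ready_replicas or 0
            status = "OK" if ready == desired else "DEGRADED"
            print(f"{dep.metadata.name}: {ready}/{desired} ready ({status})")

    if __name__ == "__main__":
        report_deployments()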
Roles and Responsibilities
• The DevOps architect should automate processes with the proper tools.
• Developing appropriate DevOps channels throughout the organization.
• Evaluating, implementing and streamlining DevOps practices.
• Establishing a continuous build environment to accelerate software deployment and development processes.
• Engineering general and effective processes.
• Helping operations and development teams solve their problems.
• Supervising, examining, and handling technical operations.
• Defining DevOps processes and operations.
• Capacity to lead and manage teams.
• Must possess excellent automation skills and the ability to drive initiatives to automate processes.
• Building strong cross-functional leadership skills and working together with the operations and engineering teams to make sure that systems are scalable and secure.
• Excellent knowledge of software development and software testing methodologies along with configuration management practices in Unix and Linux-based environment.
• Possess sound knowledge of cloud-based environments.
• Experience in handling automated deployment CI/CD tools.
• Must possess excellent knowledge of infrastructure automation tools (Ansible, Chef, and Puppet).
• Hands-on experience working with Amazon Web Services (AWS).
• Must have strong expertise in operating Linux/Unix environments and scripting languages like Python, Perl, and Shell.
• Ability to review deployment and delivery pipelines i.e., implement initiatives to minimize chances of failure, identify bottlenecks and troubleshoot issues.
• Previous experience in implementing continuous delivery and DevOps solutions.
• Experience in designing and building solutions to move data and process it.
• Must possess expertise in any of the coding languages depending on the nature of the job.
• Experience with containers and container orchestration tools (AKS, EKS, OpenShift, Kubernetes, etc)
• Experience with version control systems a must (GIT an advantage)
• Belief in "Infrastructure as Code" (IaC), including experience with open-source tools such as Terraform
• Treats best practices for security as a requirement, not an afterthought
• Extensive experience with version control systems like GitLab and their use in release management, branching, merging, and integration strategies
• Experience working with Agile software development methodologies
• Proven ability to work on cross-functional Agile teams
• Mentor other engineers in best practices to improve their skills
• Creating suitable DevOps channels across the organization.
• Designing efficient practices.
• Delivering comprehensive best practices.
• Managing and reviewing technical operations.
• Ability to work independently and as part of a team.
• Exceptional communication skills, be knowledgeable about the latest industry trends, and highly innovative
- At least 5 years of experience in cloud technologies (AWS and Azure) and development.
- Experience implementing DevOps practices and DevOps tools in areas like CI/CD with Jenkins, environment automation, release automation, virtualization, infrastructure as code, and metrics tracking.
- Hands on experience in DevOps tools configuration in different environments.
- Strong knowledge of working with DevOps design patterns, processes and best practices
- Hands-on experience in setting up build pipelines.
- Prior working experience in system administration or architecture in Windows or Linux.
- Must have experience in GIT (BitBucket, GitHub, GitLab)
- Hands-on experience with Jenkins pipeline scripting (see the sketch after this list).
- Hands-on knowledge in one scripting language (Nant, Perl, Python, Shell or PowerShell)
- Configuration level skills in tools like SonarQube (or similar tools) and Artifactory.
- Expertise on Virtual Infrastructure (VMWare or VirtualBox or QEMU or KVM or Vagrant) and environment automation/provisioning using SaltStack/Ansible/Puppet/Chef
- Deploying, automating, maintaining and managing Azure cloud based production systems including monitoring capacity.
- Good to have experience in migrating code repositories from one source control to another.
- Hands-on experience with Docker containers and orchestration-based deployments such as Kubernetes, Service Fabric, and Docker Swarm.
- Must have good communication and problem-solving skills
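Relating to the Jenkins pipeline scripting item above, the following is a minimal hedged sketch that uses the python-jenkins client to trigger a parameterised pipeline build and wait for it to start; the server URL, credentials, job name, and parameter are placeholders, and a recent python-jenkins version is assumed (where build_job returns a queue id).

    import time

    import jenkins  # pip install python-jenkins

    server = jenkins.Jenkins(
        "https://jenkins.example.com",  # placeholder URL
        username="ci-bot",              # placeholder credentials
        password="api-token",
    )

    # Trigger a parameterised pipeline job and poll the queue until it starts.
    queue_id = server.build_job("my-app-pipeline", {"BRANCH": "main"})
    while True:
        item = server.get_queue_item(queue_id)
        if item.get("executable"):      # present once the build has started
            build_number = item["executable"]["number"]
            break
        time.sleep(2)

    info = server.get_build_info("my-app-pipeline", build_number)
    print("Started build", build_number, info["url"])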
Skills Required:
- Good experience with programming language Python
- Strong experience with Docker (see the sketch after this list).
- Good knowledge of any cloud platform, such as Azure.
- Must be comfortable working in a Linux environment.
- Must have exposure to the IoT domain and its protocols (Zigbee, BLE, LoRa, Modbus).
- Must be a good team player.
- Strong Communication Skills
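For the Docker requirement above, here is a small hedged sketch using the Docker SDK for Python (the docker package); it assumes a local Docker daemon, a Dockerfile in the current directory, and a Python-based base image, all of which are illustrative.

    import docker  # pip install docker

    def build_and_run(tag: str = "iot-gateway:dev") -> str:
        """Build an image from the current directory and run a throwaway container."""
        client = docker.from_env()  # talks to the local Docker daemon
        image, _logs = client.images.build(path=".", tag=tag)
        # remove=True cleans the container up after the command exits
        output = client.containers.run(image, command="python --version", remove=True)
        return output.decode().strip()

    if __name__ == "__main__":
        print(build_and_run())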
Years of Experience – 2-3 years
Location – Flexible (Pune/Jaipur Preferred), India
Position Summary
At Clarista.io, we are driven to create a connected data world for enterprises, empowering their employees with the information they need to compete in the digital economy. Information is power, but only if it can be harnessed by people.
Clarista turns current enterprise data silos into a ‘Live Data Network’ that is easy to use and always available, with the flexibility to create any analytics and with controls to ensure the quality and security of the information.
Clarista is designed with business teams in mind; hence, ensuring performance with large datasets and a superior user experience is critical to the success of the product.
What You'll Do
You will be part of our data platform & data engineering team. As part of this agile team, you will work in our cloud-native environment and perform the following activities to support core product development and client-specific projects:
• You will develop the core engineering frameworks for an advanced self-service data analytics product.
• You will work with multiple types of data storage technologies such as relational, blobs, key-value stores, document databases and streaming data sources.
• You will work with latest technologies for data federation with MPP (Massive Parallel Processing) capabilities
• Your work will entail backend architecture to enable product capabilities, data modeling, data queries for UI functionality, data processing for client specific needs and API development for both back-end and front-end data interfaces.
• You will build real-time monitoring dashboards and alerting systems.
• You will integrate our product with other data products through APIs
• You will partner with other team members in understanding the functional and non-functional business requirements, and translating them into software development tasks
• You will follow software development best practices, ensuring that the code architecture and the quality of the code you write are of a high standard, as expected of enterprise software
• You will be a proactive contributor to team and project discussions
Who you are
• Strong education track record - Bachelor's or an advanced degree in Computer Science or a related engineering discipline from an Indian Institute of Technology or an equivalent premier institute.
• 2-3 years of experience in Big Data and Data Engineering.
• Strong knowledge of advanced SQL, data federation and distributed architectures
• Excellent Python programming skills. Familiarity with Scala and Java is highly preferred
• Strong knowledge of and experience with modern, distributed data stack components such as Spark, Hive, Airflow, Kubernetes, Docker, etc.
• Experience with cloud environments (AWS, Azure) and native cloud technologies for data storage and data processing
• Experience with relational SQL and NoSQL databases and storage, including Postgres, blob stores, MongoDB, etc.
• Experience with data pipeline and workflow management tools such as Airflow, Dataflow, Dataproc, etc. (see the sketch after this list)
• Experience with Big Data processing and performance optimization
• Should know how to write modular and optimized code.
• Should have good knowledge around error handling.
• Fair understanding of responsive design and cross-browser compatibility issues.
• Experience with version control systems such as Git
• Strong problem solving and communication skills.
• Self-starter, continuous learner.
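To illustrate the workflow-tooling point above, here is a hedged sketch of a minimal Airflow DAG in Python; the DAG id, task logic, and daily schedule are illustrative assumptions rather than part of the role description.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pulling rows from the source system")  # placeholder step

    def load():
        print("writing rows to the warehouse")  # placeholder step

    with DAG(
        dag_id="daily_orders_pipeline",  # illustrative name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task  # run load only after extract succeeds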
Good to have some exposure to
• Start-up experience is highly preferred
• Exposure to any Business Intelligence (BI) tools like Tableau, Dundas, Power BI etc.
• Agile software development methodologies.
• Working in multi-functional, multi-location teams
What You'll Love About Us – Do ask us about these!
• Be an integral part of the founding team. You will work directly with the founder
• Work Life Balance. You can't do a good job if your job is all you do!
• Prepare for the Future. Academy – we are all learners; we are all teachers!
• Diversity & Inclusion. HeForShe!
• Internal Mobility. Grow with us!
• Business knowledge of multiple sectors
Hiring for a product-based organization, PAN India
Skills required:
Strong knowledge and experience of cloud infrastructure (AWS, Azure or GCP), systems, network design, and cloud migration projects.
Strong knowledge and understanding of CI/CD processes and tools (Jenkins/Azure DevOps) is a must.
Strong knowledge and understanding of Docker & Kubernetes is a must.
Strong knowledge of Python, along with one more language (Shell, Groovy, or Java).
Strong prior experience using automation tools like Ansible, Terraform.
Architect systems, infrastructure & platforms using Cloud Services.
Strong communication skills. Should have demonstrated the ability to collaborate across teams and organizations.
Benefits of working with OpsTree Solutions:
Opportunity to work on the latest cutting edge tools/technologies in DevOps
Knowledge focused work culture
Collaboration with very enthusiastic DevOps experts
High growth trajectory
Opportunity to work with big shots in the IT industry
Leading IT MNC
Roles & Responsibilities:
- Design, implement and maintain all AWS infrastructure and services within a managed service environment
- Should be able to work 24x7 shifts to support the infrastructure.
- Design, Deploy and maintain enterprise class security, network and systems management applications within an AWS environment
- Design and implement availability, scalability, and performance plans for the AWS managed service environment
- Continual re-evaluation of existing stack and infrastructure to maintain optimal performance, availability and security
- Manage the production deployment and deployment automation
- Implement process and quality improvements through task automation
- Institute infrastructure as code, security automation, and automation of routine maintenance tasks
- Experience with containerization and orchestration tools like Docker and Kubernetes
- Build, deploy, and manage Kubernetes clusters through automation
- Create and deliver knowledge sharing presentations and documentation for support teams
- Learning on the job and exploring new technologies with little supervision
- Work effectively with onsite/offshore teams
Qualifications:
- Must have Bachelor's degree in Computer Science or related field and 4+ years of experience in IT
- Experience in designing, implementing, and maintaining all AWS infrastructure and services
- Design and implement availability, scalability, and performance plans for the AWS managed service environment
- Continual re-evaluation of existing stack and infrastructure to maintain optimal performance, availability, and security
- Hands-on technical expertise in Security Architecture, automation, integration, and deployment
- Familiarity with compliance & security standards across the enterprise IT landscape
- Extensive experience with Kubernetes and AWS (IAM, Route53, SSM, S3, EFS, EBS, ELB, Lambda, CloudWatch, CloudTrail, SQS, SNS, RDS, CloudFormation, DynamoDB)
- Solid understanding of AWS IAM roles and policies (see the sketch after this list)
- Solid Linux experience with a focus on web (Apache Tomcat/Nginx)
- Experience with automation/configuration management using Terraform/Chef/Ansible or similar.
- Understanding of protocols/technologies like Microservices, HTTP/HTTPS, SSL/TLS, LDAP, JDBC, SQL, HTML
- Experience in managing and working with the offshore teams
- Familiarity with CI/CD systems such as Jenkins, GitLab CI
- Scripting experience (Python, Bash, etc.)
- AWS, Kubernetes Certification is preferred
- Ability to work with and influence Engineering teams
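As a concrete illustration of the IAM point above, this is a hedged boto3 sketch that lists IAM roles along with their attached managed policies; it assumes AWS credentials are already configured in the environment and is a starting point rather than a full audit.

    import boto3

    def audit_roles() -> None:
        """Print each IAM role with its attached managed policies."""
        iam = boto3.client("iam")
        paginator = iam.get_paginator("list_roles")
        for page in paginator.paginate():
            for role in page["Roles"]:
                attached = iam.list_attached_role_policies(RoleName=role["RoleName"])
                policies = [p["PolicyName"] for p in attached["AttachedPolicies"]]
                print(f"{role['RoleName']}: {', '.join(policies) or 'no managed policies'}")

    if __name__ == "__main__":
        audit_roles()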
MTX Group Inc. is seeking a motivated DevOps Engineer to join our team. MTX Group Inc is a global cloud implementation partner that enables organizations to become a fit enterprise through digital transformation and strategy. MTX is powered by the Maverick.io Artificial Intelligence platform and has a strong presence in the Public Sector providing proprietary designs and innovative concept accelerators around licensing and permitting, inspections, grants management, case management, and program management. MTX is a strategic partner with Salesforce with specialty expertise in Einstein Analytics, Mulesoft, Customer Community, Commerce Cloud, and Marketing Cloud. MTX is a Google Cloud partner helping accelerate digital transformation programs across federal, state, and local government agencies.
The DevOps role is responsible for maintaining infrastructure and both development and operational deployments in multiple cloud environments for MTX Group, Inc. and their clients. This role adheres to and promotes MTX Group, Inc.'s values by performing the respective duties in a manner that supports and contributes to the achievement of MTX Group, Inc.'s goals.
Responsibilities:
- Develop and manage tools and services to be used by the organization and by external users of the platform
- Automate all operational and repetitive tasks to improve efficiency and productivity of all development teams
- Research and propose new solutions to improve the mavQ platform in terms of speed, scalability, and security
- Automate and manage the cloud infrastructure of the organization, distributed across the globe and across multiple cloud providers such as Google Cloud and AWS
- Ensure thorough logging, monitoring, and alerting for all services and code running in the organization (see the sketch after this list)
- Work with development teams to define communications and protocols for distributed microservices
- Help development teams debug DevOps-related issues
- Manage CI/CD, Source Control and IAM for the organization
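For the logging and monitoring bullet above, here is a minimal, stdlib-only Python sketch that emits JSON-structured logs a centralised collector could ingest; the service name and log fields are placeholders chosen for illustration.

    import json
    import logging
    import time

    class JsonFormatter(logging.Formatter):
        """Render log records as single-line JSON for log aggregation."""
        def format(self, record: logging.LogRecord) -> str:
            return json.dumps({
                "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(record.created)),
                "level": record.levelname,
                "service": "example-service",  # placeholder service name
                "message": record.getMessage(),
            })

    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger("example-service")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    logger.info("request handled in %d ms", 42)  # emits one JSON log line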
What you will bring:
- Bachelor’s Degree or equivalent
- 4+ years of experience as a DevOps Engineer OR
- 2+ years of experience as a backend developer and 2+ years of experience as a DevOps or Systems Engineer
- Hands on experience with Docker and Kubernetes
- Thorough understanding of operating systems and networking
- Theoretical and practical understanding of Infrastructure-as-code and Platform-as-a-service concepts
- Ability to understand and work with any service, tool or API as needed
- Ability to understand implementation of open source products and modify them if necessary
- Ability to visualize large scale distributed systems and debug issues or make changes to said systems
- Understanding and practical experience in managing CI/CD
What we offer:
- A competitive salary on par with top market standards
- Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
- Sum Insured: INR 5,00,000/-
- Maternity cover up to two children
- Inclusive of COVID-19 Coverage
- Cashless & Reimbursement facility
- Access to free online doctor consultation
- Personal Accident Policy (Disability Insurance) -
- Sum Insured: INR. 25,00,000/- Per Employee
- Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
- Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
- Temporary Total Disability is covered
- An option of Paytm Food Wallet (up to Rs. 2500) as a tax saver benefit
- Monthly Internet Reimbursement of up to Rs. 1,000
- Opportunity to pursue Executive Programs/ courses at top universities globally
- Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others
***********************
Purpose of the Role
To design and implement the Peak AI System - a new system of intelligence that allows companies to quickly harness the power of AI.
The Opportunity
Peak is a Decision Intelligence company - we are on a mission to help organisations use AI to make great commercial decisions, all the time. Just as importantly, we are also focused on building an amazing company - one where we truly value our people & culture, and strive to make it an amazing and diverse place to work. Our recent Best Companies award & 3-star accreditation for being one of the top companies to work for is a testament to this.
We have ambitious plans over the coming years; to launch and lead a new category of technology (Decision Intelligence), expand our operations and create the best working culture possible. This is a great time to join Peak and the Engineering team, as we start the next stage of our global growth.
The Role
Based in Jaipur or Pune, you will be working in a collaborative team on cutting edge technologies in a supportive and dynamic environment. Ultimately you are responsible for building the CODI and on-boarding new clients - this involves:
- Developing a good understanding of the solutions which Peak delivers, and how these link to Peak’s overall strategy.
- Making suggestions towards shaping the strategy for a feature and engineering design.
- Managing own workload and usually delivering unsupervised. Accountable for their own workstream or the work of a small team.
- Understanding Engineering priorities and being able to focus on these, helping others to remain focused too
- Acting as the Lead Engineer on a project, helping ensure others follow Peak processes, such as release and version control.
- An active member of the team, through useful contributions to projects and in team meetings.
- Supervising others. Deputising for a Lead and/or supporting them with tasks. Mentoring new joiners/interns and Masters students. Sharing knowledge and learnings with the team.
Required Skills and Experience
We are building a team of world class engineers in Jaipur / Pune, essentially we are looking for bright, talented engineers looking to work at the cutting edge of practical AI.
- Strong, proven professional programming experience.
- Strong command of Algorithms, Data structures, Design patterns, and Product Architectural Design.
- Good understanding of DevOps, cloud technologies, CI/CD, serverless, and Docker, preferably on AWS
- Proven track record and expertise in one of the fields - DevOps/Frontend/Backend
- Excellent coding and debugging skills in any language, with command of at least one programming paradigm; JavaScript/Python/Go preferred
- Experience with at least one of the Database systems - RDBMS and NoSQL
- Ability to document requirements and specifications.
- A naturally inquisitive and problem-solving mindset.
- Strong experience in using AGILE or SCRUM techniques to build quality software.
- Advantage: experience with React.js, AWS, Node.js, Golang, Apache Spark, ETL tools, and data integration systems; AWS certification; experience at a product company building it from scratch; good communication skills; open-source contributions; and proven competitive coding
As well as doing great work we have created an award-winning, fun and exciting workplace that people love to be, we are looking for people to join us who share our values and are:
- Open - Always up for new ideas and able to take and give feedback in a positive way.
- Driven - sets high goals, doesn’t give up, and makes sacrifices to ensure that their job gets done on time and meets/exceeds expectations.
- Curious - Aware of new technologies and uses them to make new improvements in the Engineering ecosystem.
- Smart - Innovative and thinks outside the box; in difficult situations, finds a way to succeed no matter what
- Responsible - takes ownership of tasks given and has a strong work ethic.
About Peak
In an age when becoming AI and data-driven is one of the most important things businesses must do, it can also be one of the most challenging. That’s where Peak comes in; our CODI system sits at the heart of our client’s businesses, enabling the rapid unification, modelling and - most importantly - use of data - helping decision makers make great commercial decisions, powered by AI. All supported by our world-class data science team.
Founded in 2014, Peak has grown rapidly, in line with the world’s fastest-growing SaaS companies, winning numerous awards and attracting significant funding to support the company’s ongoing investment in machine learning and AI technologies. All to further our mission to become the world’s leading AI System business.
Headquartered in Manchester, Peak also has offices in London, Edinburgh, Jaipur and Brisbane. Our clients include some of the world's leading retailers, manufacturers and well-known brands alongside highly innovative and tech-savvy businesses. Peak is an Amazon Web Services (AWS) Partner, and holds Machine Learning Competency and Retail Competency status.
Job Location: Jaipur
Experience Required: Minimum 3 years
About the role:
As a DevOps Engineer for Punchh, you will be working with our developers, SRE, and DevOps teams implementing our next-generation infrastructure. We are looking for a self-motivated, responsible team player who loves designing systems that scale. Punchh provides a rich engineering environment where you can be creative, learn new technologies, and solve engineering problems, all while delivering business objectives. The DevOps culture here is one with immense trust and responsibility. You will be given the opportunity to make an impact, as there are no silos here.
Responsibilities:
- Deliver SLA and business objectives through whole-lifecycle design of services, from inception to implementation.
- Ensuring availability, performance, security, and scalability of AWS production systems
- Scale our systems and services through continuous integration, infrastructure as code, and gradual refactoring in an agile environment.
- Maintain services once a project is live by monitoring and measuring availability, latency, and overall system and application health (see the sketch after this list).
- Write and maintain software that runs the infrastructure that powers the Loyalty and Data platform for some of the world’s largest brands.
- Participate in 24x7 on-call shifts for Level 2 and higher escalations
- Respond to incidents and write blameless RCA’s/postmortems
- Implement and practice proper security controls and processes
- Providing recommendations for architecture and process improvements.
- Definition and deployment of systems for metrics, logging, and monitoring on the platform.
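As a sketch of the availability and latency monitoring described above, the following hedged example probes an HTTP endpoint and publishes the observed latency as a custom CloudWatch metric; the URL and metric namespace are placeholders, and AWS credentials are assumed to be configured.

    import time

    import boto3
    import requests

    def probe_and_publish(url: str = "https://api.example.com/health") -> float:
        """Measure request latency in milliseconds and push it to CloudWatch."""
        start = time.monotonic()
        response = requests.get(url, timeout=5)
        latency_ms = (time.monotonic() - start) * 1000

        boto3.client("cloudwatch").put_metric_data(
            Namespace="Example/HealthChecks",  # placeholder namespace
            MetricData=[{
                "MetricName": "EndpointLatency",
                "Value": latency_ms,
                "Unit": "Milliseconds",
                "Dimensions": [{"Name": "StatusCode", "Value": str(response.status_code)}],
            }],
        )
        return latency_ms

    if __name__ == "__main__":
        print(f"latency: {probe_and_publish():.1f} ms")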
Must have:
- Minimum 3 Years of Experience in DevOps.
- BS degree in Computer Science, Mathematics, Engineering, or equivalent practical experience.
- Strong inter-personal skills.
- Must have experience in CI/CD tooling such as Jenkins, CircleCI, TravisCI
- Must have experience in Docker, Kubernetes, Amazon ECS or Mesos
- Experience in code development in at least one high-level programming language from this list: Python, Ruby, Golang, Groovy
- Proficient in shell scripting, and most importantly, know when to stop scripting and start developing.
- Experience in the creation of highly automated infrastructures with Configuration Management tools like Terraform, CloudFormation, or Ansible.
- In-depth knowledge of the Linux operating system and administration.
- Production experience with a major cloud provider such as Amazon AWS.
- Knowledge of web server technologies such as Nginx or Apache.
- Knowledge of Redis, Memcache, or one of the many in-memory data stores.
- Experience with various load balancing technologies such as Amazon ALB/ELB, HA Proxy, F5.
- Comfortable with large-scale, highly-available distributed systems.
Good to have:
- Understanding of Web Standards (REST, SOAP APIs, OWASP, HTTP, TLS)
- Production experience with Hashicorp products such as Vault or Consul
- Expertise in designing, analyzing, and troubleshooting large-scale distributed systems.
- Experience in a PCI environment
- Experience with Big Data distributions from Cloudera, MapR, or Hortonworks
- Experience maintaining and scaling database applications
- Knowledge of fundamental systems engineering principles such as CAP Theorem, Concurrency Control, etc.
- Understanding of the network fundamentals: OSI, TCP/IP, topologies, etc.
- Understanding of infrastructure auditing and helping the organization control infrastructure costs.
- Experience in Kafka, RabbitMQ or any messaging bus.
AnnexLogic System Pvt. Ltd. is a leading comprehensive business software product development services provider. We partner with both start-ups and established businesses globally in their idea-to-implementation journey. We are a team of professionals who believe in creating business value for our clients through in-depth research, analysis, formation and implementation. Our customers are spread across the globe including the US, Canada, Australia, UK, Europe and Asia-Pacific region.
We are always looking for opportunities to extend our team with new proficient and talented colleagues who love working in a start-up culture, and this time, we need a Senior Back-end Developer, with strong technical skills and relevant experience of at least 4 years.
You’re a motivated team player who takes initiative and pride in their work and contributes to an informal working environment. You will be confident in your ability to develop the product, taking into account the following:
● Excellent knowledge of NodeJS
● At least 3 years of working experience with backend technologies
● Knowledge of PHP is considered a plus
● Knowledge of Java or Kotlin is considered a plus
● You can learn quickly new technologies and adapt to new technical challenges
● Experience in working with a TDD approach
● You have a strong background in Linux-based development
● You have previous experience with container technologies such as Docker
● You have strong knowledge of microservice architecture
● You are passionate about writing clean, maintainable & testable code
● You have strong Agile/Scrum work ethic
● You are continuously researching tools, technologies, frameworks and practices.
● Projects with a high complexity interest you
● You are highly committed and responsible, independent and able to take an initiative
● You are always mindful of quality attributes like maintainability, performance, security, scalability, usability and testability.
● You carefully read the specifications and assignments and you like to take responsibility for your components from start to end
● You feel comfortable discussing and refining technical requirements with the PM
● You have multitasking capabilities, attention to detail and excellent analytical skills
● Excellent interpersonal and teamwork skills
Responsibilities:
● Work on multiple microservices - written in NodeJS (TS, JS), Spring (Kotlin), PHP (Symfony)
● Develop new microservices to accommodate the (many) new features and requirements of the project
● Refactor existing microservices or extract new ones to improve performance and scalability
● Come up with ideas to improve existing architecture - feedback and ideas are always welcome
● Monitor, trace and debug production microservices to ensure an application with 0 downtime
● Work to improve the CI/CD flow of the project
Things we can guarantee:
● Competitive salary, with many other extra benefits (complex medical package, extra free days, events, gifts and others)
● Opportunity to work on a challenging project, using the latest technologies
● The perfect framework for professional development
● A young, creative and friendly team
● A flexible work schedule which makes you more productive
● We support any opportunities that are useful for you, including training and events
- Produce detailed specifications
- Troubleshoot, test and maintain the core product software and databases to ensure strong optimization and functionality
- Contribute in all phases of the development lifecycle
- Follow industry best practices
- Develop and deploy new features to facilitate related procedures and tools if necessary
Requirements :
- Expert in Python with at least one Python web framework, such as Flask or Django (see the sketch after this list)
- Hands-on experience on building and leveraging third party Restful APIs
- Strong knowledge in algorithms and data structures
- Experience with at least one of the Python libraries such as Pandas, Scikit, NumPy, BeautifulSoup, or Scrapy
- Hands-on experience on the Linux platform is a must
- Understanding of design principles behind a scalable application
- Knowledge in relational and NoSQL databases
- Exposure to Big Data and Real-time analytics is a plus
- Knowledge of Machine learning or Natural language processing in Python is a plus
- Experience using a public cloud such as Amazon AWS
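To ground the web-framework requirement above, here is a minimal hedged Flask sketch exposing a single JSON endpoint; the route and payload are purely illustrative.

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/api/health")
    def health():
        """Simple liveness endpoint returning a JSON payload."""
        return jsonify(status="ok")

    if __name__ == "__main__":
        # Development server only; a WSGI server (e.g. gunicorn) would front this in production.
        app.run(host="0.0.0.0", port=5000)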