

About Profectus Analytics Pvt. Ltd.
Profectus Solutions is a retail pricing product company that provides business-focused, technology-enabled solutions to retailers, helping them solve challenging, high-value problems in the pricing domain, improve business results and exceed their financial goals.
Built for retailers by retail experts, our solutions are designed not only to understand and appreciate the business challenges and nuances of the retail world, but also to instill a collaborative, workflow-driven approach to pricing, enabled by cutting-edge techniques in big data optimization, machine learning and artificial intelligence.
Only apply via this link: https://loginext.hire.trakstar.com/jobs/fk025uh?source=
LogiNext is looking for a technically savvy and passionate Associate Vice President - Product Engineering - DevOps or Senior Database Administrator to lead the development and operations efforts for the product. You will choose and deploy tools and technologies to build and support a robust infrastructure.
You have hands-on experience in building secure, high-performing and scalable infrastructure. You have experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in dev, staging and production environments.
Responsibilities:
- Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
- Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Plan, implement and maintain robust backup and restoration policies ensuring low RTO and RPO
- Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud, Ali Cloud
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure
- Explore new tools to improve development operations to automate daily tasks
- Ensure High Availability and Auto-failover with minimum or no manual interventions
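To give a concrete flavour of the deployment and backup automation described in the responsibilities above, here is a minimal Python sketch (using boto3) that snapshots tagged EBS volumes and prunes snapshots past a retention window. The region, tag convention and retention period are illustrative assumptions, not LogiNext specifics.

# Minimal sketch: snapshot tagged EBS volumes and prune old snapshots.
# Assumes AWS credentials are configured; tag key/value, region and retention are illustrative.
import datetime
import boto3

REGION = "ap-south-1"                    # assumed region
TAG_KEY, TAG_VALUE = "Backup", "daily"   # assumed tagging convention
RETENTION_DAYS = 7                       # assumed retention window

ec2 = boto3.client("ec2", region_name=REGION)

def snapshot_tagged_volumes():
    """Create a snapshot for every volume carrying the backup tag."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": f"tag:{TAG_KEY}", "Values": [TAG_VALUE]}]
    )["Volumes"]
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"automated backup of {vol['VolumeId']}",
        )
        print("created", snap["SnapshotId"])

def prune_old_snapshots():
    """Delete snapshots owned by this account that are past the retention window."""
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=RETENTION_DAYS)
    for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
        if snap["StartTime"] < cutoff:
            ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
            print("deleted", snap["SnapshotId"])

if __name__ == "__main__":
    snapshot_tagged_volumes()
    prune_old_snapshots()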
Requirements:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- 11 to 14 years of experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Strong background in Linux/Unix Administration and Python/Shell Scripting
- Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
- Experience in query analysis, performance tuning and database redesign
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication, judgment and decision-making skills.
- Excellent leadership skills.

We are now seeking a talented and motivated individual to contribute to our product in the cloud data protection space. The ability to clearly comprehend customer needs in a cloud environment, excellent troubleshooting skills, and the ability to focus on problem resolution until completion are required.
Responsibilities Include:
Review proposed feature requirements
Create test plan and test cases
Analyze performance, diagnose and troubleshoot issues
Enter and track defects
Interact with customers, partners, and development teams
Research customer issues and product initiatives
Provide input for service documentation
Required Skills:
Bachelor's degree in Computer Science, Information Systems or related discipline
3+ years' experience, including Software as a Service and/or DevOps engineering
Experience with AWS services like VPC, EC2, RDS, SES, ECS, Lambda, S3, ELB
Experience with technologies such as REST, Angular, Messaging, Databases, etc.
Strong troubleshooting and issue isolation skills
Possess excellent communication skills (written and verbal English)
Must be able to work as an individual contributor within a team
Ability to think outside the box
Experience in configuring infrastructure
Knowledge of CI / CD
Desirable skills:
Programming skills in scripting languages (e.g., python, bash)
Knowledge of Linux administration
Knowledge of testing tools/frameworks: TestNG, Selenium, etc
Knowledge of Identity and Security
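As a small illustration of the test-case and troubleshooting work this role involves, here is a minimal Python/pytest sketch that checks a hypothetical REST health endpoint. The URL and response shape are assumptions for illustration, not the actual product API.

# Minimal pytest sketch: verify a (hypothetical) service health endpoint.
import requests

BASE_URL = "https://service.example.com"   # assumed endpoint, not the real product

def test_health_endpoint_returns_ok():
    resp = requests.get(f"{BASE_URL}/api/v1/health", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Expect the service to report itself healthy (assumed response shape).
    assert body.get("status") == "ok"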
Q2 is seeking a team-focused Lead Release Engineer with a passion for managing releases to ensure we release quality software developed using Agile Scrum methodology. Working within the Development team, the Release Manager will work in a fast-paced environment with Development, Test Engineering, IT, Product Management, Design, Implementations, Support and other internal teams to drive efficiencies, transparency, quality and predictability in our software delivery pipeline.
RESPONSIBILITIES:
- Provide leadership on the cross-functional, development-focused software release process.
- Manage the product release cycle for new and existing clients, including the build release process and any hotfix releases
- Support end-to-end process for production issue resolution including impact analysis of the issue, identifying the client impacts, tracking the fix through dev/testing and deploying the fix in various production branches.
- Work with engineering team to understand impacts of branches and code merges.
- Identify, communicate, and mitigate release delivery risks.
- Measure and monitor progress to ensure product features are delivered on time.
- Lead recurring release reporting/status meetings to include discussion around release scope, risks and challenges.
- Responsible for planning, monitoring, executing, and implementing the software release strategy.
- Establish completeness criteria for the release of successfully tested software components and their dependencies to gate the delivery of releases to Implementation groups
- Serve as a liaison between business units to guarantee smooth and timely delivery of software packages to our Implementations and Support teams
- Create and analyze operational trends and data used for decision making, root cause analysis and performance measurement.
- Build partnerships, work collaboratively, and communicate effectively to achieve shared objectives.
- Improve processes to enhance the experience and delivery for internal and external customers.
- Responsible for ensuring that all security, availability, confidentiality and privacy policies and controls are adhered to.
EXPERIENCE AND KNOWLEDGE:
- Bachelor’s degree in Computer Science or a related field, or equivalent experience.
- Minimum 4 years of related experience in a product release management role.
- Excellent understanding of software delivery lifecycle.
- Technical Background with experience in common Scrum and Agile practices preferred.
- Deep knowledge of software development processes, CI/CD pipelines and Agile Methodology
- Experience with tools like Jenkins, Bitbucket, Jira and Confluence.
- Familiarity with enterprise software deployment architecture and methodologies.
- Proven ability in building effective partnerships with diverse groups in multiple locations/environments
- Ability to convey technical concepts to business-oriented teams.
- Capable of assessing and communicating risks and mitigations while managing ambiguity.
- Experience managing customer and internal expectations while understanding the organizational and customer impact.
- Strong organizational, process, leadership, and collaboration skills.
- Strong verbal, written, and interpersonal skills.
- Develop and Deploy Software:
- Architect and create an effective build and release process using industry best practices and tools
- Create and manage build scripts to deploy software in a multi-cloud environment
- Look for opportunities to automate as much of the deployment process as possible to provide for repeatability, auditability, scalability and build in process enforcement
- Manage Release Schedule:
- Act as a “gate keeper” for all releases into production
- Work closely with business stakeholders, development managers and developers to prepare a release schedule
- Help prioritize deployment requests for version upgrades, patches and hot-fixes
- Continuous Delivery of Software:
- Implement Continuous Integration (CI) practices to drive development teams to implement smaller changes and commit code to the version control repo frequently
- Implement Continuous Deployment (CD) practices that automate deployment of the application to several environments – Dev, Test and Production
- Implement Continuous Testing (functional and non-functional) to execute tests in the CI/CD pipeline
- Manage Version Control:
- Define and implement branching policies to efficiently manage source-code
- Implement business rules as a part of source control standards
- Resolve Software Issues:
- Assist technical support and development teams to troubleshoot issues and identify areas that need improvement
- Address deployment related issues
- Maintain Release Documentation:
- Maintain release notes (features available in stable versions and known issues) and other documents for both internal and external end users
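As a rough sketch of how release-note maintenance can be partially automated, the snippet below collects commit subjects between two git tags into a draft notes file. The tag names and output path are hypothetical, and a real release process would add ticket references and known issues.

# Minimal sketch: collect commit subjects between two tags into draft release notes.
import subprocess

def draft_release_notes(prev_tag: str, new_tag: str, outfile: str = "RELEASE_NOTES.txt") -> None:
    # One "- subject" line per commit in the range prev_tag..new_tag.
    log = subprocess.run(
        ["git", "log", f"{prev_tag}..{new_tag}", "--pretty=format:- %s"],
        capture_output=True, text=True, check=True,
    ).stdout
    with open(outfile, "w") as fh:
        fh.write(f"Release {new_tag}\n\nChanges since {prev_tag}:\n{log}\n")

if __name__ == "__main__":
    draft_release_notes("v1.4.0", "v1.5.0")   # assumed tag names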
As an MLOps Engineer in QuantumBlack you will:
Develop and deploy technology that enables data scientists and data engineers to build, productionize and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
Choose and use the right cloud services, DevOps tooling and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.
Build modern, scalable, and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (Data pipelines).
Shape and support next generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use case development, including MLOps.
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React and TypeScript, amongst others, in our projects.
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform) and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or better preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
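The tech stack above includes MLflow, and the key skills mention experiment tracking. As a rough sketch of what setting up that kind of MLOps tooling can look like, the snippet below logs parameters and a metric for a toy run; the tracking URI, experiment name and values are illustrative assumptions.

# Minimal MLflow sketch: log params and a metric for a toy training run.
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")   # assumed tracking server
mlflow.set_experiment("demo-experiment")           # assumed experiment name

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 100)
    # In a real pipeline this value would come from model evaluation.
    mlflow.log_metric("validation_auc", 0.91)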
BlueOptima’s vision is to become the global reference for the optimisation of the performance of Software Engineers across all industries. We provide industry-leading objective metrics in software development. We enable large organisations to deliver better software, faster and at lower cost, with technology that pushes the limits of what has been done before.
We are a global company which has consistently doubled in headcount and revenue YoY, with no external investment. We are currently located in 4 countries: London (our HQ), Mexico, India and the US. We have 250+ employees (and growing every day) from 34 different nationalities, with over 25 languages spoken.
We promote an open-minded environment and encourage our employees to create their own success story in this high-performance environment.
Location: Bangalore
Department: DevOps
Job Summary:
We are looking for skilled and talented engineers to join our Platform team and directly contribute to Continuous Delivery, and improve the state of art in CI/CD and Observability within BlueOptima.
As a Senior DevOps Engineer, you will define and outline CI/CD related aspects and collaborate with application teams on imparting training and enforcing best practices to follow for CI/CD and also directly implement, maintain, and consult on the observability and monitoring framework that supports the needs of multiple internal stakeholders.
Your team: The Platform team in BlueOptima works across product lines and is responsible for providing a scalable technology platform which the Product teams use to build their applications, improve their performance, and improve the SDLC by strengthening the application delivery pipeline.
The Platform team is also responsible for driving technology adoption across the product development teams. The team works on components that are common across product lines, such as IAM (Identity & Access Management), Auto Scaling, APM (Application Performance Monitoring) and CI/CD.
Responsibilities and tasks:
- Define and outline CI/CD and related aspects
- Own & Improve state of build process to reduce manual intervention
- Own & Improve state of deployment to make it 100% automated
- Define guidelines and standards for the automated testing required for a good CI/CD pipeline and ensure alignment on an ongoing basis (including artifact generation, promotions, etc.)
- Automate deployment and rollback in the production environment
- Collaborate with engineering teams, application developers, management and infrastructure teams to assess near- and long-term monitoring needs and provide them with Tooling to improve observability of application in production.
- Keep an eye on the emerging observability tools, trends and methodologies, and continuously enhance our existing systems and processes.
- Choose the right set of tools for a given problem and apply them across all available applications
- Collaborate with the application teams on the following:
- Define and enforce logging standards
- Define the metrics applications should track and help application teams visualise them on Grafana (or similar tools)
- Define alerts for application health monitoring in production
- Tooling like APM, E2E, etc.
- Continuously improve the state of the art on the above
- Assist in scheduling and hosting regular tool training sessions to better enable tool adoption and best practices, also making sure training materials are maintained.
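As a sketch of the "define metrics and visualise them on Grafana" responsibility above, the snippet below exposes a request counter and a latency histogram with the Python prometheus_client library, which Prometheus can scrape and Grafana can chart. The metric names, port and simulated workload are illustrative assumptions.

# Minimal sketch: expose application metrics on a /metrics endpoint for Prometheus.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request() -> None:
    """Stand-in for real request handling; records count and latency."""
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))   # simulated work
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)   # assumed port; serves /metrics for scraping
    while True:
        handle_request()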
Qualifications
What You Need to Succeed at BlueOptima:
- Minimum bachelor's degree in Computer Science or equivalent
- Demonstrable years of experience with implementation, operations, maintenance of IT systems and/or administration of software functions in multi-platform and multi-system environments.
- At least 1 year of experience leading or mentoring a small team.
- Demonstrable experience having developed containerized application components, using docker or similar solutions in previous roles
- Have extensive experience with metrics and logging libraries and aggregators, data analysis and visualization tools.
- Experience in defining, creating, and supporting monitoring dashboards
- 2+ years of experience with CI tools and building pipelines using Jenkins.
- 2+ years of experience with monitoring and observability tools and methodologies, such as Grafana, Prometheus, ElasticSearch, Splunk, AppDynamics, Dynatrace, Nagios, Graphite, Datadog, etc.
- Ability to write and read simple scripts using Python / Shell Scripts.
- Familiarity with configuration management tools such as Ansible.
- Ability to work autonomously with minimum supervision
- Demonstrated strong oral and written communication skills
Additional information
Why join our team?
Culture and Growth:
- Global team with a creative, innovative and welcoming mindset.
- Rapid career growth and opportunity to be an outstanding and visible contributor to the company's success.
- Freedom to create your own success story in a high-performance environment.
- Training programs and Personal Development Plans for each employee
Benefits:
- 32 days of holidays - this includes public and religious holidays
- Contributions to your Provident Fund which can be matched by the company above the statutory minimum as agreed
- Private Medical Insurance provided by the company
- Gratuity payments
- Claim Mobile/Internet expenses and Professional Development costs
- Leave Travel Allowance
- Flexible Work from Home policy - 2 days home p/w
- International travel opportunities
- Global annual meet up (most recent meetups have been held in Cancun, India and Thailand, Oct 2022)
- High quality equipment (ergonomic chairs and 32" screens)
- Pet friendly offices
- Creche Policy for working parents.
- Paternity and Maternity leave.
Stay connected with us on LinkedIn (https://www.linkedin.com/company/blueoptima) or keep an eye on our career page (https://www.blueoptima.com/careers) for future opportunities!
Hands on experience in:
- Deploying, managing, securing and patching enterprise applications at large scale in the cloud, preferably AWS.
- Experience leading End-to-end DevOps projects with modern tools encompassing both Applications and Infrastructure
- AWS CodeDeploy, CodeBuild, Jenkins, SonarQube.
- Incident management and root cause analysis.
- Strong understanding of immutable infrastructure and infrastructure as code (IaC) concepts. Participate in capacity planning and provisioning of new resources. Importing already-deployed infrastructure into IaC.
- Utilizing AWS cloud services such as EC2, S3, IAM, Route53, RDS, VPC, NAT/Internet Gateway, Lambda, Load Balancers, CloudWatch and API Gateway.
- AWS ECS managing multi cluster container environments (ECS with EC2 and Fargate with service discovery using Route53)
- Monitoring/analytics tools like Nagios/DataDog and logging tools like LogStash/SumoLogic
- Simple Notification Service (SNS)
- Version Control System: Git, Gitlab, Bitbucket
- Participate in Security Audit of Cloud Infrastructure.
- Exceptional documentation and communication skills.
- Ready to work in shifts
- Knowledge of Akamai is a plus.
- Microsoft Azure is a plus.
- Adobe AEM is a plus.
- AWS Certified DevOps Professional is a plus.
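As a small illustration of the capacity-planning and infrastructure-audit work listed above, the sketch below uses boto3 to inventory running EC2 instances by type, e.g. as input to capacity planning or to spot resources not yet imported into IaC. The region is an illustrative assumption and credentials are taken from the environment.

# Minimal sketch: count running EC2 instances by instance type.
from collections import Counter
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region

counts = Counter()
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            counts[instance["InstanceType"]] += 1

for instance_type, count in counts.most_common():
    print(f"{instance_type}: {count}")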
Designation - DevOps Engineer
Urgently required (notice period of maximum 15 days).
Location:- Mumbai
Experience:- 3-5 years.
Package Offered:- Rs.4,00,000/- to Rs.7,00,000/- pa.
DevOps Engineer Job Description:-
Responsibilities
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer & client experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
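As an example of the small automation this role calls for (root cause analysis of production errors, reducing recurring issues), the sketch below tallies ERROR lines in an application log so the most frequent failures surface first. The log path and format are assumptions for illustration.

# Minimal sketch: tally ERROR lines in an application log for root cause analysis.
import re
from collections import Counter

LOG_PATH = "/var/log/app/application.log"   # assumed log location
ERROR_RE = re.compile(r"ERROR\s+(.*)")       # assumed log format

counts = Counter()
with open(LOG_PATH) as fh:
    for line in fh:
        match = ERROR_RE.search(line)
        if match:
            counts[match.group(1).strip()] += 1

# Print the ten most frequent error messages with their counts.
for message, count in counts.most_common(10):
    print(f"{count:5d}  {message}")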
Requirements
- Work experience as a DevOps Engineer or similar software engineering role
- Work experience on AWS
- Good knowledge of Ruby or Python
- Working knowledge of databases and SQL
- Problem-solving attitude
- Team spirit
- BSc in Computer Science, Engineering or relevant field
- 7+ years of experience in System Administration, Networking, Automation, Monitoring
- Excellent problem solving, analytical skills and technical troubleshooting skills
- Experience managing systems deployed in public cloud platforms (Microsoft Azure, AWS or Google Cloud)
- Experience implementing and maintaining CI/CD pipelines (Jenkins, Concourse, etc.)
- Linux experience, flavours: Ubuntu, Redhat, CentOS (sysadmin, bash scripting)
- Experience setting up monitoring (Datadog, Splunk, etc.)
- Experience in Infrastructure Automation tools like Terraform
- Experience with a Kubernetes package manager like Helm charts
- Experience with databases and data storage (Oracle, MongoDB, PostgreSQL, ELK stack)
- Experience with Docker
- Experience with orchestration technologies (Kubernetes or DC/OS)
- Familiar with Agile Software Development
Must-Haves:
- Hands-on DevOps (Git, Ansible, Terraform, Jenkins, Python/Ruby)
Job Description:
- Knowledge of what a DevOps CI/CD pipeline is
- Understanding of version control systems like Git, including branching and merging strategies
- Knowledge of continuous delivery and integration tools like Jenkins, GitHub
- Knowledge of developing code using Ruby or Python, and Java or PHP
- Knowledge of writing Unix Shell (bash, ksh) scripts
- Knowledge of automation/configuration management using Ansible, Terraform, Chef or Puppet
- Experience and willingness to keep learning in a Linux environment
- Ability to provide after-hours support as needed for emergency or urgent situations
Nice to haves:
- Proficient with container-based products like Docker and Kubernetes
- Excellent communication skills (verbal and written)
- Able to work in a team and be a team player
- Knowledge of PHP, MySQL, Apache and other open source software
- BA/BS in computer science or similar









