An experienced and hands-on Technical Architect to lead our Video analytics & Surveillance product
• The ideal candidate will have worked on large-scale video platforms (YouTube, Netflix, Hotstar, etc.) or surveillance software
• As a Technical Architect, you will be hands-on and a top contributor to product development
• Lead teams on time-sensitive projects
• Expert-level Python programming skills are a MUST
• Hands-on experience with Deep Learning & Machine learning projects is a MUST
• Must have experience in the design and development of products
• Review code & mentor team in improving the quality and efficiency of the delivery
• Ability to troubleshoot and address complex technical problems.
• Must be a quick learner with the ability to adapt to increasing customer demands
• Hands-on experience designing and deploying large-scale Docker and Kubernetes environments
• Can lead a technically strong team in sharpening the product further
• Strong design capability with microservices-based architecture and awareness of its pitfalls
• Should have worked on large-scale data processing systems
• Good understanding of DevOps processes
• Familiar with Identity management, Authorization & Authentication frameworks
• Possesses very strong software design, enterprise networking, and advanced problem-solving skills
• Experience writing technical architecture documents
About Pivotchain Solution
MUST AWS Solution Architect Professional Certification
MUST 4+ years development experience in either Python, PHP, Go, or Ruby
Nice to have: Understanding of TDD (test-driven development)
MUST 4+ years experience with continuous integration tools (Buildkite, CircleCI, Jenkins, Bamboo)
MUST 4+ years experience with observability tools (one of NewRelic, DataDog, Grafana, CloudWatch, or similar)
MUST 4+ years experience with logging tools (one of SumoLogic, Scalyr, Splunk, Elasticsearch, ELK, or similar)
MUST 4+ years experience with configuration-as-code tools (one of Terraform, CloudFormation, Ansible, Puppet, Chef)
MUST 2+ years experience with Docker containers / 12-factor apps
MUST 1+ years experience with Kubernetes
Site Reliability Engineer (SRE)
Vonage Engineering Mission: Vonage is the emerging leader in the $100B+ cloud communications platform (CPaaS) market.
Customers like Airbnb, Viber, WhatsApp, Snapchat, and many others depend on our APIs and SDKs to connect with their customers all over the world. As businesses continue to shift to a real-time, customer-centric communications model, we are experiencing a time of impressive growth.
Why this role matters:
Vonage, a leader in cloud communications, is looking to build a new SRE team in Bangalore.
We believe that there shouldn’t be walls between operations and development and we have embraced the DevOps movement.
As a Site Reliability Engineer, you will work as part of the development team to build automation and tooling to deploy, monitor, and maintain the platform's health and its target SLOs and SLAs.
What you'll do
● Lead the effort in ensuring reliability of the platform.
● Create software and tooling that improve the performance, stability, and reliability of the platform.
● Work as part of a development team.
● Monitor Application Metrics to help with improving software performance.
● Build solutions that are highly resilient, scalable, and secure.
● Have a wide breadth of knowledge from software, infrastructure, and security.
● Adopt best practices and champion an engineering culture emphasizing Agile.
What's required for application
● Proven experience building, supporting, and architecting high-availability cloud infrastructure.
● Experience working on monitoring, logging, and alerting solutions and the tools used.
● Experience with tooling such as Terraform, Ansible, Docker, Kubernetes, and Chef.
● Fluent and comfortable working with Cloud Infrastructure.
● Ability to read, write, and troubleshoot software code.
● Good understanding of CI/CD tools.
● Champion of DevSecOps using tools such as HashiCorp Vault, KMS, and Secrets Manager.
● Experience with software development, algorithms, data structures, and systems design.
● Understand monitoring tools such as DataDog, ELK, and Grafana.
● Bachelor's degree (or higher) in Computer Science or a related field.
Nice to have, but not required
● Working knowledge of other AWS services like Glacier, Elastic Container Service (ECS), Elastic MapReduce (EMR), DynamoDB, etc.
● Automation and Orchestration tools such as Jenkins
● Ruby or Java development skills
● Data Pipeline knowledge, especially with tools like MapReduce, Kafka and ELK stack
Phenom People is the leader in Talent Experience Marketing (TXM for short). We’re an early-stage startup on a mission to fundamentally transform how companies acquire talent. As a category creator, our goals are two-fold: to educate talent acquisition and HR leaders on the benefits of TXM and to help solve their recruiting pain points.
- Design and implement a configuration management pipeline with Jenkins and the Chef CLI
- Design and implement CI/CD pipelines for the application build and deployment process from inception
- Design and manage automated community Docker build pipelines
- Automation of developer / DevOps flows to maximize developer productivity
- Report build and release status and document all policies and procedures
- Expert in Docker build and administration
- Expert in Chef
- Proficient in object-oriented programming
- Expert in Git management and build tools: Maven, SBT, and Ant (optional)
- Expert in scripting (Bash and Python)
- Expert in Linux operating systems
- Hands-on Experience in AWS
- Hands-on experience Architecting End to End Solutions
- Competitive salary for a startup
- Gain experience rapidly
- Work directly with the executive team
- Fast-paced work environment
The Incident Response (IR) Lead manages a team of experts with diverse skill sets, including Security Operations Center (SOC), Forensics, and technical Subject Matter Expert (SME) advisory. The IR Lead is specifically tasked with managing all aspects of an Incident Response engagement, including incident validation, monitoring, containment, log analysis, system forensic analysis, and reporting. The Incident Response Lead is also responsible for building the relationship with the client and the client’s counsel and for ensuring the engagement’s objectives and expectations are met and executed successfully as documented in the statement of work. You will leverage a solid foundation of technical expertise in Cybersecurity, Incident Response, and Digital Forensics to successfully execute your responsibilities.
ROLES AND RESPONSIBILITIES
· Accurately collects information from the client concerning the incident to include but not be limited to the client’s environment, size, technology, and security threats. In addition, the IR Lead is responsible for capturing all client’s expectations and objectives throughout the engagement to ensure successful delivery.
· Serves as the main point of contact; manages and participates in all communications with the client and the client’s counsel during the engagement. The IR Lead sets the cadence for communications.
· Manages and coordinates all technical efforts for the IR engagement to drive the process forward through tool deployment, ransomware decryption, restoration and recovery efforts, system rebuilds, and system, application, and network administration tasks.
· Coordinates with the Ransom Specialist when ransom negotiations are needed. Ensures updates regarding ransom status are delivered to the client and counsel in a timely fashion.
· Manages and coordinates onsite efforts with the Onsite Lead or team, ensuring they understand and can execute the objectives for the onsite work. Additional onsite responsibilities include ensuring communications are frequent, collecting the daily onsite update, and communicating it back to the IR Director and/or IR Ops Associate for their Tiger Team.
· Ensures the Forensic Lead is coordinating the collection of data necessary for the investigation.
· Ensures SentinelOne is deployed on time and adding value.
· Communicates with sales when appropriate for SentinelOne and provides the client contact.
· Communicates in tandem with the Forensic Lead pertinent findings to the client during the investigation.
· Follows up with the SOC Lead on SentinelOne alerts and encourages/coordinates client participation with the product.
· Accountable for final report review, ensuring the report is accurate, professional, and meets the objective of client counsel.
· Other duties as assigned.
DISCLAIMER: The above statements are intended to describe the general nature and level of work being performed. They are not intended to be an exhaustive list of all responsibilities, duties, and skills required of personnel so classified.
Role Description : Skills & Knowledge
1. Experience leading scoping calls
2. Strong background and practical hands-on experience with Windows or Linux System and Network Administration, Security DevOps, Incident Response and Digital Forensics, or Security Engineering
3. Practical experience in a functional role in one or more of the following disciplines, including but not limited to: computer forensics, incident response, data analytics, security operations and engineering, and digital investigations
4. Possesses strong verbal and written communication skills
· Bachelor's degree in Computer Science, Computer Engineering, Information Assurance, Forensic Sciences, or related technical field; Graduate degree preferred
· 10+ years of experience leading full-cycle incident response investigations and communicating with the client/counsel/carriers
· Must be eligible to work in the US without sponsorship
WORK ENVIRONMENT While performing the responsibilities of this position, the work environment characteristics listed below are representative of the environment the employee will encounter: Usual office working conditions. Reasonable accommodations may be made to enable people with disabilities to perform the essential functions of this job.
· No physical exertion is required.
· Travel within or outside of the state.
· Light work: Exerting up to 20 pounds of force occasionally, and/or up to 10 pounds of force as frequently as needed to move objects.
We are seeking a Security Program Manager to effectively drive Privacy & Security Programs in collaboration with cross functional teams. You will partner with engineering leadership, product management and development teams to deliver more secure products.
Roles & Responsibilities:
- Work with multiple stakeholders across departments such as IT, Engineering, Business, Legal, and Finance to implement controls defined in policies and processes.
- Manage projects with security and audit requirements with internal and external teams and serve as a liaison among all stakeholders.
- Manage penetration tests and security reviews for core applications and APIs.
- Identify, create and guide on privacy and security requirements considering applicable Data Protection Laws and implement them across software modules developed at Netmeds.
- Brainstorm with engineering teams to figure out how privacy and security controls can be applied to Netmeds tech stack.
- Coordinate with infra teams and dev teams on DB and application hardening and the standardization of server images / containerization.
- Assess vendors' security posture before onboarding them and after they qualify, review their security posture at a set frequency.
- Manage auditors and ensure compliance for ISO 27001 and other data privacy audits.
- Answer questions or resolve issues reported by the external security researchers & bug bounty hunters.
- Investigate privacy breaches.
- Educate employees on data privacy & security.
- Prioritize security requirements based on their severity of impact and product roadmap.
- Maintain a balance of security and business values across the organisation.
- Web Application Security, Mobile Application Security, Web Application Firewall, DAST, SAST, Cloud Security (AWS), Docker Security, Manual Penetration Testing.
- Good hands-on experience with tools such as vulnerability scanners, Burp Suite, patch management, web filtering, and WAF.
- Familiar with cloud hosting technologies (ex. AWS, Azure). Understanding of IAM, RBAC, NACLs, and KMS.
- Experience in Log Management, Security Event Correlation, SIEM.
- Must have strong interpersonal skills and should be able to communicate complex ideas seamlessly in written and verbal communication.
Good to Have Skills:
- Online Fraud Prevention.
- Bug Bounty experience.
- Security Operations Center (SOC) management.
- Experience with Amazon AWS services (EC2, S3, VPC, RDS, CloudWatch).
- Experience with or knowledge of tools like Fortify and Nessus.
- Experience in handling logging tools on docker container images (ex. Fluentd).
● Research, propose, and evaluate, with a 5-year vision, the architecture, design, technologies, processes, and profiles related to Telco Cloud.
● Participate in the creation of a realistic technical-strategic roadmap to transform the network to Telco Cloud and prepare it for 5G.
● Using your deep technical expertise, provide detailed feedback to Product Management and Engineering, and contribute directly to the platform code base to enhance both the customer experience of the service and the SRE quality of life.
● Stay aware of trends in network infrastructure as well as within the network engineering and OSS community: what technologies are being developed or launched?
● Stay current with infrastructure trends in the telco network cloud domain.
● Be responsible for the engineering of lab and production Telco Cloud environments, including patches, upgrades, and reliability and performance improvements.
Required Minimum Qualifications: (Education and Technical Skills/Knowledge)
● Software Engineering degree, MS in Computer Science or equivalent experience
● Years of experience in an SRE, DevOps, development, and/or support-related role
● 0-5 years of professional experience for a junior position
● At least 8 years of professional experience for a senior position
● Unix server administration and tuning: Linux / RedHat / CentOS / Ubuntu
● You have deep knowledge in Networking Layers 1-4
● Cloud / Virtualization (at least two): Helm, Docker, Kubernetes, AWS, Azure, Google Cloud, OpenStack, OpenShift, VMware vSphere / Tanzu
● You have in-depth knowledge of cloud storage solutions on top of AWS, GCP, Azure and/or on-prem private cloud, such as Ceph, CephFS, GlusterFS
● DevOps: Jenkins, Git, Azure DevOps, Ansible, Terraform
● Backend knowledge: Bash, Python, Go (knowledge of other scripting languages is a plus)
● PaaS-level solutions such as Keycloak for IAM, Prometheus, Grafana, ELK, and DBaaS (such as MySQL)
About the Organisation:
The team at Coredge.io is a combination of experienced and young professionals alike having
many years of experience in working with Edge computing, Telecom application development
and Kubernetes. The company has continuously collaborated with the open source community,
universities and major industry players in furthering its goal of providing the industry with an
indispensable tool to offer improved services to its customers. Coredge.io has a global market presence with offices in the US and New Delhi, India.
- Automate and maintain ML and Data pipelines at scale
- Collaborate with Data Scientists and Data Engineers on feature development teams to containerize and build out deployment pipelines for new modules
- Maintain and expand our on-prem deployments with spark clusters
- Design, build, and optimize application containerization and orchestration with Docker, Kubernetes, and AWS or Azure
- 5 years of IT experience in data-driven or AI technology products
- Understanding of ML Model Deployment and Lifecycle
- Extensive experience with Apache Airflow for MLOps workflow automation
- Experience in building and automating data pipelines
- Experience in working on Spark Cluster architecture
- Extensive experience with Unix/Linux environments
- Experience with standard concepts and technologies used in CI/CD build, deployment pipelines using Jenkins
- Strong experience in Python and PySpark and building required automation (using standard technologies such as Docker, Jenkins, and Ansible).
- Experience with Kubernetes or Docker Swarm
- Working technical knowledge of current systems software, protocols, and standards, including firewalls, Active Directory, etc.
- Basic knowledge of Multi-tier architectures: load balancers, caching, web servers, application servers, and databases.
- Experience with various virtualization technologies and multi-tenant, private and hybrid cloud environments.
- Hands-on software and hardware troubleshooting experience.
- Experience documenting and maintaining configuration and process information.
- Basic knowledge of machine learning frameworks: TensorFlow, Caffe/Caffe2, PyTorch
- Must have good exposure working with SOAR (Security Orchestration, Automation, and Response)
- Strong knowledge of endpoint security
- Good hands-on experience with cybersecurity tools such as SIEM, IAM, and PAM
- Sound knowledge of automated incident management using Demisto (or similar technology)
- Hands-on experience creating playbooks with Python scripting