An experienced and hands-on Technical Architect to lead our Video Analytics & Surveillance product
• The ideal candidate will have worked on large-scale video platforms (YouTube, Netflix, Hotstar, etc.) or surveillance software
• As a Technical Architect, you are hands-on and a top contributor to product development
• Experience leading teams on time-sensitive projects
• Expert-level Python programming skills are a MUST
• Hands-on experience with Deep Learning & Machine Learning projects is a MUST
• Experience in the design and development of products
• Reviews code and mentors the team to improve the quality and efficiency of delivery
• Ability to troubleshoot and address complex technical problems
• Quick learner with the ability to adapt to increasing customer demands
• Hands-on experience designing and deploying large-scale Docker and Kubernetes environments
• Can lead a technically strong team in sharpening the product further
• Strong design capability with microservices-based architecture and awareness of its pitfalls
• Should have worked on large-scale data processing systems
• Good understanding of DevOps processes
• Familiar with identity management, authorization & authentication frameworks
• Very strong software design, enterprise networking, and advanced problem-solving skills
• Experience writing technical architecture documents
About Pivotchain Solution
Cloud DevOps Architect
· Practices self-leadership and promotes learning in others by building relationships with cross-functional stakeholders; communicating information and providing advice to drive projects forward; adapting to competing demands and new responsibilities; providing feedback to others; mentoring junior team members; creating and executing plans to capitalize on strengths and improve opportunity areas; and adapting to and learning from change, difficulties, and feedback.
· Ensure appropriate translation of business requirements and functional specifications into physical program designs, code modules, stable application systems, and software solutions by partnering with Business Analysts and other team members to understand business needs and functional specifications.
· Build use cases/scenarios and reference architectures to enable rapid adoption of cloud services in the product’s cloud journey.
· Provide insight into recommendations for technical solutions that meet design and functional needs.
· Experience or familiarity with firewalls/NGFW deployed in a variety of form factors (Check Point, Imperva, Palo Alto, Azure Firewall).
· Establish credibility & build deep relationships with senior technical individuals to enable them to be cloud advocates.
· Participate in deep architectural discussions to build confidence and ensure engineering success when building new and migrating existing applications, software, and services to AWS and GCP.
· Conduct deep-dive hands-on education/training sessions to transfer knowledge to DevOps and engineering teams considering or already using public cloud services.
· Be a cloud (Amazon Web Services, Google Cloud Platform) and DevOps evangelist, advising stakeholders on cloud readiness, workload identification, migration, and identifying the right multi-cloud mix to effectively accomplish business objectives.
· Understand engineering requirements and architect scalable solutions, adopting DevOps and leveraging advanced technologies such as AWS CodePipeline, AWS CodeCommit, ECS containers, API Gateway, CloudFormation templates, AWS Kinesis, Splunk, Dome9, AWS SQS, AWS SNS, SonarQube, microservices, and Kubernetes, to realize stronger benefits and future-proof outcomes for customer-facing applications.
· Be an integral part of the technology and architecture community in the public cloud partners (AWS, GCP, Azure) and bring in new services launched by cloud providers into 8K Miles Product Platform scope.
· Capture and share best-practice knowledge amongst the DevOps and Cloud community.
· Act as a technical liaison between product management, service engineering, and support teams.
o Master’s Degree in Computer Science/Engineering with 12+ years’ experience in information technology (networking, infrastructure, database).
o Strong and recent exposure to AWS/GCP/Azure cloud platforms and designing hybrid multi-cloud solutions. AWS Certified Solutions Architect – Professional (or similar) certification preferred.
· Working knowledge of UNIX shell scripting.
· Strong hands-on programming experience in Python
· Working knowledge of data visualization tools such as Tableau.
· Experience working in cloud environment — AWS.
· Experience working with modern tools in the Agile Software Development Life Cycle.
· Version Control Systems (Ex. Git, Github, Stash/Bitbucket), Knowledge Management (Ex. Confluence, Google Docs), Development Workflow (Ex. Jira), Continuous Integration (Ex. Bamboo), Real Time Collaboration (Ex. Hipchat, Slack).
Role: Platform and Infrastructure Engineer SDE3
Location: We are open to candidates working from anywhere in India/across the globe. We are fully remote.
Lummo (formerly Bukukas) is a SaaS startup seeking to empower entrepreneurs and brands in SEA to accelerate their growth and to serve their customers by giving them the best technology and partner solutions. Lummo offers localized solutions made for SEA, thereby shining the spotlight on entrepreneurs and brands, enabling them to discover all possibilities to grow their business. Lummo was founded as BukuKas in 2019 by serial entrepreneurs Krishnan Menon and Lorenzo Peracchione.
The journey started with BukuKas, an app to digitize physical record-keeping books by enabling micro and small enterprises to record their sales, expenses, and cash transactions with ease using their smartphones.
Lummo's flagship product, LummoSHOP (formerly Tokko), helps growth-oriented entrepreneurs and brands unlock their full potential by helping them build a strong relationship with their consumers by selling to them directly (D2C), maximize operational efficiency across multiple channels & build their own brand online.
Lummo is backed by top venture capital firms including Sequoia Capital, Tiger Global, CapitalG (Google’s venture fund), Credit Saison, Speedinvest, and other prominent investors and entrepreneurs like Gokul Rajaram (DoorDash), Taavet Hinrikus (Founder, TransferWise), Sandeep Tandon (FreeCharge), Santiago Sosa (Founder, Nuvemshop), Nipun Mehra (Ula, Sequoia), and Amrish Rao (Pinelabs, Citrus Pay).
Having raised more than $150 million in funding with the backing of marquee global investors, Lummo has built a world-class team with top talent from across the world and is well poised to become a legendary SaaS company that will last beyond our lifetimes.
We received our Series C funding in January 2022.
Requirements / Responsibilities
- You have 7–8 years of experience building high-performance, consumer-facing mobile applications at product companies of decent scale.
- You have experience developing products on Kubernetes and cloud providers like GCP and AWS.
- You know and have worked on service meshes like Istio, Linkerd.
- You can write code and have experience building platform-level components (e.g., Golang, Python).
- You have experience with debugging production issues and writing RCAs.
- You have demonstrable stories of being on-call and how outages have been handled.
- You understand change management in-depth and are opinionated on the steps to push the change to production.
- You have worked with Cloud Native (CNCF) technologies.
- You have worked on Distributed Systems.
- You are an excellent collaborator & communicator. You know that start-ups are a team sport. You listen to others, aren’t afraid to speak your mind and always try to ask the right questions.
- You are excited by the prospect of working in a distributed team and company.
What do we offer?
- The ability for you to make an impact and lay the foundation for upcoming fintech innovations
- A multicultural and diverse team of colleagues from all over the globe
- Mission-driven and fast-paced, entrepreneurial environment
- Competitive salary and flexible leave policy
- A collaborative and flat company culture
What’s in it for you?
Do you truly want to make a difference and revolutionize the lives of millions of business owners? Do you thrive in an environment where moving at light speed and embracing new challenges every day is essential? If yes, Lummo is the perfect place for you!
Position: Site Reliability Engineer
Location: Pune (currently WFH; post-pandemic relocation to Pune will be required)
About the Organization:
A funded product development company headquartered in Singapore, with offices in Australia, the United States, Germany, the United Kingdom, and India. You will gain work experience in a global environment.
We are looking for an experienced DevOps / Site Reliability engineer to join our team and be instrumental in taking our products to the next level.
In this role, you will be working on bleeding-edge hybrid cloud / on-premise infrastructure handling billions of events and terabytes of data a day.
You will be responsible for working closely with various engineering teams to design, build and maintain a globally distributed infrastructure footprint.
As part of this role, you will be responsible for researching new technologies, managing a large fleet of active services and their underlying servers, automating the deployment, monitoring, and scaling of components, and optimizing the infrastructure for cost and performance.
- Ensure the operational integrity of the global infrastructure
- Design repeatable continuous integration and delivery systems
- Test and measure new methods, applications and frameworks
- Analyze and leverage various AWS-native functionality
- Support and build out an on-premise data center footprint
- Provide support and diagnose issues to other teams related to our infrastructure
- Participate in 24/7 on-call rotation (If Required)
- Expert-level administrator of Linux-based systems
- Experience managing distributed data platforms (Kafka, Spark, Cassandra, etc.); Aerospike experience is a plus
- Experience with production deployments of Kubernetes clusters
- Experience in automating provisioning and managing Hybrid-Cloud infrastructure (AWS, GCP and On-Prem) at scale.
- Knowledge of monitoring platform (Prometheus, Grafana, Graphite).
- Experience in Distributed storage systems such as Ceph or GlusterFS.
- Experience in virtualisation with KVM, oVirt, and OpenStack
- Hands-on experience with infrastructure-as-code and configuration management tools such as Terraform and Ansible
- Bash and Python Scripting Expertise
- Network troubleshooting experience (TCP, DNS, IPv6 and tcpdump)
- Experience with continuous delivery systems (Jenkins, Gitlab, BitBucket, Docker)
- Experience managing hundreds to thousands of servers globally
- Enjoy automating tasks, rather than repeating them
- Capable of estimating costs of various approaches, and finding simple and inexpensive solutions to complex problems
- Strong verbal and written communication skills
- Ability to adapt to a rapidly changing environment
- Comfortable collaborating and supporting a diverse team of engineers
- Ability to troubleshoot problems in complex systems
- Flexible working hours and ability to participate in 24/7 on call support with other team members whenever required.
• Deliver and support the deployment of Red Hat Ansible Automation Platform automation for enterprises
• Design, create, and deliver content that enables supporting automation solutions at scale
• Working experience (minimum 6 months) with Ansible and RESTful APIs
• Experience implementing a continuous integration (CI) or continuous delivery (CD) pipeline
• Intermediate-level scripting skills (e.g., Python)
• Very good analytical/problem-solving skills
• Working experience with at least one virtualization platform (VMware/Red Hat/Microsoft)
• Infrastructure (server/storage/network) management experience (desirable)
• Relational database concepts (desirable)
• Understanding of cloud concepts
• 3+ years of hands-on Red Hat Ansible Automation Platform & DevOps experience
We are open to hiring for this role in either Hyderabad or Bengaluru.
Cambridge Technology is seeking a hands-on GCP professional who can visualize, design, architect, and deliver mission-critical applications on GCP.
You are required to be hands-on with various aspects of GCP architecture and to have successfully helped customers adopt the cloud, whether public, private, or hybrid. You will be accountable for designing and delivering reliable, secure, cost-optimized, performance-efficient, and operationally excellent solutions. You will work as a core member of CT’s global delivery team, spearheading most technical decisions.
No of Positions
- Minimum of 4+ years' experience, with 2+ years on GCP; GCP certification is highly desirable.
- Experience with architecting and delivering reliable, secure, cost-optimized, performance-efficient, and operationally excellent solutions on-premises and/or the cloud
Roles & Responsibilities
- Design, architect, deploy, and manage compute, storage, and network services in GCP.
- Deploy infrastructure on Google Cloud Platform using GCP services such as App Engine, Compute Engine, Cloud Storage, Cloud SQL, Cloud Load Balancing, and Stackdriver monitoring.
- Implement GCP infrastructure automation through Ansible Tower, Terraform, and gcloud for auto-provisioning, code deployments, software installation, and configuration updates.
- Analyze the prerequisites required for the creation of any GCP project foundation.
- Migrate workloads from other cloud environments to GCP.
- Design and build 3-tier web applications on GCP.
- Strong understanding of data and information architecture, including experience with Google Cloud Technologies, relational databases, real-time streaming, and batch data processing.
- Manage large scale infrastructure refreshes, consolidating and modernizing technology footprints, cloud integration and migration, and network traffic/capacity optimization
- Interact with internal technical stakeholders to translate complex customer requirements into recommendations for appropriate solution architectures and consulting services
- Highly skilled in delivering & managing large complex global infrastructure spanning across multiple data centers and Private & Public Cloud, from the stage of design, build to management & operation
We are seeking a Security Program Manager to effectively drive Privacy & Security Programs in collaboration with cross functional teams. You will partner with engineering leadership, product management and development teams to deliver more secure products.
Roles & Responsibilities:
- Work with multiple stakeholders across various departments such as IT, Engineering, Business, Legal, Finance etc to implement controls defined in policies and processes.
- Manage projects with security and audit requirements with internal and external teams and serve as a liaison among all stakeholders.
- Managing penetration tests and security reviews for core applications and APIs.
- Identify, create and guide on privacy and security requirements considering applicable Data Protection Laws and implement them across software modules developed at Netmeds.
- Brainstorm with engineering teams to figure out how privacy and security controls can be applied to Netmeds tech stack.
- Coordinate with infra and dev teams on DB and application hardening and on the standardization of server images/containerization.
- Assess vendors' security posture before onboarding them and, after they qualify, review their security posture at a set frequency.
- Manage auditors and ensure compliance for ISO 27001 and other data privacy audits.
- Answer questions or resolve issues reported by the external security researchers & bug bounty hunters.
- Investigate privacy breaches.
- Educate employees on data privacy & security.
- Prioritize security requirements based on their severity of impact and product roadmap.
- Maintain a balance of security and business values across the organisation.
Must Have Skills:
- Web Application Security, Mobile Application Security, Web Application Firewall, DAST, SAST, Cloud Security (AWS), Docker Security, Manual Penetration Testing.
- Good hands-on experience with tools such as vulnerability scanners, Burp Suite, patch management, web filtering & WAF.
- Familiar with cloud hosting technologies (ex. AWS, Azure). Understanding of IAM, RBAC, NACLs, and KMS.
- Experience in Log Management, Security Event Correlation, SIEM.
- Must have strong interpersonal skills and should be able to communicate complex ideas seamlessly in written and verbal communication.
Good to Have Skills:
- Online Fraud Prevention.
- Bug Bounty experience.
- Security Operations Center (SOC) management.
- Experience with Amazon AWS services (EC2, S3, VPC, RDS, CloudWatch).
- Experience with / knowledge of tools like Fortify and Nessus.
- Experience in handling logging tools on docker container images (ex. Fluentd).
- You have 2+ years of experience with production GCP/AWS; Experience with Kubernetes is a plus
- You have 3+ years debugging network and system security issues
- You have experience developing security training and guiding internal development teams
- You design and implement best practices concerning information security
- You can create programs to implement Identity and Access Management
- You will evolve the bug bounty program and provide support.
- You will develop automated security testing.
- You have worked on cloud-native technologies.
- You will triage security issues and provide recommended fixes.
- You are an excellent collaborator & communicator.
- You know that start-ups are a team sport.
- You listen to others, aren’t afraid to speak your mind and always try to ask the right questions.
- You are excited by the prospect of working in a distributed team and company.
Senior Cloud Infrastructure Engineer (Azure)
Department & Team
Enterprise Solutions Architect
The purpose of the role is to ensure high systems availability across a multi-cloud environment, enabling the business to continue meeting its objectives.
This role will be mostly Azure / Windows / Active Directory / Azure AD focused but will include a requirement to understand comparative solutions in AWS.
Desire to remain fully hands-on while taking on Team Lead responsibilities in the future
Client’s cloud strategy is based on a dual-vendor model utilising AWS and Azure services. This enables us to access more technologies and helps mitigate risks across our infrastructure.
The Infrastructure Services Team is responsible for the delivery and support of all infrastructure used by Client twenty-four hours a day, seven days a week. The team’s primary function is to install, maintain, and implement all infrastructure-based systems, both On Premise and Cloud Hosted. The Infrastructure Services group already consists of three teams:
1. Network Services Team – Responsible for IP Network and its associated components
2. Platform Services Team – Responsible for Server and Storage systems
3. Database Services Team – Responsible for all Databases
This role will report directly to the Enterprise Solutions Architect and will have responsibility for the day-to-day running of the Azure public cloud platform, as well as playing a key part in designing best-practice solutions. It will enable the Client business to achieve its stated objectives by playing a key role in the Infrastructure Services Team's effort to achieve world-class benchmarks of customer service and support.
· Deliver end to end technical and user support across all cloud platforms (Primarily Azure)
· Day to day, fully hands-on OS management responsibilities (Primarily Windows Server OS)
· Contribute to continuous improvement efforts around cost optimisation, security enhancement, performance optimisation, operational efficiency and innovation.
· Take an ownership role in delivering technical projects, ensuring best-practice methods are followed.
· Design and deliver solutions around the concept of “Planning for Failure”. Ensure all solutions are deployed to withstand system / AZ failure.
· Work closely with Cloud Architects / Infrastructure Services Manager to identify and eliminate “waste” across cloud platforms.
· Ensure robust tagging strategy followed in the organisation with accurate cost allocation of resources in Azure.
· Ensure all Client data in all forms are backed up in a cost-efficient way.
· Use the appropriate monitoring tools to ensure all cloud / on-premise services are continuously monitored.
· Drive utilisation of most efficient methods of resource deployment.
· Drive the adoption, across the business, of serverless / open source / cloud native technologies where applicable.
· Ensure system documentation remains up to date and is designed according to Azure best-practice templates.
· Participate in detailed architectural discussions, calling on internal/external subject matter experts as needed, to ensure solutions are designed for successful deployment.
· Take part in regular discussions with business executives to translate their needs into technical and operational plans.
· Engage with vendors regularly to verify solutions and troubleshoot issues.
· Design and deliver technology workshops for other departments in the business.
· Take the initiative to improve service delivery.
· Ensure that Client delivers a service that meets customers' expectations and sets Client apart from its competitors.
· Help design necessary infrastructure and processes to support the recovery of critical technology and systems in line with contingency plans for the business.
· Continually assess working practices and review these with a view to improving quality and reducing costs.
· Champion the case for new technology and ensure new technologies are investigated, with proposals put forward regarding suitability and benefit.
· Motivate and inspire the rest of the infrastructure team and undertake necessary steps to raise competence and capability as required.
· Help develop a culture of ownership and quality throughout the Infrastructure Services team.
Skills & Experience:
· Microsoft Azure Solutions Architect Expert (AZ-303) – REQUIRED
· Microsoft Certified Professional (MCP) – REQUIRED
· AWS Certified Cloud Practitioner – Preferred
· Must be able to demonstrate working knowledge of designing, implementing, and maintaining best-practice Azure solutions.
· Strong working knowledge of on-prem Active Directory, including GPO and Azure AD
· Proven examples of ownership of large Azure project implementations in Enterprise settings.
· Enterprise production experience of deploying infrastructure as code using Terraform
· Experience managing the monitoring of infrastructure / applications using tools including Cloud native, Solarwinds, New Relic, etc.
· Must have practical working knowledge of driving cost optimisation, security enhancement and performance optimisation.
· Solid understanding and experience of transitioning IaaS solutions to serverless technology
· Must be able to demonstrate security best practice when designing solutions in Azure.
· Working experience of ‘On Premise to Cloud’ migrations
· Good working knowledge around WAN connectivity and how this interacts with the various entry point options into Azure public cloud.
· Production knowledge of Windows file servers / DFS
· Strong experience in desktop virtualisation technologies in Azure
· Good appreciation of ISO 27001, ITIL, and project management
· Good understanding of new and emerging technologies
· Excellent presentation skills to both an internal and external audience
· The ability to share your specific expertise to the rest of the Technology group
· Professional appearance and manner
· High personal drive; results oriented; makes things happen; “can do attitude”
· Can work and adapt within a highly dynamic and growing environment
· Team Player; effective at building close working relationships with others
· Effectively manages diversity within the workplace
· Strong focus on service delivery and the needs and satisfaction of internal clients
· Able to see issues from a global, regional and corporate perspective
· Able to effectively plan and manage large projects
· Excellent communication skills and interpersonal skills at all levels
· Strong analytical, presentation and training skills
· Innovative and creative
· Demonstrates technical leadership
· Visionary and strategic view of technology enablers (creative and innovative)
· High verbal and written communication ability, able to influence effectively at all levels
· Possesses technical expertise and knowledge to lead by example and input into technical debates
· Depth and breadth of experience in infrastructure technologies
· Enterprise mentality and global mindset
· Sense of humour
Role Key Performance Indicators:
· Design and deliver repeatable, best in class, cloud solutions.
· Pro-actively monitor service quality and take action to scale operational services, in line with business growth.
· Generate operating efficiencies, to be agreed with Infrastructure Services Manager.
· Establish a “best in sector” level of operational service delivery and insight.
· Help create an effective team.
Certified Ethical Hacker Requirements:
- Bachelor’s degree in Information Technology or Computer Science.
- CEH Certification.
- 2–5 years of proven work experience as a Certified Ethical Hacker.
- Advanced knowledge of networking systems and security software.
- In-depth knowledge of parameter manipulation, session hijacking, and cross-site scripting.
- Technical knowledge of routers, firewalls, and server systems.
- Good written and verbal communication skills.
- Good troubleshooting skills.
- Ability to see big-picture system flaws.
- B.Tech/B.E. (IT/Computers), B.Sc (Computers), M.Sc (IT), BCA (Computers), or any equivalent graduate or postgraduate degree
- Must have good exposure working with SOAR (Security Orchestration, Automation, and Response)
- Strong knowledge of end-user/endpoint security
- Good hands-on experience with cybersecurity domains such as SIEM, IAM, and PAM
- Sound knowledge of automated incident management using Demisto (or similar technology)
- Hands-on experience creating playbooks with Python scripting