Locus is a global decision-making platform for supply chains that uses deep learning and proprietary algorithms to provide route optimization, real-time tracking, insights and analytics, beat optimization, efficient warehouse management, vehicle allocation and utilization, intuitive 3D packing, and measurement of packages. Locus automates the human decisions required to transport a package or a person between any two points on earth, delivering gains in efficiency, consistency, and transparency of operations. Locus has processed a peak of 1 million orders in a day (200,000 orders an hour), is trained and tested on over 100 million order deliveries, and works in 75 cities across the globe. Locus works with several large-scale market leaders such as Urban Ladder, the Tata Group of Companies, Droplet, Licious, Rollick, Lenskart, and other global FMCG, pharma, e-commerce, 3PL and logistics conglomerates. Locus is backed by some of the biggest names in the market and recently raised $4 Mn in a pre-Series B round; earlier, in 2016, Locus raised $2.75 Mn (INR 18.3 Cr) in Series A funding. Locus was started by Nishith Rastogi and Geet Garg, two ex-Amazon engineers on a mission to democratize logistics intelligence for businesses across industries. Nishith was profiled in Forbes Asia's '30 Under 30' 2018 list. Geet holds a dual degree (BTech and MTech) in Computer Science and Engineering from the Indian Institute of Technology. Our team comprises engineers from the Indian Institute of Technology and the Birla Institute of Technology & Science, Pilani, and data scientists with PhDs from Carnegie Mellon University and the Tata Institute of Fundamental Research. Our multifaceted product and business team is led by senior members from Barclays, Google and Goldman Sachs with immense operational execution experience.
Office Location - http://where.locus.sh

Responsibilities:
- Own the design, implementation and maintenance of the infrastructure for the Locus platform, hosted across multiple providers such as AWS and Azure around the world
- Develop and maintain CI/CD processes for enterprise applications
- Automate any manual process across deployment, monitoring or audit-related functions
- Implement monitoring solutions, using tools like CloudWatch, New Relic, Nagios etc., to assess the health of the infrastructure
- Be responsible for all audit trails, along with relevant alarms to detect any anomalies
- Ensure we are releasing code safely, securely and frequently
- Work with developers to optimise applications for performance and scale
- Build and mentor the team at Locus. We currently have 1 DevOps engineer; we are looking at a team of 4 + 1 lead, and you will be the lead

Must Haves:
- 4+ years of experience as a DevOps engineer in a product company
- Bachelor's degree in Computer Science or equivalent
- Expertise in Linux system administration and Bash scripting
- Experience with configuration management tools such as Ansible, Chef, Fabric, Puppet or SaltStack
- Hands-on experience building and administering VMs and containers using tools such as Docker, Vagrant and Kubernetes
- Strong scripting skills
- Strong experience designing cloud infrastructure on AWS/Azure
- Experience with Infrastructure as Code services like CloudFormation, Terraform etc.
- Experience with monitoring tools like CloudWatch, New Relic, Nagios etc.

Perks:
- Healthy catered meals at the office
- You decide your own Work From Home (WFH) and Out Of Office (OOO)
- Pet-friendly - bring your pets to the office any day. The Locus family already has a Rottweiler and a Beagle
Locus is a state-of-the-art technology platform that automates and optimises your logistics operations to improve efficiency, consistency and transparency. Founded in 2015 by IIT and BITS alumni, Locus uses proprietary algorithms to automate the human decisions involved in transporting a package or person between two locations. Its sophisticated route optimization engine factors in real-world problems like incomplete addresses, unpredictable traffic, legal route restrictions, time windows and more, to reduce human dependency and business cost. Use cases exist across industries, varying from first and last mile to intra-city movement, as well as primary distribution across cities. Additionally, the simulation engine can optimize your network and plan efficient sales beats for your workforce, factoring in routing, skill level, past familiarity, fairness and much more. Several large-scale market leaders in India like BigBasket, Unilever, Marico, Urban Ladder, Gati, the Tata Group and others are production customers of Locus. Locus works on a unique pay-as-you-go, gain-sharing model where the customer pays only on usage and shares a portion of the savings accrued. Locus was started by two ex-Amazon engineers on a mission to democratize logistics intelligence for businesses across industries. Our team comprises engineers from the Indian Institute of Technology and the Birla Institute of Technology & Science, Pilani, and data scientists with PhDs from Carnegie Mellon University and the Tata Institute of Fundamental Research. Our multifaceted product and business team is led by senior members from Barclays, Google and Goldman Sachs with immense operational execution experience.

Office Location - http://where.locus.sh

Job Description:
- Design, implement and maintain the infrastructure for the Locus platform, hosted across multiple providers such as AWS and Azure around the world.
- Automate any manual process across deployment, monitoring or audit-related functions.
- Implement monitoring solutions to assess the health of the infrastructure.
- Ensure that audit trails are in place, along with relevant alarms to detect any anomalies.
- Ensure we are releasing code safely, securely and frequently.
- Work with developers to optimize applications for performance and scale.

Requirements:
- 1 or more years of experience as a DevOps Engineer.
- Strong scripting skills.
- Strong experience designing cloud infrastructure on AWS/Azure.
- Experience with Infrastructure as Code services like CloudFormation, Terraform etc.
- Experience with CI/CD systems.
- Experience with monitoring tools like CloudWatch, New Relic, Nagios etc.

Perks:
- 24 hours free food
- Unaccounted leaves - put OOO on the team calendar and take off whenever you want
- Pet-friendly - bring your pets to office - we have a Rottweiler (Onyx) and a Beagle (Bucky) in the office most days

Project Skynet: We are internally building an AI-based tool that helps find anomalies in production systems. This would help us develop smart alarms that go beyond the regular CloudWatch alarms. If this project succeeds, we believe companies can cut their DevOps workforce by half. Come be part of the journey if you like to work on interesting problems.
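The "smart alarm" idea behind a project like Skynet can be illustrated with a minimal rolling z-score detector: flag a metric sample that deviates too far from the recent history. This is a hypothetical sketch, not the actual implementation; the function name and the sample latency series are invented for illustration.

```python
from collections import deque
from statistics import mean, stdev

def zscore_anomalies(samples, window=10, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    history = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(x)
    return anomalies

# A steady latency series (ms) with one spike at index 12
latency = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100, 101, 100, 500, 100]
print(zscore_anomalies(latency))  # flags index 12
```

The `sigma > 0` guard skips perfectly flat history, where a z-score is undefined; a production alarm would also need to handle seasonality and slow drift, which a plain CloudWatch static threshold cannot.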
Role: We seek a top-performing technical leader with the passion, experience and gravitas to lead and contribute effectively to this critical technology function. The ideal candidate will be a high-energy, team-oriented, customer-driven problem solver with prior experience designing and implementing highly scalable platforms or applications for small businesses and enterprises.
- Strong technical leadership skills and a proven track record of working in high-performing teams.
- Solid understanding of cloud-based platforms and microservice architecture. Proven track record of architecting, building, deploying and testing large-scale distributed systems handling millions of requests per minute.
- Experience creating scalable systems and services, including a demonstrated understanding of best practices for API design and overall platform design considerations such as auth, messaging, logging, monitoring and testability.
- In-depth knowledge of Java web-based applications.
- Strong Linux or Unix systems experience.
- Experience troubleshooting large-scale cloud-based applications.
- Exceptionally strong written and verbal communication skills, as well as good interpersonal and organizational skills.
About Us

UpGrad is a company founded by IIT Delhi alumni and Ronnie Screwvala, focused on enabling universities to take their programs online. Given the team's background in the education and media sectors, we understand what it takes to offer quality online programs, and at UpGrad we invest alongside universities to build and deliver quality online programs (content, platform, technology, industry collaboration, delivery, and grading infrastructure). You can read about some of our press coverage:
• UpGrad was earlier selected as one of the top ten most innovative companies in India by FastCompany.
• We were also covered by the Financial Times along with other disruptors in Ed-Tech.
• UpGrad is the official education partner for the Government of India's Startup India program.
• We were also ranked as one of the top 25 startups in India in 2018.
• Our program with IIIT-B has been ranked the #1 program in the country in the domain of Artificial Intelligence and Machine Learning.

At UpGrad we have partnered with leading universities such as IIIT Bangalore, BITS Pilani, MICA Ahmedabad, IMT Ghaziabad and Cambridge University's Judge Business School to offer programs in the domains of Data, Technology and Management.

Role and Responsibilities
1. Administration of virtual learning labs: Handle the setup and administration of the virtual labs used by students enrolled in courses like Big Data and Data Analytics. The students use these labs for practice and to run their assignments.
2. Student experience (post-program launch): Assist students with their academic doubts related to the virtual labs and ensure students have a great learning experience on the UpGrad platform.
3. Academic quality assurance: Help create learning material with an in-house team of instructional designers and review its technical quality.

What we are looking for:
1. 3-4 years of project experience deploying cloud solutions (experience on Amazon Web Services (AWS) is mandatory).
2. Hands-on experience in setting up and day-to-day administration of Hadoop ecosystem tools (Hadoop, Spark, Storm, HBase), NoSQL, visualisations, etc.
3. Must be a problem solver with demonstrated experience in solving difficult technology challenges, with a can-do attitude.
4. Hands-on work with private or public cloud services in a highly available and scalable production environment.
5. Experience building tools and automation that eliminate repetitive tasks.
6. Hands-on experience with Service Cloud, including User Permissions, Roles, Objects, Validation Rules, Process Builder, Workflow Rules, Communities, Visual Workflow, Email to Case, and Case Management.
Cloud Architect

Should have 15+ years of overall IT industry experience, with 6+ years of architecture, design and implementation of highly distributed applications (i.e. having an architectural sense for ensuring availability, reliability, etc.). Deep understanding of cloud computing technologies, business drivers, emerging computing trends, and deployment. Experience with one or more NoSQL databases (e.g. MongoDB, Cassandra). Deep technical experience in one or more of the following areas: software design or development, cloud application design, mobility, PaaS, media services, CDN. Working knowledge of Agile development, Scrum and Application Lifecycle Management (ALM). Deep programming skills in one or more languages: C#/C++/Java/Python.

Good to have:
· Experience deploying applications on CaaS platforms such as Docker and Kubernetes is a huge plus.
· Good experience designing solutions using advanced patterns such as microservices / event-driven architectures.
· CI/CD delivery using code management, configuration management and automation tools such as Git, VSTS, Ansible, DSC, Puppet, Chef, Salt, Jenkins, Maven, etc.
Requirement:
• Hands-on, complete design/implementation of CI/CD processes with Git
• Hands-on, in-depth knowledge of GCP: VPNs, VPCs, storage, tagging, and monitoring and alerting of resources
• Experience developing access control lists
• Hands-on experience in containerization (Docker, Kubernetes)
• Experience with automation using any scripting language
• Hands-on experience configuring and managing data sources like MySQL, Mongo, Elasticsearch, Redis, Cassandra, Hadoop, etc.
• Experience monitoring services via tools such as Graylog, Syslog, New Relic, Prometheus, Nagios, etc.

Roles & Responsibilities:
• Architect, design, implement and support projects using cloud technologies
• Production support responsibilities for software and infrastructure fixes
• Day-to-day operational support of continuous integration, release and source control tooling
• Work with software, infrastructure and network engineers and DBAs to solve productivity challenges, drive efficiency, and automate and streamline environment builds
• Co-own monitoring and alert configuration to detect, triage and resolve issues quickly
• Build tools to enhance production triage and improve time to detect issues

You must be:
• A team player who likes to work hard and play harder, with excellent interpersonal, organizational and time-management skills
• Able to think strategically and analytically to complete assigned work effectively within given timelines
• Someone with excellent written and oral communication skills and keen attention to detail
• Able to multi-task across multiple projects and tasks at the same time
• Highly organized
• Positive and upbeat, with the ability to learn quickly
• Able to laugh - at others and, most importantly, at yourself. A sense of humor is needed

You can expect:
• A fast-paced, high-growth startup environment where you will gain a career and not just a job.
• The company to invest in your personal and professional development. We support your ongoing education and training by reimbursing you for relevant educational courses.
• An open office culture - no cabins or cubicles - and a place that is looking for your input to help us grow.
• The support of your teammates to always do better. Own it and win together!
• Exposure to the international retail market. Learn about a high-growth industry and build critical skill sets.
• An excellent employee referral program. Refer your friends, work with your friends and be rewarded for it.
• Work alongside a smart, creative and energetic team who truly believe in 'working hard and partying harder!'

Educational Requirement:
• UG - B.Tech/B.E. in Computer Science/IT or equivalent
• PG - M.Tech in Computer Science/IT, MCA (Computers) or equivalent
SENIOR C# DEVELOPER

At The NDL Group we support big brands, media owners and agencies, delivering global promotion and rewards programmes designed to wow their staff and customers. We are headquartered in London and have been in business for more than 20 years. Using our proprietary technology system, Promotigo™, we deliver increased efficiency across multiple territories through a single platform. Promotigo™ handles complex procedures such as high-volume code verification and global cashback payments, while providing real-time accountability and measurement. NDL has been behind some of Europe's biggest and best-known promotional campaigns. With clients as diverse as McDonald's, Universal, XBOX and Nestle, we have built a firm reputation for delivering successful promotional strategies, underpinned by reliable technology platforms, inspirational prize content and 5-star winner fulfillment.

AS A SENIOR DEVELOPER, YOU WILL BE RESPONSIBLE FOR...
● Developing our core web application using service-oriented architecture, exposing APIs for internal and external clients.
● Implementing architecture and design patterns to help ensure that systems scale.
● Performing unit and integration testing before launch.
● Establishing processes and best practices around development standards.
● Reviewing product requirements in order to give development estimates and product feedback.
● Applying technical expertise to challenging architecture and design problems.

SKILLS / COMPETENCIES:
● 6+ years' experience developing enterprise-grade web applications with C# / .NET and SQL Server.
● Experience with software design patterns like MVC, MVVM, etc.
● Knowledge of frontend development (HTML, CSS, JS, Angular) is an added advantage.
● Experience building applications using service-oriented architecture and APIs.
● Hands-on experience with Microsoft Azure is an added advantage.
● Strong English communication skills, both written and spoken.
● Ability to work and communicate clearly and efficiently with team members.

If you are a big fish that wants to swim with other big fish in a fast-growing company, joining The NDL Group might be your best next career move. Contact us now!
Job Description

SigTuple is seeking Systems or Backend Engineers to build a highly scalable platform for running deep-learning-based medical solutions. The engineering team at SigTuple is responsible for simplifying complex medical-analysis workflows into an elegant software design, making it scalable and distributing it across hundreds of machines. In this role, you'll use a combination of systems design experience, network knowledge, troubleshooting skills, and programming to ingest terabytes of data, analyse it using distributed computing, and deliver infrastructure and storage platform services. The ideal candidate is a technical generalist with skills ranging from production systems/network management to software development. Experience delivering on cloud-based platforms is a must.

Responsibilities
1. Design and create distributed software frameworks for processing data in a near-real-time manner.
2. Create next-generation software for data scientists to analyse and train their AI models on, which requires you to understand data science and AI.
3. Take complete ownership of infrastructure components and automation of operational activities.
4. Ensure the reliability of all systems built by the various development teams.

Requirements
1. BTech/MTech in any engineering discipline.
2. 3-6 years of experience in a Backend or Systems role.
3. Experience in management of cloud computing services; extensive knowledge of any one cloud platform (AWS, Azure, OpenStack, etc.).
4. Proficiency with OS and network fundamentals.
5. Experience working at scale is a must.
6. Hands-on experience with machine learning will be a plus.
Site Reliability Engineer

Job Description: You will administer the infrastructure of an indigenous, one-of-a-kind artificial intelligence cloud platform. You will work with the dev teams to deploy, monitor and scale the distributed platform to handle real-time AI analysis and loads and loads of visual data (images and videos in various formats). We're looking for people with extensive DevOps experience and strong systems programming skills.

Responsibilities:
1. Be responsible for the uptime and reliability of SigTuple's infrastructure, and help backend teams achieve it by writing reliable software and automation.
2. Work with other development teams to automate deployment of modules and manage the build and release pipeline.
3. Perform extensive process-level and node-level monitoring and auto-healing of the entire cluster.
4. Manage, provision and service cloud servers.
5. Contribute to back-end services and their infrastructure and system design.

Requirements:
1. BTech/MTech in any engineering discipline.
2. 3-6 years of experience in a DevOps/Software Engineering role.
3. Experience in management of cloud computing services; extensive knowledge of any one cloud or container platform (Kubernetes, AWS, GCP, Azure, etc.).
4. Proficiency with any major monitoring framework (Sensu, Nagios, etc.).
5. Comfortable with any one scripting language (Python, Perl) and a configuration management or orchestration tool (Ansible, Chef, etc.).
6. Proficiency with OS and network fundamentals, and strong Linux administration skills.
7. Experience with container tools (the Docker ecosystem) will be a plus.
8. Experience working with issues of scale in a system.
9. Experience working in a startup is a plus.
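Process-level monitoring with auto-healing, as described in the responsibilities above, can be sketched in miniature as a watchdog pass over health checks. This is an illustrative toy, not SigTuple's actual tooling; `heal`, `checks`, `restart` and the simulated cluster state are all hypothetical names.

```python
def heal(checks, restart, max_restarts=3):
    """One watchdog pass: for every service whose health check fails,
    invoke the restart action until it reports healthy again, up to
    `max_restarts` attempts. Returns the services that were restarted."""
    restarted = []
    for service, is_healthy in checks.items():
        attempts = 0
        while not is_healthy() and attempts < max_restarts:
            restart(service)
            attempts += 1
        if attempts:
            restarted.append(service)
    return restarted

# Simulated cluster: 'api' is down but recovers after one restart, 'db' is healthy.
state = {"api": False, "db": True}
checks = {name: (lambda n=name: state[n]) for name in state}

def restart(name):
    state[name] = True  # pretend the restart fixed the service

result = heal(checks, restart)
print(result)  # ['api']
```

In a real cluster the health check would probe a port or a liveness endpoint and the restart would go through the orchestrator (e.g. a supervisor or Kubernetes), with the `max_restarts` cap preventing endless restart loops for a genuinely broken service.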
Skills Required: cloud scaling and performance management, cloud deployment, infrastructure management for high-load scaling
Experience: 2-5 years
About Us: RecoSense is a data-science-led venture into recommendation, personalization and machine learning frameworks.
Email Id: email@example.com
Regards,
Swathi Udhyakumar
GE Digital is looking for game changers at a Sr. Staff Architect level to work on the Wall of Analytics (WOA) product. Reach out to me for more details.
Looking for a performance engineer for the Red Hat CloudForms product (hybrid cloud). Skills needed: Ruby, Rails, VMware, RHEV, cloud, virtualization, Python, Ansible.