Job brief:
- We are looking for a Natural Language Processing Engineer to help us improve our NLP products and create new NLP applications.
- This is a highly technical role; an excellent academic and/or industry background (with a specialization in NLP) and a Master's or Ph.D. is preferred.

Requirements:
- Strong academic and/or industrial exposure to NLP
- Experience applying NLP techniques to problems such as text classification, text summarization, question answering, information retrieval, knowledge extraction, and conversational bot design, potentially with both traditional and deep learning techniques
- NLP skills/tools: HMM, MEMM, P/LSA, CRF, LDA, semantic hashing, Word2Vec, Seq2Seq, spaCy, NLTK, Gensim, CoreNLP, NLU, NLG, etc.
- Ability to design and develop practical analytical approaches, keeping in mind data quality and availability, feasibility, scalability, and turnaround time
- Overall 4-8 years of professional NLP/chatbot development experience
- Proficiency in scraping data from external sources and designing architectures to store and organize the information for insight generation
- Experience analyzing large amounts of user-generated content and processing data in large-scale environments using cloud infrastructure is desirable
- Contributions to open-source software projects are an added advantage
- Analyze and model structured data using advanced statistical methods and implement algorithms

Skills & Qualification:
- Graduate/Post-Graduate/M.S. in Computer Science, Mathematics, Statistics, Machine Learning, NLP, or allied fields
- Strong ability to implement modules that are reusable, scalable, and performance-efficient
- Prior experience with a product development team and designing product features/solutions is a plus

Behavioral Traits:
- Ability to work in a demanding environment with limited guidance
- Ability to come up with working solutions for customers while innovating toward ideal solutions in a time-bound manner
- Think through the design and assess trade-offs related to quality, time to implement, scalability, and reusability of components developed
- Ability to explore new tools/technologies to come up with potential solutions
- Strong analytical and communication skills
- Managing cloud deployments and containers using Docker
- Managing Linux: upgrades, application deployments, backup/restores, system tuning and performance
- Managing and monitoring web servers, queuing systems, big-data stream processing, and databases like Cassandra for security and performance
- Unix shell scripting
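The backup/restore and shell-scripting duties above can be illustrated with a minimal sketch. The paths, archive naming, and retention policy here are invented for illustration, not taken from the posting:

```shell
#!/usr/bin/env bash
# Minimal backup-rotation sketch (hypothetical paths and retention policy).
set -euo pipefail

BACKUP_DIR="${BACKUP_DIR:-/var/backups/app}"
RETENTION_DAYS="${RETENTION_DAYS:-7}"

backup_name() {
  # Timestamped archive name, e.g. app-20240101T120000.tar.gz
  echo "app-$(date +%Y%m%dT%H%M%S).tar.gz"
}

prune_old_backups() {
  # Remove archives older than the retention window
  find "$BACKUP_DIR" -name 'app-*.tar.gz' -mtime +"$RETENTION_DAYS" -delete
}

backup_name
```

In practice a script like this would run from cron or a systemd timer, with the archive step (`tar` plus an off-host copy) between naming and pruning.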
Locus is a global decision-making platform for the supply chain that uses deep learning and proprietary algorithms to provide route optimization, real-time tracking, insights and analytics, beat optimization, efficient warehouse management, vehicle allocation and utilization, intuitive 3D packing, and measurement of packages. Locus automates the human decisions required to transport a package or a person between any two points on earth, delivering gains in efficiency, consistency, and transparency in operations. Locus, which has achieved a peak of 1 million orders processed in a day (200,000 orders an hour) and is trained and tested on over 100 million order deliveries, works in 75 cities across the globe. Locus works with several large-scale market leaders like Urban Ladder, the Tata Group of Companies, Droplet, Licious, Rollick, Lenskart, and other global FMCG, pharma, e-commerce, 3PL, and logistics conglomerates. Locus is backed by some of the biggest names in the market and recently raised $4 Mn in a pre-Series B round. Earlier, in 2016, Locus raised $2.75 Mn (INR 18.3 Cr) in a Series A funding round. Locus was started by Nishith Rastogi and Geet Garg, two ex-Amazon engineers on a mission to democratize logistics intelligence for businesses across industries. Nishith was profiled by Forbes Asia in their ’30 Under 30’ 2018 list. Geet holds a dual degree (BTech and MTech) in Computer Science and Engineering from the Indian Institute of Technology. Our team consists of engineers from the Indian Institute of Technology and Birla Institute of Technology & Science, Pilani, and data scientists with PhDs from Carnegie Mellon University and the Tata Institute of Fundamental Research. Our multifaceted product and business team is led by senior members from Barclays, Google, and Goldman Sachs with immense operational execution experience.
Office Location - http://where.locus.sh

Responsibilities:
- Own the design, implementation, and maintenance of the infrastructure for the Locus platform, hosted across multiple providers like AWS and Azure across the world
- Develop and maintain CI/CD processes for enterprise applications, using various tools like CloudWatch, New Relic, Nagios, etc.
- Automate any manual process across deployment, monitoring, or audit-related functions
- Implement monitoring solutions to assess the health of the infrastructure
- Be responsible for all audit trails, along with relevant alarms to detect any anomalies
- Ensure we are releasing code safely, securely, and frequently
- Work with developers to optimize applications for performance and scale
- Build and mentor the team at Locus. We currently have 1 DevOps engineer and are looking at a team of 4 plus 1 lead; you will be the lead

Must Haves:
- 4+ years of experience as a DevOps engineer in a product company
- Bachelor's degree in Computer Science or equivalent
- Expertise in Linux system administration and Bash scripting
- Experience with configuration management tools such as Ansible, Chef, Fabric, Puppet, or SaltStack
- Hands-on experience building and administering VMs and containers using tools such as Docker, Vagrant, and Kubernetes
- Strong scripting skills
- Strong experience in designing cloud infrastructure on AWS/Azure
- Experience with Infrastructure as Code services like CloudFormation, Terraform, etc. is a huge bonus
- Worked with monitoring tools like CloudWatch, New Relic, Nagios, etc.

Perks:
- Healthy catered meals at the office
- You decide your own Work From Home (WFH) and Out Of Office (OOO)
- Pet-friendly: bring your pets to the office any day. The Locus family already has a Rottweiler and a Beagle
Locus is a state-of-the-art technology platform that automates and optimizes your logistics operations to improve efficiency, consistency, and transparency. Founded in 2015 by IIT and BITS alumni, Locus uses proprietary algorithms to automate the human decisions involved in transporting a package or person between two locations. Its sophisticated route optimization engine factors in real-world problems like incomplete addresses, unpredictable traffic, legal route restrictions, time windows, and more to reduce human dependency and business cost. Use cases exist across industries, varying from first and last mile to intra-city movement, as well as primary distribution across cities. Additionally, the simulation engine can optimize your network and plan efficient sales beats for your workforce, factoring in routing, skill level, past familiarity, fairness, and much more. Several large-scale market leaders in India like BigBasket, Unilever, Marico, Urban Ladder, Gati, the Tata Group, and others are production customers of Locus. Locus works on a unique pay-as-you-go, gain-sharing model where the customer pays only on usage and shares a portion of the savings accrued. Locus was started by two ex-Amazon engineers on a mission to democratize logistics intelligence for businesses across industries. Our team consists of engineers from the Indian Institute of Technology and Birla Institute of Technology & Science, Pilani, and data scientists with PhDs from Carnegie Mellon University and the Tata Institute of Fundamental Research. Our multifaceted product and business team is led by senior members from Barclays, Google, and Goldman Sachs with immense operational execution experience.

Office Location - http://where.locus.sh

Job Description:
- Design, implement, and maintain the infrastructure for the Locus platform, hosted across multiple hosting providers like AWS and Azure across the world.
- Automate any manual process across deployment, monitoring, or audit-related functions.
- Implement monitoring solutions to assess the health of the infrastructure.
- Ensure that audit trails are in place, along with relevant alarms to detect any anomalies.
- Ensure we are releasing code safely, securely, and frequently.
- Work with developers to optimize applications for performance and scale.

Requirements:
- 1 or more years of experience as a DevOps Engineer
- Strong scripting skills
- Strong experience in designing cloud infrastructure on AWS/Azure
- Experience with Infrastructure as Code services like CloudFormation, Terraform, etc.
- Experience with CI/CD systems
- Experience with monitoring tools like CloudWatch, New Relic, Nagios, etc.

Perks:
- 24-hour free food
- Unaccounted leaves: put OOO on the team calendar and take off whenever you want
- Pet-friendly: bring your pets to the office. We have a Rottweiler (Onyx) and a Beagle (Bucky) in the office most days

Project Skynet: We are internally building an AI-based tool that helps find anomalies in production systems. This would help us develop smart alarms that go beyond regular CloudWatch alarms. If this project becomes a success, we believe companies could cut their DevOps workforce by half. Come be part of the journey if you like working on interesting problems.
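As a toy contrast to the "smart alarms" idea described above, an ordinary threshold alarm just maps a metric reading to a state. This sketch uses invented thresholds and a simplified state set loosely mirroring CloudWatch's OK/ALARM convention:

```shell
#!/usr/bin/env bash
# Toy threshold alarm: map a CPU-utilization reading (percent) to a state.
# Threshold values and the WARN state are invented for illustration.
set -euo pipefail

alarm_state() {
  local cpu="$1"
  if [ "$cpu" -ge 90 ]; then
    echo ALARM
  elif [ "$cpu" -ge 75 ]; then
    echo WARN
  else
    echo OK
  fi
}

alarm_state 95   # prints ALARM
```

An anomaly-detection alarm of the kind Project Skynet describes would instead learn a baseline from historical metrics and flag deviations, rather than using fixed thresholds like these.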
Bachelor’s degree in computer science, engineering, or a related field. 2+ years of experience developing web applications. 2+ years of experience in one or more technologies such as Angular and HTML5.

Nice To Have: Full-stack development experience; experience with one or more cloud platforms; knowledge of containerization technologies.

Responsibility: Develop cloud applications for Manufacturing Process Simulation. Independent, creative, and self-motivated team player with a passion for constant learning and change.

Key Skills - Primary Skills: Node.js, Cloud, Angular, HTML. Education: BE.
Cloud Architect
- 15+ years of overall IT industry experience.
- 6+ years of architecture, design, and implementation of highly distributed applications (i.e., having an architectural sense for ensuring availability, reliability, etc.).
- Deep understanding of cloud computing technologies, business drivers, emerging computing trends, and deployment.
- Experience with one or more NoSQL databases (e.g., MongoDB, Cassandra).
- Deep technical experience in one or more of the following areas: software design or development, cloud application design, mobility, PaaS, media services, CDN.
- Working knowledge of Agile development, Scrum, and Application Lifecycle Management (ALM).
- Deep programming skills in one or more languages: C#/C++/Java/Python.

Good to have:
- Experience deploying applications on CaaS platforms (Docker, Kubernetes) is a huge plus.
- Good experience designing solutions using advanced patterns such as microservices / event-driven architectures.
- CI/CD delivery using code management, configuration management, and automation tools such as Git, VSTS, Ansible, DSC, Puppet, Chef, Salt, Jenkins, Maven, etc.
Requirement:
• Hands-on, complete design/implementation of CI/CD processes with Git
• Hands-on, in-depth knowledge of GCP: VPNs, VPCs, Storage, tagging, and monitoring/alerting of resources
• Experience developing access control lists
• Hands-on experience in containerization (Docker, Kubernetes)
• Experience with automation using any scripting language
• Hands-on experience configuring and managing data sources like MySQL, Mongo, Elasticsearch, Redis, Cassandra, Hadoop, etc.
• Experience in monitoring services via tools such as Graylog, Syslog, New Relic, Prometheus, Nagios, etc.

Roles & Responsibilities:
• Architect, design, implement, and support projects using cloud technologies
• Production support responsibilities for software and infrastructure fixes
• Day-to-day operational support of continuous integration, release, and source control tooling
• Work with software, infrastructure, and network engineers and DBAs to solve productivity challenges, drive efficiency, and automate and streamline environment builds
• Co-own monitoring and alert configuration to detect, triage, and resolve issues quickly
• Build tools to enhance production triage and improve time to detect issues

You must be:
• A team player who likes to work hard and play harder, with excellent interpersonal, organizational, and time-management skills
• Able to think strategically and analytically to effectively complete assigned work within given timelines
• Someone with excellent written and oral communication skills, highly organized, with strong attention to detail
• Able to multi-task across multiple projects and tasks at the same time
• Positive and upbeat, with the ability to learn quickly
• Able to laugh at others and, most importantly, at yourself. A sense of humor is needed

You can expect:
• A fast-paced, high-growth startup environment where you will gain a career and not just a job.
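A Git-driven CI/CD process like the one called for above usually includes a simple deploy gate. This sketch uses an invented branch policy; in a real pipeline the inputs would come from `git rev-parse --abbrev-ref HEAD` and `git status --porcelain`:

```shell
#!/usr/bin/env bash
# Toy deploy gate for a Git-driven CI/CD flow (branch policy is invented).
set -euo pipefail

deploy_allowed() {
  local branch="$1" porcelain_status="$2"
  # Allow deploys only from main with no uncommitted changes
  [ "$branch" = "main" ] && [ -z "$porcelain_status" ]
}

if deploy_allowed "main" ""; then
  echo "deploying"
else
  echo "blocked"
fi
```

Keeping the policy in a small pure function like this makes the gate easy to test without a live repository, which is the kind of automation-friendly design the role asks for.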
• The company will invest in your personal and professional development. We support your ongoing education and training by reimbursing you for relevant educational courses.
• An open office culture, no cabins or cubicles, and a place that is looking for your input to help us grow
• The support of your teammates to always do better. Own it and win together!
• Exposure to the international retail market. Learn about a high-growth industry and build critical skill sets.
• Excellent employee referral program. Refer your friends, work with your friends, and be rewarded for it.
• Work alongside a smart, creative, and energetic team who truly believe in ‘working hard and partying harder!’

Educational Requirement:
• UG: B.Tech/B.E. in Computer Science/IT or equivalent
• PG: M.Tech in Computer Science/IT, MCA (Computers), or equivalent
Candidates will take online classes from their home or workplace.