About Loonycorn: Founded by Google, Stanford and Columbia alumni, Loonycorn is a leading studio for e-learning content on machine learning, cloud computing, blockchain and other emerging technologies.

About the Role: We are looking for folks to make technical content similar to what you'd find at the links below:
https://www.pluralsight.com/search?q=janani+ravi
https://www.pluralsight.com/search?q=vitthal+srinivasan
https://www.udemy.com/u/janani-ravi-2/

This involves:
- learning a new technology from scratch
- building realistic, pretty complicated programs in that technology
- creating clear, creative slides or animations explaining the concepts behind that technology
- combining these into a polished video that you will voice-over

What is important to us:
- Grit: perseverance in working on hard problems. Technical video-making is difficult and detail-oriented (that's why it is a highly profitable business).
- Craftsmanship: our video-making is quite artisanal, with lots of hard work and small details. There are many excellent roles where smart 80-20 trade-offs are the way to succeed; this is not one of them.
- Clarity: talking and thinking in direct, clear ways is super-important in what we do. Folks who use a lot of jargon or cliches, or who over-complicate technical problems, will not enjoy the work.
- Creativity: analogies, technical metaphors and other artistic elements are an important part of what we do.

What is not all that important to us:
- Your school or labels: it's perfectly fine whatever college or company you are applying from.
- English vocabulary or pronunciation: you don't need to 'talk well' or be flashy to make good content.
Customer Engineer: Pre-Sales Consultant

Pluto7 is a services and solutions company focused on building tailored ML, AI, analytics, and IoT solutions to accelerate business transformation. We are a Premier Google Cloud Partner, serving the retail, manufacturing, healthcare, and hi-tech industries. If you like working with data, especially large amounts of it, enjoy data wrangling and developing data pipelines at scale with streaming data, and are equally at ease with structured and unstructured datasets, then this is the job for you.

Roles and Responsibilities:
- Work as part of a technical sales team: identify and qualify business opportunities, identify key customer technical objections, and develop a strategy to resolve technical roadblocks.
- Travel to customer sites, conferences and other related events as required.
- Manage the technical aspects of developing customer solutions: oversee product and solution briefings, develop system architectures, support bid responses and proof-of-concept work, and coordinate supporting technical resources.
- Work hands-on with Google Cloud products to demonstrate and prototype integrations in customer/partner environments.
- Prepare and deliver product messaging that highlights the Google Cloud value proposition via whiteboard and slide presentations, product demonstrations and white papers.
- Navigate complex customer organizations and build trusted relationships with key external and internal stakeholders.
- Deliver recommendations on the integration strategies, enterprise architectures, platforms and application infrastructure required to implement a complete solution, providing best-practice advice to help customers get the most out of Google Cloud Platform.
- Experience drafting SOWs/proposals.
- Handle end-to-end machine learning, AI and data analytics workshops.
- Guide customers through architecture decisions: cloud vs. on-premises.

Required Qualifications:
- Bachelor's degree in Computer Science with 5+ years of relevant experience.
- Experience as a technical (pre-)sales engineer at a cloud computing or B2B software company, in a customer-facing role.
- Proficiency with virtualization and networking technology.
- Experience with storage technology.

The work location is Bangalore.
Job Brief:
- We are looking for a Natural Language Processing Engineer to help us improve our NLP products and create new NLP applications.
- This is a highly technical role; an excellent academic and/or industry background with a specialization in NLP (Master's or Ph.D.) is preferred.

Requirements:
- Strong academic and/or industrial exposure to NLP.
- Experience applying different NLP techniques, both traditional and deep learning, to problems such as text classification, text summarization, question answering, information retrieval, knowledge extraction, and conversational bot design.
- NLP skills/tools: HMM, MEMM, P/LSA, CRF, LDA, semantic hashing, Word2Vec, Seq2Seq, spaCy, NLTK, Gensim, CoreNLP, NLU, NLG, etc.
- Ability to design and develop a practical analytical approach, keeping in mind data quality and availability, feasibility, scalability, and turnaround time.
- 4-8 years of professional NLP/chatbot development experience overall.
- Proficiency in scraping data from external sources and designing architectures to store and organize the information for insight generation.
- Experience analyzing large amounts of user-generated content and processing data in large-scale environments using cloud infrastructure is desirable.
- Contributions to open-source software projects are an added advantage.
- Analyze and model structured data using advanced statistical methods, and implement algorithms.

Skills & Qualifications:
- Graduate/Postgraduate/M.S. in Computer Science, Mathematics, Statistics, Machine Learning, NLP or allied fields.
- Strong ability to implement modules that are reusable, scalable and performance-efficient.
- Prior experience on a product development team, designing product features/solutions, is a plus.

Behavioral Traits:
- Ability to work in a demanding environment with limited guidance.
- Ability to come up with working solutions for customers while innovating toward ideal solutions in a time-bound manner.
- Think through the design and assess trade-offs related to quality, time to implement, scalability and reusability of the components developed.
- Ability to explore new tools/technologies to come up with potential solutions.
- Strong analytical and communication skills.
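To make the "traditional techniques" side of the role concrete, here is a minimal, self-contained sketch of text classification using a multinomial naive Bayes classifier over bag-of-words counts. This is only an illustration: the training documents and labels are made up, and real work would use the libraries the posting names (spaCy, NLTK, Gensim) rather than hand-rolled code.

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label) pairs. Returns a naive Bayes model."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> document count
    for text, label in docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, label_counts, vocab

def predict(model, text):
    """Pick the label maximizing log prior + log likelihood (add-one smoothing)."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Toy training data, invented for demonstration only.
docs = [
    ("refund my order now", "complaint"),
    ("package arrived broken", "complaint"),
    ("great service thank you", "praise"),
    ("love the fast delivery", "praise"),
]
model = train(docs)
print(predict(model, "my package is broken"))  # complaint
```

The same pipeline shape (tokenize, count, score) underlies most of the classical methods in the tools list; deep-learning approaches replace the counting step with learned embeddings.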
Locus is a global supply-chain decision-making platform that uses deep learning and proprietary algorithms to provide route optimization, real-time tracking, insights and analytics, beat optimization, efficient warehouse management, vehicle allocation and utilization, intuitive 3D packing, and measurement of packages. Locus automates the human decisions required to transport a package or a person between any two points on earth, delivering gains in efficiency, consistency, and transparency of operations. Locus, which has processed a peak of 1 million orders in a day (200,000 orders an hour) and is trained and tested on over 100 million order deliveries, works in 75 cities across the globe. Locus works with several large-scale market leaders like Urban Ladder, the Tata Group of Companies, Droplet, Licious, Rollick, Lenskart, and other global FMCG, pharma, e-commerce, 3PL and logistics conglomerates. Locus is backed by some of the biggest names in the market and recently raised $4 Mn in a pre-Series B round; earlier, in 2016, Locus raised $2.75 Mn (INR 18.3 Cr) in Series A funding. Locus was started by Nishith Rastogi and Geet Garg, two ex-Amazon engineers on a mission to democratize logistics intelligence for businesses across industries. Nishith was profiled in Forbes Asia's '30 Under 30' 2018 list. Geet holds a dual degree (BTech and MTech) in Computer Science and Engineering from the Indian Institute of Technology. Our team comprises engineers from the Indian Institute of Technology and Birla Institute of Technology & Science, Pilani, and data scientists with PhDs from Carnegie Mellon University and the Tata Institute of Fundamental Research. Our multifaceted product and business team is led by senior members from Barclays, Google and Goldman Sachs with immense operational execution experience.
Office Location - http://where.locus.sh

Responsibilities:
- Own the design, implementation and maintenance of the infrastructure for the Locus platform, hosted across multiple providers like AWS and Azure across the world.
- Develop and maintain the CI/CD process for enterprise applications, using various tools like Cloudwatch, NewRelic, Nagios etc.
- Automate any manual process across deployment, monitoring or audit-related functions.
- Implement monitoring solutions to assess the health of the infrastructure.
- Own all audit trails, along with relevant alarms to detect any anomalies.
- Ensure we are releasing code safely, securely and frequently.
- Work with developers to optimize applications for performance and scale.
- Build and mentor the team at Locus. We currently have 1 DevOps engineer and are looking at a team of 4 plus 1 lead; you will be the lead.

Must Haves:
- 4+ years of experience as a DevOps engineer at a product company.
- Bachelor's degree in Computer Science or equivalent.
- Expertise in Linux system administration and Bash scripting.
- Experience with configuration management tools such as Ansible, Chef, Fabric, Puppet or SaltStack.
- Hands-on experience building and administering VMs and containers using tools such as Docker, Vagrant and Kubernetes.
- Strong scripting skills.
- Strong experience designing cloud infrastructure on AWS/Azure.
- Experience with Infrastructure as Code services like CloudFormation, Terraform etc. is a huge bonus.
- Experience with monitoring tools like Cloudwatch, NewRelic, Nagios etc.

Perks:
- Healthy catered meals at the office.
- You decide your own Work From Home (WFH) and Out Of Office (OOO) days.
- Pet-friendly: bring your pets to the office any day. The Locus family already has a Rottweiler and a Beagle.
Locus is a state-of-the-art technology platform that automates and optimizes your logistics operations to improve efficiency, consistency and transparency. Founded in 2015 by IIT and BITS alumni, Locus uses proprietary algorithms to automate the human decisions involved in transporting a package or person between two locations. Its sophisticated route optimization engine factors in real-world problems like incomplete addresses, unpredictable traffic, legal route restrictions, time windows and more to reduce human dependency and business cost. Use cases exist across industries, varying from first and last mile to intra-city movement, as well as primary distribution across cities. Additionally, the simulation engine can optimize your network and plan efficient sales beats for your workforce, factoring in routing, skill level, past familiarity, fairness and much more. Several large-scale market leaders in India like BigBasket, Unilever, Marico, Urban Ladder, Gati, the Tata Group and others are production customers of Locus. Locus works on a unique pay-as-you-go, gain-sharing model where the customer pays only on usage and shares a portion of the savings accrued. Locus was started by two ex-Amazon engineers on a mission to democratize logistics intelligence for businesses across industries. Our team comprises engineers from the Indian Institute of Technology and Birla Institute of Technology & Science, Pilani, and data scientists with PhDs from Carnegie Mellon University and the Tata Institute of Fundamental Research. Our multifaceted product and business team is led by senior members from Barclays, Google and Goldman Sachs with immense operational execution experience.

Office Location - http://where.locus.sh

Job Description:
- Design, implement and maintain the infrastructure for the Locus platform, hosted across multiple providers like AWS and Azure across the world.
- Automate any manual process across deployment, monitoring or audit-related functions.
- Implement monitoring solutions to assess the health of the infrastructure.
- Ensure that audit trails are in place, along with relevant alarms to detect any anomalies.
- Ensure we are releasing code safely, securely and frequently.
- Work with developers to optimize applications for performance and scale.

Requirements:
- 1 or more years of experience as a DevOps Engineer.
- Strong scripting skills.
- Strong experience designing cloud infrastructure on AWS/Azure.
- Experience with Infrastructure as Code services like CloudFormation, Terraform etc.
- Experience with CI/CD systems.
- Experience with monitoring tools like Cloudwatch, NewRelic, Nagios etc.

Perks:
- 24-hour free food.
- Unaccounted leave: put OOO on the team calendar and take off whenever you want.
- Pet-friendly: bring your pets to the office. We have a Rottweiler (Onyx) and a Beagle (Bucky) in the office most days.

Project Skynet: We are internally building an AI-based tool that helps find anomalies in production systems. This would help us develop smart alarms that go beyond regular Cloudwatch alarms. If this project succeeds, we believe companies could cut their DevOps workforce in half. Come be part of the journey if you like working on interesting problems.
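The "smarter than a static threshold" idea behind Project Skynet can be sketched very simply: instead of a fixed CloudWatch-style threshold, flag a metric sample when it deviates from a rolling baseline by several standard deviations. The window size, the deviation multiplier, and the metric values below are illustrative assumptions, not details from the posting.

```python
import math
from collections import deque

class RollingAnomalyDetector:
    """Flag a sample that is more than k standard deviations from
    the mean of a sliding window of recent samples."""

    def __init__(self, window=30, k=3.0):
        self.window = deque(maxlen=window)
        self.k = k

    def observe(self, value):
        anomalous = False
        if len(self.window) >= 5:  # need a few samples before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) > self.k * std:
                anomalous = True
        self.window.append(value)
        return anomalous

# Simulated latency readings (made-up numbers); the spike stands out.
detector = RollingAnomalyDetector(window=10, k=3.0)
readings = [100, 102, 99, 101, 100, 98, 103, 100, 500, 101]
flags = [detector.observe(r) for r in readings]
print(flags.index(True))  # 8: the 500ms spike is the only flagged sample
```

A static alarm at, say, 400ms would catch the same spike here, but the rolling baseline adapts as normal traffic drifts, which is the property that makes such alarms "smart".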
Cloud Architect

- 15+ years of overall IT industry experience.
- 6+ years of architecture, design and implementation of highly distributed applications (i.e., an architectural sense for ensuring availability, reliability, etc.).
- Deep understanding of cloud computing technologies, business drivers, emerging computing trends, and deployment.
- Experience with one or more NoSQL databases (e.g., MongoDB, Cassandra).
- Deep technical experience in one or more of the following areas: software design or development, cloud application design, mobility, PaaS, media services, CDN.
- Working knowledge of Agile development, Scrum and Application Lifecycle Management (ALM).
- Deep programming skills in one or more languages: C#/C++/Java/Python.

Good to have:
- Experience deploying applications on CaaS platforms (Docker, Kubernetes) is a huge plus.
- Good experience designing solutions using advanced patterns such as microservices and event-driven architectures.
- CI/CD delivery using code management, configuration management and automation tools such as Git, VSTS, Ansible, DSC, Puppet, Chef, Salt, Jenkins, Maven, etc.
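The event-driven pattern mentioned above can be illustrated in a few lines: producers publish events by topic, and decoupled handlers react without knowing about each other. This is a minimal in-process sketch only; real systems would use a broker such as Kafka or a cloud pub/sub service, and the topic name and handlers here are invented for demonstration.

```python
from collections import defaultdict

class EventBus:
    """Tiny in-process publish/subscribe bus."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Deliver the event to every handler registered for this topic.
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
log = []

# Two independent "services" react to the same event without any
# direct coupling between them: that decoupling is the point of
# event-driven design.
bus.subscribe("order.created", lambda e: log.append(f"audit {e['id']}"))
bus.subscribe("order.created", lambda e: log.append(f"email {e['id']}"))

bus.publish("order.created", {"id": 42})
print(log)  # ['audit 42', 'email 42']
```

Adding a third consumer (say, analytics) requires only another `subscribe` call; the publisher never changes, which is what makes the pattern attractive for microservices.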
Requirements:
• Hands-on experience with the complete design and implementation of CI/CD processes with Git.
• Hands-on, in-depth knowledge of GCP: VPNs, VPCs, storage, tagging, and monitoring and alerting of resources.
• Experience developing access control lists.
• Hands-on experience with containerization (Docker, Kubernetes).
• Experience with automation using any scripting language.
• Hands-on experience configuring and managing data sources like MySQL, Mongo, Elasticsearch, Redis, Cassandra, Hadoop, etc.
• Experience monitoring services via tools such as Graylog, Syslog, NewRelic, Prometheus, Nagios, etc.

Roles & Responsibilities:
• Architect, design, implement and support projects using cloud technologies.
• Production support responsibilities for software and infrastructure fixes.
• Day-to-day operational support of continuous integration, release and source control tooling.
• Work with software, infrastructure and network engineers and DBAs to solve productivity challenges, drive efficiency, and automate and streamline environment builds.
• Co-own monitoring and alert configuration to detect, triage and resolve issues quickly.
• Build tools to enhance production triage and improve time to detect issues.

You must be:
• A team player who likes to work hard and play harder, with excellent interpersonal, organizational and time-management skills.
• Able to think strategically and analytically to effectively complete assigned work within given timelines.
• Someone with excellent written and oral communication skills and attention to detail.
• Able to multi-task across multiple projects at the same time.
• Highly organized, with strong attention to detail.
• Positive and upbeat, with the ability to learn quickly.
• Able to laugh: at others and, most importantly, at yourself. A sense of humor is needed.

You can expect:
• A fast-paced, high-growth startup environment where you will gain a career and not just a job.
• The company invests in your personal and professional development; we support your ongoing education and training by reimbursing you for relevant courses.
• An open office culture with no cabins or cubicles, and a place that wants your input to help us grow.
• The support of your teammates to always do better. Own it and win together!
• Exposure to the international retail market: learn about a high-growth industry and build critical skill sets.
• An excellent employee referral program: refer your friends, work with your friends, and be rewarded for it.
• Work alongside a smart, creative and energetic team who truly believe in 'working hard and partying harder!'

Educational Requirements:
• UG: B.Tech/B.E. in Computer Science/IT or equivalent.
• PG: M.Tech in Computer Science/IT, MCA in Computers, or equivalent.
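The "build tools to enhance production triage" responsibility above usually starts with something like this: scan log output, tally errors per service, and surface the noisiest offenders first. The log-line format, service names and messages below are assumptions for illustration; a real setup would read from Graylog, Syslog or a similar aggregator named in the requirements.

```python
import re
from collections import Counter

# Assumed log format: "<service> <LEVEL> <message>" -- purely illustrative.
LOG_LINE = re.compile(r"^(?P<service>\S+)\s+(?P<level>INFO|WARN|ERROR)\s+(?P<msg>.*)$")

def error_summary(lines, top_n=3):
    """Count ERROR lines per service and return the worst offenders first."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("service")] += 1
    return counts.most_common(top_n)

# Made-up sample log lines for demonstration.
logs = [
    "checkout ERROR payment gateway timeout",
    "checkout ERROR payment gateway timeout",
    "search INFO query served in 12ms",
    "inventory ERROR stale cache entry",
]
print(error_summary(logs))  # [('checkout', 2), ('inventory', 1)]
```

From here, a triage tool typically adds time bucketing and deduplication of repeated messages, so on-call engineers see "what changed" rather than a raw firehose, which is exactly what improves time to detect issues.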