About 7 Innovation Labs

Data is changing human lives at the core: we collect data about everything, learn from it, and apply those learnings in all aspects of our lives. 7 is at the forefront of applying data and machine learning to the world of customer acquisition and customer engagement. Our customer acquisition cloud uses the best of ML and AI to reach the right audiences, and our engagement cloud powers interactions for the best experience. We serve Fortune 100 enterprises globally and hundreds of millions of their customers every year, enabling 1.5B customer interactions annually.

We work on several challenging problems in data processing and machine learning, and use artificial intelligence to power Smart Agents. How do you process millions of events in a stream to derive intelligence? How do you learn from troves of data by applying scalable machine learning algorithms? How do you combine those learnings with real-time streams to make decisions in under 300 msec at scale? We work with the best of open source technologies: Akka, Scala, Undertow, Spark, Spark ML, Hadoop, Cassandra, Mongo. Platform scale and real time are in our DNA, and we work hard every day to change the game in customer engagement. We believe in empowering smart people to work on larger-than-life problems with the best technologies and like-minded people.

We are a pre-IPO, Silicon Valley-based company with many global brands as our customers – Hilton, eBay, Time Warner Cable, Best Buy, Target, American Express, Capital One and United Airlines. We touch more than 300M online visitors every month with our technologies. We have one of the best work environments in Bangalore.

Opportunity

Principal Member of Technical Staff is one of our distinguished individual contributor roles, for engineers who can take on problems of size and scale.
You will be responsible for working with a team of smart, highly capable engineers to design solutions and work closely on implementation, testing, deployment and 24x7 runtime operation with 99.99% uptime. You will have to demonstrate your technical mettle and influence and inspire the engineers to build things right. You will be working on problems in one or more of these areas:

• Data Collection: a horizontally scalable platform to collect billions of events from around the world in as little as 50 msec.
• Intelligent Campaign Engines: make real-time decisions, using those events, on the best experience to display in as little as 200 msec.
• Real-time Stream Computation: compute thousands of metrics on the incoming billions of events and make them available for decisioning and analytics.
• Data Pipeline: a scalable data transport layer using Apache Kafka, running across hundreds of servers and transporting billions of events in real time.
• Data Analysis: distributed OLAP engines on Hadoop or Spark to provide real-time analytics on the data.
• Large-scale Machine Learning: supervised and unsupervised learning on Hadoop and Spark using the best open source frameworks.

In this role, you will present your work at meetups and conferences worldwide and contribute to open source. You will help attract the right talent and groom engineers to be their best.

Must Have

Engineering
• Strong foundation in Computer Science, through education and/or experience: data structures, algorithms, design thinking, optimizations.
• An outstanding technical contributor with accomplishments that include building products and platforms at scale.
• Outstanding technical acumen and a deep understanding of the problems of distributed systems and scale, with a strong orientation towards open source.
• Experience building platforms with 99.99% uptime requirements at scale.
• Experience working in a fast-paced environment with attention to detail and incremental delivery through automation.
• Loves to code more than to talk.
• 10+ years of experience building software systems, or able to demonstrate such maturity without the years under the belt.

Behavioral
• Loads of energy and a can-do attitude to take BIG problems by the horns and solve them.
• Entrepreneurial spirit to conceive ideas, turn challenges into opportunities and build products.
• Ability to inspire other engineers to do the unimagined and go beyond their comfort zones.
• A role model for upcoming engineers in the organization, especially new college grads.

Technology background
• Strong preference for experience with open source technologies: working with various Java application servers or Scala.
• Experience deploying web applications and services that run across thousands of servers globally with very low latency and high uptime.
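The real-time stream computation described above (thousands of rolling metrics over event streams, feeding sub-300 msec decisions) can be sketched at toy scale. This is a minimal single-node illustration, not 7's actual platform; the class name, event keys and window size are invented for the example. A production system would shard state like this across many nodes.

```python
from collections import deque, defaultdict

class SlidingWindowCounter:
    """Toy per-key event counter over a fixed time window.

    Illustrates the kind of rolling metric a real-time stream
    processor keeps in memory so that decisions can be made in
    milliseconds without scanning historical data.
    """

    def __init__(self, window_ms: int):
        self.window_ms = window_ms
        self.events = defaultdict(deque)  # key -> deque of timestamps (ms)

    def record(self, key: str, ts_ms: int) -> None:
        self.events[key].append(ts_ms)

    def count(self, key: str, now_ms: int) -> int:
        q = self.events[key]
        # Evict events that have fallen out of the window.
        while q and q[0] <= now_ms - self.window_ms:
            q.popleft()
        return len(q)

# Example: count events per visitor in a 60-second window.
w = SlidingWindowCounter(window_ms=60_000)
for ts in (1_000, 2_000, 70_000):
    w.record("visitor-42", ts)
print(w.count("visitor-42", now_ms=75_000))  # → 1 (the two earlier events expired)
```

Eviction happens lazily at read time, so each query pays only for the events it discards; `deque.popleft` keeps that amortized O(1).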
10+ years of industry experience, including at least 5 years in a development role and at least 2 years in a technical management role. Good experience working on scalable products.
We are looking for Python developers with good exposure to distributed systems. This is an IC role; candidates with excellent design and coding skills are preferred. You must be willing to adapt to new languages, as the team is very dynamic in nature.

ABOUT THE JOB
Is Big Data really big? If you want to explore this area, learn what massive data volumes mean and how the internet works, join the Distributed Data Engineering team: a small group of elite software engineers who analyze, design and implement system software that brings new functionality, increased reliability, and enhanced scalability to Akamai’s high-performance Distributed Data platform.

ABOUT THE TEAM
The Distributed Data Engineering team (DDE) develops and operates the networks that process, aggregate and store data about every transaction that involves Akamai edge network servers. Data owned by DDE is consumed for customer billing, customer analytics, business decision support, Akamai’s cost structure management and Akamai’s network management. DDE currently receives over 2PB of data per day and maintains a data store that processes 3 trillion records daily. The product development team within DDE has end-to-end responsibility for the design, development and deployment of the platform components that enable one of the world’s largest cloud-based data systems.

ELEVATOR PITCH: THREE REASONS A GREAT CANDIDATE SHOULD BE ATTRACTED TO THIS OPPORTUNITY
1. This role is central to the performance-critical message brokering subsystem.
2. This role demands analyzing, designing and implementing system software that brings new functionality, increased reliability, and enhanced scalability to Akamai’s high-performance Distributed Data platform.
3. You are expected to take ownership of the design of the platform components that enable one of the world’s largest cloud-based data systems.
RESPONSIBILITIES
• Develop new features and enhance existing ones for DDE's massively distributed system
• Work on the performance-critical message brokering subsystem
• Work on data collection, processing, and access subsystems
• Work on projects that focus on system scalability, performance, and security
• Drive feature development from idea inception through design and testing to operational deployment
• Follow software development best practices, including collaboration with QA departments to successfully deploy high-quality new system components

BASIC QUALIFICATIONS
• BS in Computer Science or equivalent; MS preferred
• 6+ years of experience developing software in Python
• 3+ years of experience with Linux and distributed systems
• Knowledge of networking principles, including the TCP/IP, SSH, SSL and HTTP protocols
• Knowledge of software development and design principles
• Ability to troubleshoot complex network problems and customer issues

DESIRED QUALIFICATIONS
• Proven track record of delivering large amounts of high-quality, complex code
• Highly responsible, motivated, and able to work with little supervision
• Experience with Big Data systems (Hadoop, Spark, Kafka, etc.) and principles (MapReduce, etc.)
• Experience with scripting (e.g. Perl, Python, bash) and APIs such as SOAP and/or REST
• Experience with a DBMS, e.g. PostgreSQL, MySQL, etc.
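The message brokering subsystem named in the responsibilities can be illustrated with a toy in-process publish/subscribe broker. This is a minimal sketch, not Akamai's implementation; the topic and subscriber names are invented, and a real broker would add persistence, batching, back-pressure and replication across nodes.

```python
from collections import defaultdict, deque

class MiniBroker:
    """Minimal topic-based publish/subscribe broker.

    Each subscriber gets its own queue per topic, so consumers
    (e.g. billing vs. analytics) can drain the same event stream
    independently and at their own pace.
    """

    def __init__(self):
        self.queues = defaultdict(dict)  # topic -> {subscriber: deque}

    def subscribe(self, topic: str, subscriber: str) -> None:
        self.queues[topic].setdefault(subscriber, deque())

    def publish(self, topic: str, message: bytes) -> int:
        # Fan the message out to every subscriber's queue.
        for q in self.queues[topic].values():
            q.append(message)
        return len(self.queues[topic])  # number of subscribers reached

    def poll(self, topic: str, subscriber: str):
        q = self.queues[topic].get(subscriber)
        return q.popleft() if q else None  # None when nothing is pending

broker = MiniBroker()
broker.subscribe("edge-logs", "billing")
broker.subscribe("edge-logs", "analytics")
broker.publish("edge-logs", b"GET /index.html 200")
print(broker.poll("edge-logs", "billing"))  # → b'GET /index.html 200'
```

Fan-out-on-publish keeps reads trivially fast; the alternative (one shared log with per-subscriber offsets, as Kafka does) trades that for cheaper writes and replayability.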
Company Profile:
Livspace is a design- and technology-first team building a next-generation e-commerce platform for home decor and interior design. We employ a combination of data science, algorithms, and industrial design to create unique experiences for homeowners and to scale the job of interior designers. Livspace's vision is to make homes beautiful through design, products, and services the world over. The engineering team at Livspace is responsible for building the e-commerce and design platform, and we own the entire engineering infrastructure stack.

As a Tech Lead, you will work as part of our diverse team. If you enjoy working in a dynamic environment to deliver world-class, mission-critical systems, this may be the career opportunity for you. Being a startup, we move fast and constantly ship new features. If fast-paced, fun environments and complete ownership of web products excite you, you'll be at home. We are a well-funded Internet company, backed by top investors (Bessemer Venture Partners, Helion Venture Partners).

Key Profiles:
CEO: Anuj S: https://www.linkedin.com/in/anujs
CTO: Ramakant Sharma: https://in.linkedin.com/in/sharmaramakant

Press:
1. https://inc42.com/startups/livspace-growth-story/
2. https://yourstory.com/2016/07/livspace-design-entrepreneur/
3. http://www.hindustantimes.com/business/not-flipkart-or-urban-ladder-livspace-to-compete-with-ikea/story-4JJtmWxffUwQUacncf4UxN.html

Your Responsibilities:
1. Design, implement and enhance new components of the Livspace design and content management platform
2. Design new features for the e-commerce properties, front-end products and mobile apps
3. Write maintainable, scalable and efficient code to solve business problems, with strong design skills in data modeling and low-level class design
4. Maintain engineering infrastructure
5. Participate in all phases of development, from design through implementation, unit testing and release
6.
Engage with Product Management and Business to drive the agenda, set your priorities and deliver awesome products
7. Build a web product that users love
8. Unit-test code for robustness, including edge cases, usability, and general reliability
9. Follow the SDLC in an agile environment and collaborate with multiple cross-functional teams to drive on-time deliveries
10. Mentor young minds and foster team spirit

Who you are:
1. You have earned a B.Tech or equivalent degree in computer science or a related engineering field, with strong competencies in data structures, algorithms, and software design
2. You have 5-8 years of experience working with large-scale web products, including 1-2 years of team management experience
3. Experience building large-scale web services; extensive knowledge of HTTP, REST APIs, JSON, and PHP or Java
4. Sound knowledge and application of algorithms and data structures, including space and time complexities
5. A penchant for solving complex and interesting problems
6. Experience in a startup-like environment with high levels of ownership and commitment
7. Experience building user interfaces using JS frameworks like AngularJS/ReactJS
8. Excellent coding skills: able to convert design into code fluently, and to write unit and integration tests with reasonable coverage of code and interfaces
9. Hands-on experience with eCommerce and CMS applications is a plus
10. Experience building highly scalable business applications that implement large, complex business flows and deal with huge amounts of data; experience with multi-threading and concurrency programming
Job Title: Distributed Systems Engineer - SDET
Job Location: Pune, India

Job Description:
Are you looking to put your computer science skills to use? Are you looking to work for one of the hottest start-ups in Silicon Valley? Are you looking to define the next-generation data management platform based on Apache Spark? Are you excited by the idea of being a Spark committer? If you answered yes to all of the questions above, we definitely want to talk to you. We are looking to add highly motivated engineers to work as QE software engineers in our product development team in Pune. We work on cutting-edge data management products that transform the way businesses operate. As a distributed systems engineer, you will get to work on defining key elements of our real-time analytics platform, including:
1. Distributed in-memory data management
2. OLTP and OLAP querying in a single platform
3. Approximate query processing over large data sets
4. Online machine learning algorithms applied to streaming data sets
5. Streaming and continuous querying

Requirements:
1. Experience testing modern SQL and NewSQL products is highly desirable
2. Experience with the SQL language, JDBC, and end-to-end testing of databases
3. Hands-on experience writing SQL queries
4. Experience with database performance benchmarks like TPC-H, TPC-C and TPC-E is a plus
5. Prior experience benchmarking against Cassandra or MemSQL is a big plus
6. Able to program in Java, or some exposure to functional programming in Scala
7. You should care about performance, and by that we mean performance optimizations in a JVM
8. Self-motivated and driven to succeed
9. If you are an open source committer on any project, especially an Apache project, you will fit right in
10. Experience working with Spark, Spark SQL and Spark Streaming is a BIG plus
11. Plan and author test plans, and ensure testability is considered by development at all stages of the life cycle
12.
Plan, schedule and track the creation of test plans and automation scripts using defined methodologies for manual and/or automated tests
13. Work as a QE team member in troubleshooting, isolating, reproducing and tracking bugs and verifying fixes
14. Analyze test results to ensure existing functionality is preserved, and recommend corrective action; document test results and maintain defect and test case databases to assist in process improvement and estimation of future releases
15. Assess and plan the test effort required to automate new functions/features under development; influence design changes to improve quality and feature testability
16. If you have solved big, complex problems, we want to talk to you
17. If you are a math geek with a background in statistics and mathematics, and you know what a linear regression is, this just might be the place for you
18. Exposure to stream data processing with Storm or Samza is a plus

Open source contributors: send us your GitHub id.

Product:
SnappyData is a new real-time analytics platform that combines probabilistic data structures, approximate query processing and in-memory distributed data management to deliver powerful analytic querying and alerting capabilities on Apache Spark at a fraction of the cost of traditional big data analytics platforms. SnappyData fuses the Spark computational engine with a highly available, multi-tenanted in-memory database to execute OLAP and OLTP queries on streaming data. Further, SnappyData can store data in a variety of synopsis data structures to provide extremely fast responses with fewer resources. Finally, applications can either submit Spark programs or connect using JDBC/ODBC to run interactive or continuous SQL queries.

Skills:
1. Distributed Systems
2. Scala
3. Apache Spark
4. Spark SQL
5. Spark Streaming
6. Java
7. YARN/Mesos

What's in it for you:
1. Cutting-edge work that is ultra-meaningful
2. Colleagues who are the best of the best
3.
Meaningful startup equity
4. Competitive base salary
5. Full benefits
6. Casual, fun office

Company Overview:
SnappyData is a Silicon Valley-funded startup founded by engineers who pioneered the distributed in-memory data business. It is advised by some of the legends of the computing industry, who have been instrumental in creating multiple disruptions that have defined computing over the past 40 years. The engineering team behind SnappyData built GemFire, one of the industry-leading in-memory data grids, which is used worldwide in mission-critical applications ranging from finance to retail.
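The approximate query processing SnappyData describes can be illustrated with the simplest possible synopsis: a Bernoulli sample with inverse-probability scaling. This is a toy sketch of the idea, not SnappyData's engine; the function name, dataset and sampling rate are invented, and real AQP engines use richer synopses (stratified samples, sketches) and attach error bounds to every estimate.

```python
import random

def approx_sum(stream, sample_rate: float, seed: int = 7) -> float:
    """Estimate the sum of a stream from a Bernoulli sample.

    Keep each element with probability `sample_rate`, answer the
    aggregate from the sample only, and scale the result by the
    inverse sampling rate to make the estimate unbiased.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    sampled = [x for x in stream if rng.random() < sample_rate]
    return sum(sampled) / sample_rate

data = list(range(10_000))  # exact sum is 49,995,000
estimate = approx_sum(data, sample_rate=0.1)
error = abs(estimate - 49_995_000) / 49_995_000
print(f"estimate={estimate:.0f}, relative error={error:.2%}")
```

The payoff is that the aggregate touches roughly 10% of the rows; the cost is a small, quantifiable error, which is exactly the trade a synopsis store makes to answer queries "at a fraction of the cost".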
Your Part:

Communication
• Drive discussions to create and improve product, process and technology

Coding
• Think tools; create tools
• Build abstractions and contracts with separation of concerns for a larger scope
• Rapid prototyping
• Robust and scalable web-based applications

Design
• Do high-level design with guidance: functional modelling, break-down of a module
• Think platforms and reuse

Architecture
• Make incremental changes to architecture, with impact analysis of the same
• Do performance tuning and improvements in large-scale distributed systems

Org Development
• Mentor young minds and foster team spirit

Your Array (Nice to Haves):
• The farsightedness it takes to look at business problems critically from more than one perspective
• The capability to craft object-oriented models and design data structures, and to implement business logic and data models with suitable class design
• Ability to break down larger, fuzzier problems into smaller ones within the scope of the product
• Sound soft skills to gel with colleagues from other teams in order to harness the development process
• An understanding of the industry's coding standards and an ability to create appropriate technical documentation

You (Must Haves):
• Extensive and expert programming experience in any one programming language (strong OO skills preferred)
• Deep experience in at least one object-oriented programming language (Java, C/C++, Ruby, Clojure or Scala), plus SQL
• A solid foundation in computer science, with strong competencies in data structures, algorithms, and software design
• A penchant for solving complex and interesting problems
• Experience in a startup-like environment with high levels of ownership and commitment
• BTech, MTech, or PhD in Computer Science or a related technical discipline (or equivalent)
• Excellent coding skills: able to convert design into code fluently
• Good skills in writing unit and integration tests with reasonable coverage of code and interfaces; TDD is a plus
• Experience building highly scalable business applications that implement large, complex business flows and deal with huge amounts of data
• Experience with multi-threading and concurrency programming
• Proven exposure to the art of writing code and solving problems at large scale

Your Cheers!
Apart from the general benefits of best-in-industry compensation, equity, healthcare, etc., Flipkart prides itself on offering great work, great people and a great environment. We call ourselves an incubator for engineers, where you get all the optimal conditions to do and experience your best work.
Are you seeking a fast-paced working environment, rapid development, and more autonomy, with a transparent and open culture that thrives on hustle?
COMPANY PROFILE - Manthan (www.manthan.com)

Manthan serves as the Chief Analytics Officer for global consumer industries. With its portfolio of analytics products and solutions, Manthan helps its customers derive competitive advantage through data-driven decisions. Manthan's solutions are architected with deep industry specificity, bringing together analytics, technology and industry practices to deliver sophisticated yet intuitive analytical capability. Headquartered in Bangalore with offices in the US, UK, Singapore and Brazil, Manthan has a client footprint spanning 20 countries. Manthan serves the analytics needs of the Retail, CPG, Pharma and Market Research industries. Manthan is venture-funded, with Fidelity Private Equity, Norwest Venture Partners and Temasek on its board.