We are looking to hire a Tech Lead experienced in virtualization, cloud and distributed systems to work on next-generation software-defined compute, storage and networking technologies. You will be involved in architecting and building infrastructure products requiring knowledge of virtualization, containers, networking and storage. Technical leadership, code reviews, pair programming, and writing production-level code along with unit tests will be your daily job.
Job Description
SigTuple is seeking Systems or Backend Engineers to build a highly scalable platform for running deep learning based medical solutions. The engineering team at SigTuple is responsible for simplifying complex medical-analysis workflows into an elegant software design, making it scalable and distributing it across hundreds of machines. In this role, you'll use a combination of systems design experience, network knowledge, troubleshooting skills, and programming to ingest terabytes of data, analyse it using distributed computing, and deliver infrastructure and storage platform services. The ideal candidate is a technical generalist with skills ranging from production systems/network management to software development. Experience with delivering on cloud-based platforms is a must.
Responsibilities
1. Design and create distributed system software frameworks for processing data in a near real-time manner.
2. Create next-generation software for data scientists to analyse and train their AI models, which requires you to understand data science and AI.
3. Take complete ownership of infrastructure components and automate operational activities.
4. Ensure the reliability of all systems built by the various development teams.
Requirements
1. BTech/MTech in any engineering discipline.
2. 3-6 years of experience in a Backend or Systems role.
3. Experience in the management of cloud computing services. Extensive knowledge of any one cloud platform (AWS, Azure, OpenStack, etc.)
4. Proficiency with OS and network fundamentals.
5. Experience of working at scale is a must.
6. Hands-on experience with machine learning is a plus.
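To give a flavour of the distributed, near real-time processing the SigTuple role describes, here is a minimal pure-Python sketch of the fan-out pattern: split data into chunks and process them in parallel on a worker pool. It is purely illustrative (the function names are invented, and a real pipeline would distribute across machines, not threads).

```python
from concurrent.futures import ThreadPoolExecutor

def analyse_chunk(chunk):
    # Stand-in for a real analysis step (e.g. a model inference call).
    return sum(chunk) / len(chunk)

def run_pipeline(data, chunk_size=4, workers=4):
    # Split the input into chunks and fan them out to a worker pool.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyse_chunk, chunks))
```

The same shape scales up by swapping the thread pool for a cluster scheduler (e.g. Spark or a job queue) while keeping `analyse_chunk` unchanged.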
The company is a US-based early-stage startup founded by accomplished founders (ex-VxTel, ex-Virident Systems) and is looking at using big data and machine learning techniques for certain high-bandwidth, high-volume data processing use cases. The technology domain involves development activities around:
i) big data components - Spark, Hadoop, HDFS
ii) ML components - TensorFlow, Spark MLlib
iii) cloud-hosted (AWS) scalable data- and control-path software
iv) high-performance data paths using GPUs
v) algorithms for efficient data summarization
We are looking for strong developers with the following skills:
i) hands-on experience developing large-scale AWS-hosted software applications
ii) experience developing & debugging distributed applications on the big data stack - Spark/Spark Streaming/Hadoop MapReduce, etc.
Work with the Sales team to identify and qualify business opportunities. Identify key customer technical objections and develop the strategy to resolve technical impediments to business transactions.
Take responsibility for technical aspects of solutions, including activities such as supporting bid responses, product and solution briefings, proof-of-concept work, and the coordination of supporting technical resources.
Work closely with Google Cloud Platform products to demonstrate and prototype integrations in customer/partner environments.
Prepare and deliver product messaging to highlight the G Suite value proposition, using techniques including whiteboard and slide presentations, product demonstrations, white papers, trial management and RFI response documents.
Deliver recommendations on integration strategies, enterprise architectures, platforms and application infrastructure required to successfully implement a complete solution, providing best-practice advice to customers to optimize Google Cloud Platform effectiveness.
SMTS – Data Analytics Platform
About 7 Innovation Labs
Data is changing human lives at the core: we collect so much data about everything, use it to learn many things, and apply the learnings in all aspects of our lives. 7 is at the forefront of applying data and machine learning to the world of customer acquisition and customer engagement. Our customer acquisition cloud uses the best of ML and AI to reach the right audiences, and our engagement cloud powers the interactions for the best experience. We serve Fortune 100 enterprises globally and hundreds of millions of their customers every year, enabling 1.5B customer interactions annually.
We work on several challenging problems in the world of data processing and machine learning, and use artificial intelligence to power Smart Agents. How do you process millions of events in a stream to derive intelligence? How do you learn from troves of data by applying scalable machine learning algorithms? How do you combine the learnings with real-time streams to make decisions in under 300 ms at scale? We work with the best of open source technologies: Akka, Scala, Undertow, Spark, Spark ML, Hadoop, Cassandra, MongoDB. Platform scale and real time are in our DNA, and we work hard every day to change the game in customer engagement. We believe in empowering smart people to work on larger-than-life problems with the best technologies and like-minded people.
We are a pre-IPO Silicon Valley based company with many global brands as our customers – Hilton, eBay, Time Warner Cable, Best Buy, Target, American Express, Capital One and United Airlines. We touch more than 300M online visitors every month with our technologies. We have one of the best work environments in Bangalore.
Eligibility: Bachelor's or Master's degree in Computer Science, Electrical Engineering, or equivalent
Job Responsibilities:
• Design and develop various components of the big data platform.
• Write efficient, quality code that scales to high-volume production use.
• Work closely with multiple product management and engineering teams to lead the design, build and test of the components of the platform.
• Research and experiment with emerging technologies and tools related to big data.
Required Traits:
• 4 to 6 years of software development experience using multiple programming languages.
• Strong understanding of data structures and algorithms.
• Knowledge of Java/OO technologies, and experience developing on commercial software platforms and/or large-scale data infrastructures.
• Experience in developing large-scale J2EE data processing systems/applications. Experience with real-time systems is preferred.
• Must have experience in core Java/Scala, Spark, Groovy & AngularJS.
• Should be proficient in MongoDB, Cassandra, Couchbase & MS SQL Server.
• Good to have knowledge of Druid, Postgres & Zeppelin.
• Experience with one or more big data architectures, including OpenStack, Hadoop, Pig, Hive or other big data frameworks.
• Ability to participate in large-scale initiatives and work towards common goals.
• Excellent oral and written communication, presentation, and analytical skills.
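As a flavour of the "decisions on a stream in under 300 ms" problem the posting mentions, here is a minimal pure-Python sketch of a sliding-window event counter, the building block behind questions like "how many events did this user generate in the last five minutes?". It is illustrative only (the class name and window size are invented; a production system would shard this across a stream processor).

```python
import time
from collections import deque

class SlidingWindowCounter:
    """Count events seen in the trailing `window` seconds (amortised O(1) per event)."""

    def __init__(self, window=300.0):
        self.window = window
        self._events = deque()  # timestamps, oldest first

    def record(self, ts=None):
        # Append an event timestamp; defaults to the monotonic clock.
        ts = time.monotonic() if ts is None else ts
        self._events.append(ts)
        self._evict(ts)

    def count(self, now=None):
        self._evict(time.monotonic() if now is None else now)
        return len(self._events)

    def _evict(self, now):
        # Drop timestamps that have fallen out of the window.
        while self._events and self._events[0] <= now - self.window:
            self._events.popleft()
```

Because eviction only ever pops from the front of the deque, each lookup stays well inside a sub-millisecond budget, which is what makes sub-300 ms end-to-end decisions feasible.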
We are looking for Python developers with good exposure to distributed systems. This is an IC role; candidates with excellent design and coding skills are preferred. You must be willing to pick up new languages, as the team is very dynamic in nature.
ABOUT THE JOB
Is big data really big? If you want to explore this area, learn what massive data volumes mean and how the internet works, join the Distributed Data Engineering team – a small group of elite software engineers who analyze, design and implement system software that brings new functionality, increased reliability, and enhanced scalability to Akamai's high-performance Distributed Data platform.
ABOUT THE TEAM
The Distributed Data Engineering team (DDE) develops and operates the networks that process, aggregate and store data about every transaction that involves Akamai edge network servers. Data owned by DDE is consumed for customer billing, customer analytics, business decision support, Akamai's cost-structure management and Akamai's network management. DDE currently receives over 2PB of data per day and maintains a data store that processes 3 trillion records daily. The product development team within DDE has end-to-end responsibility for the design, development and deployment of the platform components that enable one of the world's largest cloud-based data systems.
ELEVATOR PITCH: THREE REASONS A GREAT CANDIDATE SHOULD BE ATTRACTED TO THIS OPPORTUNITY
1. You will play a critical part in a performance-critical message-brokering subsystem.
2. You will analyze, design and implement system software that brings new functionality, increased reliability, and enhanced scalability to Akamai's high-performance Distributed Data platform.
3. You will be expected to take ownership of the design of the platform components that enable one of the world's largest cloud-based data systems.
RESPONSIBILITIES
• Develop new and enhance existing features for DDE's massively distributed system
• Work on the performance-critical message-brokering subsystem
• Work on data collection, processing, and access subsystems
• Work on projects that focus on system scalability, performance, and security
• Drive feature development from idea inception through design and testing to operational deployment
• Follow software development methodology best practices, including collaboration with QA departments, to successfully deploy high-quality new system components
BASIC QUALIFICATIONS
• BS in Computer Science or equivalent; MS preferred
• 6+ years of experience developing software in Python
• 3+ years of experience with Linux and distributed systems
• Knowledge of networking principles, including the TCP/IP, SSH, SSL and HTTP protocols
• Knowledge of software development and design principles
• Ability to troubleshoot complex network problems and customer issues
DESIRED QUALIFICATIONS
• Proven track record of delivering large amounts of high-quality, complex code
• Highly responsible, motivated, able to work with little supervision
• Experience with big data systems (Hadoop, Spark, Kafka, etc.) and principles (MapReduce, etc.)
• Experience with scripting (e.g. Perl, Python, bash) and APIs such as SOAP and/or REST
• Experience with DBMSs, e.g. PostgreSQL, MySQL, etc.
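The desired qualifications above mention MapReduce principles. A minimal pure-Python illustration of the idea (not Akamai's implementation, and toy-scale by design): map each document to partial counts independently, then merge the partial results in a reduce step.

```python
from collections import Counter
from functools import reduce

def map_phase(doc):
    # Each document is processed independently -> parallelisable.
    return Counter(doc.split())

def reduce_phase(acc, partial):
    # Merge two partial count tables; associative, so merge order is free.
    acc.update(partial)
    return acc

def word_count(docs):
    # Map every document, then fold the partial counts into one result.
    return reduce(reduce_phase, map(map_phase, docs), Counter())
```

In a real framework the map calls run on many machines and the reduce step is applied per key after a shuffle; the associativity of `reduce_phase` is exactly what makes that distribution correct.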
Company Profile:
Livspace is a design- and technology-first team building a next-generation e-commerce platform for home decor and interior design. We employ a combination of data science, algorithms, and industrial design to create unique experiences for homeowners and to scale the job of interior designers. Livspace's vision is to make homes beautiful through design, products, and services the world over. The engineering team at Livspace is responsible for building the e-commerce and design platform, and we own the entire stack of the engineering infrastructure. As a Tech Lead you will work as part of our diverse team; if you enjoy working in a dynamic environment to deliver world-class mission-critical systems, this may be the career opportunity for you. Being a startup, we move fast and constantly ship new features. If fast-paced, fun environments and complete ownership of web products excite you, you'll be at home. We are a well-funded internet company, backed by top investors (Bessemer Venture Partners, Helion Venture Partners).
Key Profiles:
CEO: Anuj S: https://www.linkedin.com/in/anujs
CTO: Ramakant Sharma: https://in.linkedin.com/in/sharmaramakant
1. PR: https://inc42.com/startups/livspace-growth-story/
2. PR: https://yourstory.com/2016/07/livspace-design-entrepreneur/
3. PR: http://www.hindustantimes.com/business/not-flipkart-or-urban-ladder-livspace-to-compete-with-ikea/story-4JJtmWxffUwQUacncf4UxN.html
Your Responsibility
1. Design, implement and enhance new components of the Livspace design and content management platform
2. Design new features for the e-commerce properties, front-end products and mobile apps
3. Write maintainable/scalable/efficient code to solve business problems
4. Apply strong design skills involving data modeling and low-level class design
5. Maintain engineering infrastructure
6. Participate in all phases of development, from design to implementation, unit testing and release
7. Engage with Product Management and Business to drive the agenda, set your priorities and deliver awesome products
8. Build a web product that users love
9. Unit-test code for robustness, including edge cases, usability, and general reliability
10. Follow the SDLC in an agile environment and collaborate with multiple cross-functional teams to drive on-time deliveries
11. Mentor young minds and foster team spirit
Who you are:
1. You earned a B.Tech or equivalent degree in computer science or a related engineering field, with strong competencies in data structures, algorithms, and software design.
2. You have 5-8 years of experience working with large-scale web products, with 1-2 years of team management experience.
3. Experience building large-scale web services. Extensive knowledge of HTTP, REST APIs, JSON, and PHP or Java.
4. Sound knowledge and application of algorithms and data structures, with their space and time complexities.
5. A penchant for solving complex and interesting problems.
6. Experience in a startup-like environment with high levels of ownership and commitment.
7. Experience building user interfaces using JS frameworks like AngularJS/ReactJS.
8. Excellent coding skills – able to convert design into code fluently. Good skills in writing unit & integration tests with reasonable coverage of code & interfaces.
9. Hands-on experience in e-commerce and CMS applications is a plus.
10. Experience building highly scalable business applications that involve implementing large, complex business flows and dealing with huge amounts of data. Experience with multi-threading and concurrency programming.
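The responsibilities above call for unit-testing code for robustness, including edge cases. A small illustrative sketch (the `paginate` helper is invented, not part of the Livspace stack): a pagination function of the kind an e-commerce listing page needs, with tests covering the empty, out-of-range, and invalid-input edges as well as the happy path.

```python
import unittest

def paginate(items, page=1, per_page=10):
    """Return one page of `items`; pages are 1-indexed."""
    if page < 1 or per_page < 1:
        raise ValueError("page and per_page must be positive")
    start = (page - 1) * per_page
    return items[start:start + per_page]

class PaginateTests(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(paginate(list(range(25)), page=2, per_page=10),
                         list(range(10, 20)))

    def test_edge_cases(self):
        self.assertEqual(paginate([], page=1), [])      # empty input
        self.assertEqual(paginate([1, 2], page=5), [])  # past the end
        with self.assertRaises(ValueError):             # invalid page
            paginate([1], page=0)
```

Run with `python -m unittest` from the module's directory.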
Job Title: Distributed Systems Engineer - SDET
Job Location: Pune, India
Job Description:
Are you looking to put your computer science skills to use? Are you looking to work for one of the hottest start-ups in Silicon Valley? Are you looking to define the next-generation data management platform based on Apache Spark? Are you excited by the idea of becoming a Spark committer? If you answered yes to all of the questions above, we definitely want to talk to you. We are looking to add highly motivated engineers to work as QE software engineers in our product development team in Pune. We work on cutting-edge data management products that transform the way businesses operate. As a distributed systems engineer, you will get to work on defining key elements of our real-time analytics platform, including:
1. Distributed in-memory data management
2. OLTP and OLAP querying in a single platform
3. Approximate query processing over large data sets
4. Online machine learning algorithms applied to streaming data sets
5. Streaming and continuous querying
Requirements:
1. Experience in testing modern SQL and NewSQL products is highly desirable
2. Experience with the SQL language, JDBC, and end-to-end testing of databases
3. Hands-on experience in writing SQL queries
4. Experience with database performance benchmarks like TPC-H, TPC-C and TPC-E is a plus
5. Prior experience benchmarking against Cassandra or MemSQL is a big plus
6. You should be able to program in Java or have some exposure to functional programming in Scala
7. You should care about performance, and by that we mean performance optimizations in a JVM
8. You should be self-motivated and driven to succeed
9. If you are an open source committer on any project, especially an Apache project, you will fit right in
10. Experience working with Spark, Spark SQL and Spark Streaming is a BIG plus
11. Plan and author test plans, and ensure testability is considered by development at all stages of the life cycle
12. Plan, schedule and track the creation of test plans / automation scripts, using defined methodologies, for manual and/or automated tests
13. Work as a QE team member in troubleshooting, isolating, reproducing and tracking bugs, and verifying fixes
14. Analyze test results to verify existing functionality and recommend corrective action. Document test results; manage and maintain defect & test case databases to assist in process improvement and estimation of future releases
15. Assess and plan the test effort required for automation of new functions/features under development. Influence design changes to improve quality and feature testability
16. If you have solved big, complex problems, we want to talk to you
17. If you are a math geek with a background in statistics and mathematics, and you know what a linear regression is, this just might be the place for you
18. Exposure to stream data processing with Storm or Samza is a plus
Open source contributors: send us your GitHub id
Product:
SnappyData is a new real-time analytics platform that combines probabilistic data structures, approximate query processing and in-memory distributed data management to deliver powerful analytic querying and alerting capabilities on Apache Spark at a fraction of the cost of traditional big data analytics platforms. SnappyData fuses the Spark computational engine with a highly available, multi-tenanted in-memory database to execute OLAP and OLTP queries on streaming data. Further, SnappyData can store data in a variety of synopsis data structures to provide extremely fast responses using fewer resources. Finally, applications can either submit Spark programs or connect using JDBC/ODBC to run interactive or continuous SQL queries.
Skills:
1. Distributed Systems
2. Scala
3. Apache Spark
4. Spark SQL
5. Spark Streaming
6. Java
7. YARN/Mesos
What's in it for you:
1. Cutting-edge work that is ultra meaningful
2. Colleagues who are the best of the best
3. Meaningful startup equity
4. Competitive base salary
5. Full benefits
6. Casual, fun office
Company Overview:
SnappyData is a Silicon Valley funded startup founded by engineers who pioneered the distributed in-memory data business. It is advised by some of the legends of the computing industry, who have been instrumental in creating multiple disruptions that have defined computing over the past 40 years. The engineering team that powers SnappyData built GemFire, one of the industry's leading in-memory data grids, used worldwide in mission-critical applications ranging from finance to retail.
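The posting above mentions probabilistic and synopsis data structures as the basis for approximate query processing. One classic example of the genre is the count-min sketch, which estimates item frequencies in sub-linear space; the Python sketch below is illustrative only and is not SnappyData's implementation (theirs lives in the JVM alongside Spark).

```python
import hashlib

class CountMinSketch:
    """Approximate frequency counts in O(width * depth) space.
    Estimates never under-count; hash collisions can only over-count."""

    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, item):
        # One independent-ish hash per row, derived from a salted SHA-256.
        for row in range(self.depth):
            digest = hashlib.sha256(f"{row}:{item}".encode()).digest()
            yield row, int.from_bytes(digest[:8], "big") % self.width

    def add(self, item, count=1):
        for row, col in self._buckets(item):
            self.table[row][col] += count

    def estimate(self, item):
        # The minimum across rows is the least-collided, tightest estimate.
        return min(self.table[row][col] for row, col in self._buckets(item))
```

Trading exactness for bounded memory like this is what lets a synopsis store answer frequency queries over streams far larger than RAM.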
Your Part:
Communication
• Drive discussions to create/improve product, process and technology
Coding
• Think tools; create tools
• Build abstractions and contracts with separation of concerns for a larger scope
• Rapid prototyping
• Robust and scalable web-based applications
Design
• Do high-level design with guidance; functional modelling, break-down of a module
• Think platforms & reuse
Architecture
• Make incremental changes to architecture, with impact analysis of the same
• Do performance tuning and improvements in large-scale distributed systems
Org Development
• Mentor young minds and foster team spirit
Your Array (Nice to Haves):
• The farsightedness it takes to look at business problems critically from more than one perspective
• The capability to craft object-oriented models and design data structures; implement business logic and data models with suitable class design
• Ability to break down larger/fuzzier problems into smaller ones within the scope of the product
• Sound soft skills to gel with colleagues from other teams in order to harness the development process
• An understanding of the industry's coding standards and an ability to create appropriate technical documentation
You (Must Haves):
• Extensive, expert programming experience in at least one programming language (strong OO skills preferred)
• Deep experience in at least one object-oriented programming language (Java/C/C++, Ruby, Clojure, Scala) and SQL
• A solid foundation in computer science, with strong competencies in data structures, algorithms, and software design
• A penchant for solving complex and interesting problems
• Experience in a startup-like environment with high levels of ownership and commitment
• BTech, MTech, or PhD in Computer Science or a related technical discipline (or equivalent)
• Excellent coding skills – able to convert design into code fluently
• Good skills in writing unit & integration tests with reasonable coverage of code & interfaces – TDD is a plus
• Experience building highly scalable business applications that involve implementing large, complex business flows and dealing with huge amounts of data
• Experience with multi-threading and concurrency programming
• Exposure to the art of writing code and solving problems at large scale
Your Cheers!
Apart from all the general benefits of best-in-industry compensation, equity, healthcare, etc., Flipkart prides itself on offering great work, great people and a great environment. We call ourselves an incubator for engineers, where you get all the optimal conditions to do and experience your best.
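The must-haves above include multi-threading and concurrency programming. A minimal illustration of the core issue (purely a sketch; the names are invented): unsynchronised read-modify-write increments from several threads can lose updates, and a lock makes the operation atomic.

```python
import threading

class SafeCounter:
    """A counter that can be incremented from many threads without losing updates."""

    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self, n=1):
        with self._lock:  # the read-modify-write below must be atomic
            self.value += n

def _hammer(counter, times):
    for _ in range(times):
        counter.increment()

def run_demo(threads=8, times=10_000):
    counter = SafeCounter()
    workers = [threading.Thread(target=_hammer, args=(counter, times))
               for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter.value  # equals threads * times only because of the lock
```

Removing the lock makes the final count nondeterministic under a free-threaded runtime; the same reasoning carries over to `synchronized`/`Atomic*` in Java-style stacks.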
Are you seeking a fast-paced working environment, rapid development, and more autonomy, with a transparent and open culture that thrives on hustle?
COMPANY PROFILE - Manthan (www.manthan.com)
Manthan serves as the Chief Analytics Officer for global consumer industries. With its portfolio of analytics products and solutions, Manthan helps its customers derive competitive advantage through data-driven decisions. Manthan's solutions are architected with deep industry specificity, bringing together analytics, technology and industry practices to deliver sophisticated yet intuitive analytical capability. Headquartered in Bangalore with offices in the US, UK, Singapore and Brazil, Manthan's client footprint spans 20 countries. Manthan serves the analytics needs of the Retail, CPG, Pharma and Market Research industries. Manthan is venture-funded, with Fidelity Private Equity, Norwest Venture Partners and Temasek on its board.