Responsibilities (3-5 years of experience)
• Build a strong, scalable crawler system to leverage external user and content data sources from Facebook, YouTube, and other internet products and services.
• Extract top trending keywords and topics from social media.
• Independently design and build the initial version of a real-time analytics product that uses Machine Learning models to recommend video content in real time to 10M+ user profiles.
• Architect and build Big Data infrastructure using Java, Kafka, Storm, Hadoop, Spark, and other related frameworks; experience with Elasticsearch is a plus.
• Excellent analytical, research, and problem-solving skills, with in-depth knowledge of data structures.
Desired Skills and Experience
• B.S./M.S. degree in computer science, mathematics, statistics, or a similar quantitative field from a strong college.
• 3+ years of work experience in a relevant role (Data Engineer, R&D Engineer, etc.).
• Experience with Machine Learning and prediction and recommendation techniques.
• Experience with Hadoop/MapReduce/Elastic Stack (ELK) and Big Data querying tools such as Pig, Hive, and Impala.
• Proficiency in a major programming language (e.g. Java/C/Scala) and/or a scripting language (Python).
• Experience with one or more NoSQL databases, such as MongoDB, Cassandra, HBase, Hive, Vertica, or Elasticsearch.
• Experience with cloud solutions/AWS; strong knowledge of Linux and Apache.
• Experience with a MapReduce framework such as Spark/EMR.
• Experience building reports and/or data visualizations.
• Strong communication skills and the ability to discuss the product with PMs and business owners.
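For illustration only, here is a minimal Python sketch of the trending-keyword extraction mentioned above: it counts the most frequent non-stopword tokens across a batch of post texts. The stopword list and sample posts are placeholders; a real pipeline would pull data through the platforms' APIs and run at far larger scale.

```python
# Naive trending-keyword count over a batch of social-media post texts.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "for"}

def trending_keywords(posts, top_n=10):
    """Return the top_n most frequent non-stopword tokens across posts."""
    counts = Counter()
    for text in posts:
        tokens = re.findall(r"[a-z0-9']+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return counts.most_common(top_n)

if __name__ == "__main__":
    sample = [
        "New phone launch trending today",
        "Everyone is talking about the phone launch",
    ]
    print(trending_keywords(sample, top_n=3))
```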
• You will be part of the EdGE engineering team and lead the system administration team to manage and maintain the infrastructure.
• At EdGE, things move quickly. You need to adapt by managing priorities and improving work efficiency (such as automating repetitive work) while taking the initiative and getting things done.
• You will work closely with the software development team to understand the overall system inside out and come up with solutions such as server tuning, application tuning, monitoring metrics, and other interesting work.
Do apply if you meet most of the following requirements:
• Proficiency in building Linux systems (Ubuntu, Debian, CentOS, etc.)
• Experience with cloud/virtualization technology (AWS, Rackspace, OpenStack, VMware, etc.)
• SCM provisioning/orchestration tools (Chef, Puppet, Docker, etc.)
• Experience with system and application monitoring tools (New Relic, Nagios, Logstash, etc.)
• Expertise in more than one scripting/programming language (Bash, Python, Java, Ruby, etc.)
• Well-above-average written and verbal communication skills
• 4+ years of professional experience in a DevOps capacity
Good to have:
• Database administration (preferably NoSQL) experience (optimization, replication, backups, etc.)
• Knowledge of key-value stores, caching, search, and messaging queues (Elasticsearch, RabbitMQ, Memcached, Redis, etc.)
• Knowledge of CI tools like Jenkins and working knowledge of Git
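As a rough illustration of the "automating repetitive work" and monitoring aspects of this role, here is a minimal Python sketch that probes a list of service endpoints and reports status and latency. The URLs are placeholders, not real EdGE services.

```python
# Probe a list of HTTP endpoints and print status and latency for each.
import time
import urllib.request
import urllib.error

ENDPOINTS = [
    "http://localhost:8080/health",            # hypothetical app server
    "http://localhost:9200/_cluster/health",   # hypothetical Elasticsearch node
]

def check(url, timeout=5):
    """Return (status, latency_seconds) on success, or (None, error) on failure."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status, time.monotonic() - start
    except OSError as exc:
        return None, str(exc)

if __name__ == "__main__":
    for url in ENDPOINTS:
        status, detail = check(url)
        print(f"{url}: status={status} detail={detail}")
```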
Our client is a Gurgaon-based software startup in the Artificial Intelligence, Big Data, and Data Science domain. Our client has built a virtual data scientist: an Artificial Intelligence powered agent that can learn and work 24x7 to deliver the business insights that matter most.
● Work on a unique concept.
● Recognized by the Indian Angel Network (IAN), the biggest angel network in India, along with DIPP (Govt. of India) and NASSCOM.
● Winner of $120K in credits as part of the Microsoft BizSpark Plus program.
● Raised two professional rounds of funding.
● Alumni of premier institutes (such as IIT Bombay and IIT Delhi) on the advisory panel.
● The current hiring is for core team expansion; the team will stay under 10 people, so the candidate will be part of the core founding team and will get tremendous exposure. The core founding team focuses on invention and gets significant opportunities to file patents.
● Working days: Monday to Saturday.
● One weekday off per month at the employee's choice, on top of earned/privilege leave and bank holidays.
● Location: Gurgaon
● Line management: reporting directly to the CxO team.
Position: Sr. Product Developer
Job Description: The Sr. Product Developer will be part of the client's Lab. As a Sr. Product Developer, the candidate will work very closely with the Product Management, AI Research, and Data Scientist teams. Key responsibilities include:
● Design and development of the product on a Big Data architecture and framework - Apache Spark, HDFS, Flume, Kafka with Scala/Java.
● Development of Machine Learning algorithms in the Apache Spark framework.
● Design and development of integration connectors with external data sources such as RDBMSs (MySQL, Oracle, etc.) and other products.
● Lead, mentor, and coach team members.
Skills
● 4+ years of product development experience in Scala/Java.
● Must possess in-depth knowledge of core design/architectural concerns such as design patterns, performance, code reusability, and quality.
● Should have a good understanding of RDBMSs and EA diagrams.
● Experience developing (or upgrading) Apache Spark libraries, and contributions to Apache Spark or other open source frameworks, is an added advantage.
● Understanding of data security or statistics (e.g. probability distributions) is an added advantage.
● The ability to take initiative, self-motivation, and a learning attitude are a must.
Experience & Qualification Required: B.Tech from Tier 1.5 (NIT/IIIT/DCE) with 4+ years of experience.
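Purely as an illustration of the "integration connectors with external data sources" responsibility, here is a minimal PySpark sketch that pulls a table from an RDBMS such as MySQL via JDBC (the role itself calls for Scala/Java). The host, database, table name, and credentials are placeholders.

```python
# Read an RDBMS table into Spark over JDBC and run a simple aggregation.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("rdbms-connector-sketch")
    .getOrCreate()
)

# Requires the MySQL JDBC driver on the Spark classpath (e.g. via --jars).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://db-host:3306/sales")
    .option("dbtable", "orders")
    .option("user", "reader")
    .option("password", "secret")
    .load()
)

orders.groupBy("status").count().show()
```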
Looking for a technically sound, excellent trainer in Big Data technologies. This is an opportunity to gain visibility and build a reputation in the industry. Host regular sessions on Big Data related technologies and get paid to learn.
Responsibilities
• Leverage your Java/Python/Node/Go skills in coding and architecting; Hadoop is a plus.
• Design and build REST APIs for the mobile and web apps.
• Architect crawler/data-analysis/notification engines; if needed, video streaming architecture will be a challenge!
• Take individual ownership of a project from start to finish, delivering on time with high-quality, thoroughly tested code.
• Lead the backend team on feature delivery and cooperate with the apps team.
• Data security, DevOps, DBMS, and scaling.
What Are We Looking For In You
• 5+ years of experience building backend applications, web apps, and analytics; a solid foundation in computer science with strong competencies in data structures, algorithms, and software design.
• Extensive experience building large-scale server applications and systems using SOA principles/microservices.
• Experience with Elasticsearch preferred.
• Experience with Thrift/Protocol Buffers is a plus.
• Experience with test-driven application development.
• Experience building and managing API endpoints for multiple client types.
• Strong grasp of backend architectures and applications.
• Enthusiasm to learn and contribute in a challenging and fun-filled startup.
• A knack for problem solving and adherence to efficient coding practices.
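To illustrate the "REST API for the mobile and web apps" responsibility, here is a minimal sketch using Flask; the framework choice, resource names, and in-memory store are assumptions for demonstration only, not part of the posting.

```python
# Minimal REST endpoints for reading and creating "video" resources.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-in for a real datastore.
VIDEOS = {1: {"id": 1, "title": "Intro clip"}}

@app.get("/api/videos/<int:video_id>")
def get_video(video_id):
    video = VIDEOS.get(video_id)
    if video is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(video)

@app.post("/api/videos")
def create_video():
    payload = request.get_json(force=True)
    new_id = max(VIDEOS) + 1
    VIDEOS[new_id] = {"id": new_id, "title": payload.get("title", "")}
    return jsonify(VIDEOS[new_id]), 201

if __name__ == "__main__":
    app.run(port=8000)
```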
Desired Skills and Experience
• Strong platform/infrastructure automation expertise or DevOps experience.
• Experience with cloud platforms – Amazon AWS, CloudStack, OpenStack – would be a huge advantage.
• Experience with automation technologies (Puppet, Chef, Ansible) is highly desirable.
• Experience with Git/GitHub, Gerrit/ReviewBoard/Phabricator, Jenkins/Hudson, and/or other build and continuous integration systems is highly desirable.
• Experience with scripting languages – Bash, Python, Ruby, PowerShell.
• Ability to build, create, version control, and deploy software repositories.
• Experience in proposal writing, presales, architecture, and design/coding is a must.
• Strong problem solving and debugging skills.
• Open source contributions in the cloud domain will be a huge plus.
• Experience with virtualization technologies – VMware, XenServer, Microsoft Hyper-V.
• Prior experience with server automation products – BMC BladeLogic, Microsoft System Center Configuration Manager – would be a huge plus.
• Knowledge of ITIL is an added bonus.
Sigmoid is a fast growing product-based BIG DATA startup, Sequoia funded and backed by experienced professionals and advisors. Sigmoid is revolutionizing business intelligence and analytics by providing unified tools for historical and real-time analysis on Apache Spark. With its suite of products, Sigmoid is democratizing streaming use cases such as RTB data analytics, log analytics, fraud detection, and sensor data analytics. Sigmoid can enable a customer's engineering team to set up their infrastructure on Spark and ramp up their development timelines, or enable the analytics team to derive insights from their data. Sigmoid has created a real-time exploratory analytics tool built on Apache Spark which not only vastly improves performance but also reduces cost. A user can quickly analyse huge volumes of data, filter through multiple dimensions, compare results across time periods, and carry out root cause analysis in a matter of seconds. Leading organisations across industry verticals are currently using Sigmoid's platform in production to create success stories.
------------------------------------
What Sigmoid offers you:
Work in a well-funded (Sequoia Capital) Big Data company.
Deal with terabytes of data on a regular basis.
Opportunity to contribute to top big data projects.
Work on complex problems faced by leading global companies in multiple areas such as fraud detection, real-time analytics, and pricing modeling.
------------------------------------
We are looking for someone who has:
6+ years of demonstrable experience designing technological solutions to complex data problems and developing efficient, scalable code.
Experience in the design, architecture, and development of Big Data technologies.
Provides technical leadership in the Big Data space (Apache Spark, Kafka, Flink, Hadoop, MapReduce, HDFS, Hive, HBase, Flume, Sqoop, NoSQL, Cassandra).
Strong understanding of databases and SQL.
Defines and drives best practices in the Big Data stack.
Drives operational excellence through root cause analysis and continuous improvement for Big Data technologies and processes.
Operating knowledge of cloud computing platforms (AWS and/or Azure or Google Cloud).
Mentors/coaches engineers to facilitate their development and provides technical leadership to them.
A technologist who loves to code and design, with great problem-solving skills and the ability and confidence to hack their way out of tight corners.
------------------------------------
Preferred Qualifications:
Engineering Bachelors/Masters in Computer Science/IT. Top tier colleges (IIT, NIT, IIIT, etc.) will be preferred. Salary is not a constraint for the right talent.
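As a rough sketch of the streaming use cases Sigmoid describes (e.g. log analytics), here is a minimal PySpark Structured Streaming job that counts log lines per level from a Kafka topic. The broker address and topic name are placeholders, and the Spark-Kafka connector package must be supplied separately (e.g. via --packages).

```python
# Count log lines per level (ERROR/WARN/INFO) from a Kafka topic, in streaming mode.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("log-analytics-sketch").getOrCreate()

logs = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "app-logs")
    .load()
)

# Assume each Kafka message value is a plain-text log line.
counts = (
    logs.selectExpr("CAST(value AS STRING) AS line")
    .withColumn("level", F.regexp_extract("line", r"(ERROR|WARN|INFO)", 1))
    .groupBy("level")
    .count()
)

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```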
Experience: Minimum of 3 years of relevant development experience
Qualification: BS in Computer Science or equivalent
Skills Required:
• Server-side developers with good server-side development experience in Java and/or Python
• Exposure to data platforms (Cassandra, Spark, Kafka) will be a plus
• Interest in Machine Learning will be a plus
• Good to great problem solving and communication skills
• Ability to deliver in an extremely fast-paced development environment
• Ability to handle ambiguity
• Should be a good team player
Job Responsibilities:
• Learn the technology area where you are going to work
• Develop bug-free, unit-tested, and well-documented code as per requirements
• Stringently adhere to delivery timelines
• Provide mentoring support to Software Engineers and/or Associate Software Engineers
• Any other responsibilities as specified by the reporting authority
If you want to work in a company that is changing life as you know it, then this is the place to be. We are creating Artificial Intelligence (AI) based agents that allow machines, businesses, and customers to communicate with each other instantly. We are currently looking for a DevOps Engineer to work from our Bangalore location. Below is the detailed requirement.
Requirement: The candidate should have 2-5 years of experience in:
1. Deploying and managing multiple servers
2. Hands-on experience managing DB technologies like MongoDB/Redis/Elasticsearch
3. Containerization, ideally using Docker and Kubernetes
4. Working with real-time streaming technologies like Kafka, Kinesis, etc.
5. Experience with Big Data technologies like Hadoop, HDFS, Spark, etc. is preferred
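For illustration of the containerization side of this role, here is a minimal sketch using the Docker SDK for Python (pip install docker) to start, inspect, and clean up a Redis container. The image tag, container name, and port mapping are arbitrary examples, not part of the posting.

```python
# Start a Redis container, list running containers, then clean up.
import docker

client = docker.from_env()

# Run Redis in the background, mapping its default port to the host.
container = client.containers.run(
    "redis:7", detach=True, ports={"6379/tcp": 6379}, name="redis-sketch"
)
print(container.name, container.status)

# List currently running containers and their image tags.
for c in client.containers.list():
    print(c.name, c.image.tags)

container.stop()
container.remove()
```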
Full Stack Developer for integrating Deep Learning applications with the web.