InfoVision Labs is a global IT Services and Solutions company with expertise in Mobile App Solutions, Digital Solutions, and Web and Responsive Web Solutions. InfoVision Labs is a software development company providing IT software services in Pune, Maharashtra, India.
Experience: Minimum of 3 years of relevant development experience
Qualification: BS in Computer Science or equivalent
Skills Required:
• Server-side developers with strong server-side development experience in Java and/or Python
• Exposure to data platforms (Cassandra, Spark, Kafka) is a plus
• Interest in machine learning is a plus
• Good-to-great problem-solving and communication skills
• Ability to deliver in an extremely fast-paced development environment
• Ability to handle ambiguity
• Should be a good team player
Job Responsibilities:
• Learn the technology area where you are going to work
• Develop bug-free, unit-tested, and well-documented code as per requirements
• Stringently adhere to delivery timelines
• Provide mentoring support to Software Engineers and/or Associate Software Engineers
• Any other duties as specified by the reporting authority
Data Scientist - We are looking for a candidate to build great recommendation engines and power an intelligent m.Paani user journey.
Responsibilities:
- Data mining using methods like associations, correlations, inferences, clustering, graph analysis, etc.
- Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
- Design and implement machine learning, information extraction, and probabilistic matching algorithms and models
- Care about designing the full machine learning pipeline
- Extend the company's data with 3rd-party sources
- Enhance data collection procedures
- Process, clean, and verify collected data
- Perform ad hoc analysis of the data and present clear results
- Create advanced analytics products that provide actionable insights
The Individual: We are looking for a candidate with the following skills, experience, and attributes:
Required:
- 2+ years of work experience in machine learning
- Educational qualification relevant to the role: a degree in Statistics, certificate courses in Big Data, Machine Learning, etc.
- Knowledge of machine learning techniques and algorithms
- Knowledge of languages and toolkits like Python, R, and NumPy
- Knowledge of data visualization tools like D3.js and ggplot2
- Knowledge of query languages like SQL, Hive, and Pig
- Familiarity with Big Data architecture and tools like Hadoop, Spark, and MapReduce
- Familiarity with NoSQL databases like MongoDB, Cassandra, and HBase
- Good applied statistics skills: distributions, statistical testing, regression, etc.
Compensation & Logistics: This is a full-time opportunity. Compensation will be in line with startup norms and will be based on qualifications and experience. The position is based in Mumbai, India, and the candidate must live in Mumbai or be willing to relocate.
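The posting above lists clustering among the data-mining methods expected of the role. As a rough illustration of that technique (not code from m.Paani - all names here are made up, and real work at this scale would use scikit-learn or Spark MLlib), here is a minimal 1-D k-means using only the standard library:

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Cluster 1-D points into k groups by iteratively re-fitting means."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two well-separated groups around 1 and 10; the fitted centers
# converge near those values.
data = [0.9, 1.0, 1.1, 9.9, 10.0, 10.1]
print(kmeans_1d(data, 2))
```

The same assign-then-update loop is what library implementations run, just vectorized and in higher dimensions.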
The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.
Key Responsibilities:
- Create a GRAND Data Lake and Warehouse that pools the data from GRAND's different regions and stores in the GCC
- Ensure source data quality measurement, enrichment, and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure
Skills Needed:
- Very strong SQL. Demonstrated experience with RDBMSs (e.g., Postgres) and NoSQL stores (e.g., MongoDB); Unix shell scripting preferred
- Experience with UNIX and comfort working with the shell (bash preferred) and with cron scheduling
- Good understanding of data warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Align with the systems engineering team to propose and deploy the new hardware and software environments required for Hadoop, and to expand existing environments
- Work with data delivery teams to set up new Hadoop users; this includes setting up Linux users and setting up and testing HDFS, Hive, Pig, and MapReduce access for the new users
- Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios, and Cloudera Manager Enterprise
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screen Hadoop cluster job performance and do capacity planning
- Monitor Hadoop cluster connectivity and security
- File system management and monitoring
- HDFS support and maintenance
- Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required
- Define, develop, document, and maintain Hive-based ETL mappings and scripts
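The responsibilities above combine ETL routines with source data quality measurement. As a toy sketch of that combination (not the posting's actual stack - sqlite3 stands in for the real DWH, and the table, columns, and quality rule are illustrative): extract rows from a CSV feed, drop rows failing a quality check, and load the rest into a warehouse table.

```python
import csv
import io
import sqlite3

RAW = """store,region,sales
S01,GCC, 1200
S02,GCC,950
S03,,400
"""

def run_etl(raw_csv, conn):
    """Extract CSV rows, apply a cleansing rule, load into the DWH table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (store TEXT, region TEXT, amount REAL)")
    loaded = 0
    for row in csv.DictReader(io.StringIO(raw_csv)):
        # Data-quality rule: reject rows with a missing region.
        if not row["region"].strip():
            continue
        conn.execute("INSERT INTO sales VALUES (?, ?, ?)",
                     (row["store"], row["region"].strip(), float(row["sales"])))
        loaded += 1
    conn.commit()
    return loaded

conn = sqlite3.connect(":memory:")
print(run_etl(RAW, conn))                                           # 2
print(conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0])  # 2150.0
```

In a real Hive-based pipeline the same shape holds: a staging read, a quality filter that is measured and reported, then an insert into the warehouse table.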
It is one of the largest communication technology companies in the world. It operates America's largest 4G LTE wireless network and the nation's premier all-fiber broadband network.
URGENT! My client is looking for a Data Scientist with an M.Tech or PhD from a Tier 1 institute (such as IIT/IISc), a minimum of 2-3 years of experience, and skills in R, Python, and machine learning. The position is with a very successful product development startup in the field of Artificial Intelligence and Big Data Analytics, based in Gurgaon. Send your resume to email@example.com
Woovly, an early-stage startup, is about awakening an individual's interests, hobbies, and bucket lists. We at Woovly believe that every individual has a passion for some activity that, when pursued and accomplished, brings immense happiness. Woovly connects all such individuals based on their common passions. We are in the final stage of building an online platform that enables social networking based on common interests.
We're looking for an experienced Data Engineer with strong cloud technology experience to join our team and help our big data team take our products to the next level. This is a hands-on role: you will be required to code and develop the product in addition to your leadership role. You need a strong software development background and a love of working with cutting-edge big data platforms. You are expected to bring extensive hands-on experience with Amazon Web Services (Kinesis Streams, EMR, Redshift), Spark, and other Big Data processing frameworks and technologies, as well as advanced knowledge of RDBMS and Data Warehousing solutions.
REQUIREMENTS
Strong background working on large-scale Data Warehousing and Data processing solutions.
Strong Python and Spark programming experience.
Strong experience in building big data pipelines.
Very strong SQL skills are an absolute must.
Good knowledge of OO, functional, and procedural programming paradigms.
Strong understanding of various design patterns.
Strong understanding of data structures and algorithms.
Strong experience with Linux operating systems.
2+ years of experience working as a software developer in a data-driven environment.
Experience working in an agile environment.
Lots of passion, motivation, and drive to succeed!
Highly desirable
Understanding of agile principles, specifically Scrum.
Exposure to Google Cloud Platform services such as BigQuery, Compute Engine, etc.
Docker, Puppet, Ansible, etc.
Understanding of the digital marketing and digital advertising space would be advantageous.
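The "big data pipelines" this posting asks for follow the same map/aggregate shape at any scale. As a toy sketch (plain-Python generators standing in for a Spark or Kinesis job - the event data and function names are made up): tokenize a stream of records, then aggregate counts.

```python
from collections import Counter

def extract(lines):
    """Map step: split each incoming record into tokens, lazily."""
    for line in lines:
        yield from line.lower().split()

def count(tokens):
    """Reduce step: aggregate token frequencies."""
    return Counter(tokens)

events = ["click home", "click cart", "view home"]
totals = count(extract(events))
print(totals["click"], totals["home"], totals["cart"])
```

Because `extract` is a generator, records stream through one at a time rather than being materialized; a distributed framework adds partitioning and a shuffle between the two steps, but the logic is the same.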
Position: Data Scientist
Location: Gurgaon
Job description
Shopclues is looking for a talented Data Scientist passionate about building large-scale data processing systems to help manage the ever-growing information needs of our clients.
Education: PhD/MS or equivalent in applied mathematics, statistics, physics, computer science, or operations research. 2+ years of experience in a relevant role.
Skills
· Passion for understanding business problems and addressing them by leveraging data characterized by high volume and high dimensionality from multiple sources
· Ability to communicate complex models and analysis in a clear and precise manner
· Experience building predictive statistical, behavioural, or other models via supervised and unsupervised machine learning, statistical analysis, and other predictive modeling techniques
· Experience using R, SAS, Matlab, or equivalent statistical/data analysis tools, and the ability to transfer that knowledge to different tools
· Experience with matrices, distributions, and probability
· Familiarity with at least one scripting language - Python/Ruby
· Proficiency with relational databases and SQL
Responsibilities
· Work in a big data environment alongside a big data engineering team (as well as a data visualization team and data and business analysts)
· Translate clients' business requirements into a set of analytical models
· Perform data analysis (with a representative sample data slice) and build/prototype the model(s)
· Provide inputs to the data ingestion/engineering team on the input data required by the model: size, format, associations, and cleansing required
Zilingo is a fashion and lifestyle marketplace backed by Sequoia Capital, Venturra Capital, and other international investors. Zilingo has taken on the complex challenge of aggregating long-tail sellers in Thailand, Singapore, and Indonesia, giving them a chance to grow their business across the South-East Asian region. Customers love the dynamic and intuitive experience that Zilingo brings them across web, Android, and iOS.
Job Description
Zilingo is an exciting, Sequoia-backed marketplace connecting sellers and consumers in South East Asia to each other, across national, language, and currency boundaries. Zilingo's wholly in-house platform is key to enabling small sellers to sell unique products to consumers across the region. Zilingo's backend platform is a network of microservices written in Scala on top of the Play framework and Akka. As a backend developer at Zilingo, you'll get to develop many of our core libraries, which help us build better-scaling services faster; work on difficult problems, including recommendations, search, and fulfillment; and do DevOps. We have a passionate engineering culture that encourages solving difficult problems and engaging closely with unique challenges to add value to hundreds of thousands of customers and sellers. We encourage ownership and innovation, and we love to work as a team.
ABOUT US
Plivo is among the leading service providers in the CPaaS market, which is estimated to grow to a whopping 8 billion dollars by 2019. Plivo started in 2011 and has been backed by investors such as Andreessen Horowitz, who were also early-stage investors in companies such as Facebook, Google, and Airbnb. Plivo is also part of Y Combinator, one of the most sought-after incubators in the Valley, and is now profitable as well. Plivo has a team of about 75 members spread between its US and India offices. Thousands of businesses from around the globe trust Plivo with their voice and messaging needs, including helping them manage their customer interactions. We are looking for someone who is excited to grow with us and be part of a company that is disrupting a multibillion-dollar telecom space. We are dedicated to simplifying and disrupting the multi-billion-dollar telecom market. Our cloud-powered Voice and SMS APIs allow businesses to build communication applications that are scalable, low cost, and global. Thousands of well-known businesses are already built using Plivo, including popular conferencing solutions, mobile communication apps, SMS marketing software, and business phone solutions, and this is just the beginning. We are looking for a talented and driven product engineer to join our team.
WHAT STACK WE USE
Golang, Django, Python, Flask, Redis, Postgres, Celery, Nginx, Kamailio, FreeSWITCH, SIP, React, WebRTC, Linux, Android, iOS.
WHAT TECHNOLOGIES WE WORK ON
Networking, Distributed Systems, Big Data, Least Cost Routing, Billing, Invoicing, Analytics, Fraud Detection & Prevention, VoIP Protocols, SMS Protocols, Cloud Infra, Web and Mobile Platforms, Microservices
ROLES & RESPONSIBILITIES
Own, design, and implement cloud-based solutions that are used globally.
Architect solutions for high availability, low latency, and high throughput for a rapidly growing business.
Evaluate technology stacks for API platforms that scale to more than 100,000 transactions per second.
Be acknowledged as the standard setter and bar raiser for coding principles and standards.
Work with the open source community to improve existing libraries.
Develop reusable tools/libraries.
Mentor junior developers technically.
Be the go-to person from a technology perspective, both internally and external to the team.
SKILLS REQUIRED
Proficiency in at least one OO language.
Writing high-performance, reliable, and maintainable code.
Ability to define core contracts and bring them to closure through collaboration.
Good knowledge of database structures, theories, principles, and practices.
Very good analytical and problem-solving skills.
Experience in the telecom domain is a plus.
Demonstrated potential in the points under "Roles & Responsibilities".
JOB PERKS
Informal work style; startup culture with flexible work hours.
Endless snacks and beverages.
Free gym membership.
Competitive salary and medical benefits.