Experience: Minimum of 3 years of relevant development experience. Qualification: BS in Computer Science or equivalent. Skills Required: • Good server-side development experience in Java and/or Python • Exposure to data platforms (Cassandra, Spark, Kafka) is a plus • Interest in Machine Learning is a plus • Good-to-great problem-solving and communication skills • Ability to deliver in an extremely fast-paced development environment • Ability to handle ambiguity • Should be a good team player. Job Responsibilities: • Learn the technology area where you are going to work • Develop bug-free, unit-tested and well-documented code as per requirements • Stringently adhere to delivery timelines • Provide mentoring support to Software Engineers and/or Associate Software Engineers • Any other responsibilities as specified by the reporting authority
Data Scientist - We are looking for a candidate to build great recommendation engines and power an intelligent m.Paani user journey. Responsibilities: - Data mining using methods like associations, correlations, inference, clustering, graph analysis etc. - Scaling the machine learning algorithms that power our platform to support our growing customer base and increasing data volume - Designing and implementing machine learning, information extraction and probabilistic matching algorithms and models - Caring about designing the full machine learning pipeline - Extending the company's data with 3rd-party sources - Enhancing data collection procedures - Processing, cleaning and verifying collected data - Performing ad hoc analysis of the data and presenting clear results - Creating advanced analytics products that provide actionable insights. The Individual: We are looking for a candidate with the following skills, experience and attributes. Required: - 2+ years of work experience in machine learning - Educational qualification relevant to the role: a degree in Statistics, certificate courses in Big Data, Machine Learning etc. - Knowledge of machine learning techniques and algorithms - Knowledge of languages and toolkits like Python, R, NumPy - Knowledge of data visualization tools like D3.js, ggplot2 - Knowledge of query languages like SQL, Hive, Pig - Familiarity with Big Data architecture and tools like Hadoop, Spark, MapReduce - Familiarity with NoSQL databases like MongoDB, Cassandra, HBase - Good applied statistics skills: distributions, statistical testing, regression etc. Compensation & Logistics: This is a full-time opportunity. Compensation will be in line with startup norms and will be based on qualifications and experience. The position is based in Mumbai, India, and the candidate must live in Mumbai or be willing to relocate.
The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include the development of Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake. Key Responsibilities: - Create a GRAND Data Lake and Warehouse which pools all the data from GRAND's different regions and stores in the GCC - Ensure source data quality measurement, enrichment and reporting of data quality - Manage all ETL and data model update routines - Integrate new data sources into the DWH - Manage the DWH cloud (AWS/Azure/Google) and infrastructure. Skills Needed: - Very strong in SQL; demonstrated experience with RDBMSs (e.g., Postgres) and NoSQL stores (e.g., MongoDB); Unix shell scripting preferred - Experience with UNIX and comfortable working with the shell (bash or ksh preferred) - Good understanding of data warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce - Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments - Working with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users - Cluster maintenance as well as creation and removal of nodes using tools like Ganglia, Nagios and Cloudera Manager Enterprise - Performance tuning of Hadoop clusters and Hadoop MapReduce routines - Screening Hadoop cluster job performance and capacity planning - Monitoring Hadoop cluster connectivity and security - File system management and monitoring - HDFS support and maintenance.
- Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades when required - Defining, developing, documenting and maintaining Hive-based ETL mappings and scripts
We are an analytics startup with a base of 70+ customers.
It is one of the largest communication technology companies in the world. They operate America's largest 4G LTE wireless network and the nation's premier all-fiber broadband network.
URGENT! My client is looking for a Data Scientist: M.Tech/PhD from Tier 1 institutes (such as IIT/IISc), min. 2-3 years of experience, with skills in R, Python and Machine Learning. The position is with a very successful product development startup in the field of Artificial Intelligence and Big Data Analytics, based in Gurgaon. Send in your resumes at firstname.lastname@example.org
Woovly, an early-stage startup, is about awakening the interests, hobbies, and bucket lists of an individual. We at Woovly believe that every individual has a passion for some activity which, when pursued and accomplished, gives them immense happiness. Woovly connects all such individuals based on their common passions. We are in the final stage of building the online platform that enables social networking based on common interests.
We’re looking for an experienced Data Engineer with strong cloud technology experience to be part of our team and help our big data team take our products to the next level. This is a hands-on role: you will be required to code and develop the product in addition to your leadership role. You need to have a strong software development background and love to work with cutting-edge big data platforms. You are expected to bring extensive hands-on experience with Amazon Web Services (Kinesis streams, EMR, Redshift), Spark and other Big Data processing frameworks and technologies, as well as advanced knowledge of RDBMS and Data Warehousing solutions. REQUIREMENTS Strong background working on large-scale Data Warehousing and Data processing solutions. Strong Python and Spark programming experience. Strong experience in building big data pipelines. Very strong SQL skills are an absolute must. Good knowledge of OO, functional and procedural programming paradigms. Strong understanding of various design patterns. Strong understanding of data structures and algorithms. Strong experience with Linux operating systems. At least 2+ years of experience working as a software developer or in a data-driven environment. Experience working in an agile environment. Lots of passion, motivation and drive to succeed! Highly desirable: Understanding of agile principles, specifically Scrum. Exposure to Google Cloud Platform services such as BigQuery, Compute Engine etc. Docker, Puppet, Ansible, etc. Understanding of the digital marketing and digital advertising space would be advantageous.
Position: Data Scientist. Location: Gurgaon. Job description: Shopclues is looking for a talented Data Scientist passionate about building large-scale data processing systems to help manage the ever-growing information needs of our clients. Education: PhD/MS or equivalent in applied mathematics, statistics, physics, computer science or an operations research background; 2+ years' experience in a relevant role. Skills · Passion for understanding business problems and trying to address them by leveraging data characterized by high volume and high dimensionality from multiple sources · Ability to communicate complex models and analysis in a clear and precise manner · Experience with building predictive statistical, behavioural or other models via supervised and unsupervised machine learning, statistical analysis, and other predictive modeling techniques · Experience using R, SAS, Matlab or equivalent statistical/data analysis tools, and the ability to transfer that knowledge to different tools · Experience with matrices, distributions and probability · Familiarity with at least one scripting language - Python/Ruby · Proficiency with relational databases and SQL. Responsibilities · Work in a big data environment alongside a big data engineering team (and data visualization team, data and business analysts) · Translate the client's business requirements into a set of analytical models · Perform data analysis (with a representative sample data slice) and build/prototype the model(s) · Provide inputs to the data ingestion/engineering team on the input data required by the model: size, format, associations, cleansing required
About the Company Homelii Technologies Pvt. Ltd. is a start-up venture in the field of next-generation residential and retail security solutions, promoted by Supreme Audiotronics Pvt. Ltd. and Mr. Jaswinder Matharu. About the Team Jaswinder (Manji) Matharu is a serial tech entrepreneur based in the US who has founded hugely successful companies like Omniglobe International, Agilis International, S3Net, Inc. and Fiducianet, Inc. https://goo.gl/rkiuNA Manmit Chaudhry is the Managing Director at Supreme Audiotronics Pvt. Ltd, a 60-year-old company in the field of car infotainment, car accessories & digital surveillance. www.supreme-india.com Product Offering Homelii offers comprehensive, affordable and robust security solutions for the retail and residential sectors. We have integrated intrusion detection with several value-added services such as car tracking, CCTV surveillance and insurance coverage on one single app-based platform, offered to the consumer through an economical monthly subscription model. Recruitment Standard At Homelii, we like to hire driven people who are passionate about our idea and can thrive in a challenging, fast-paced environment.
Qualifications / Skills • Graduate degree in software engineering, computer science, electronics engineering or related disciplines • Working knowledge of API interfaces, IPv6, IPv4, Wi-Fi, PLC & IoT • Familiarity with programming languages like Java & C++ and platforms like J2EE, MySQL, MongoDB & the Hadoop ecosystem • Thorough knowledge of big data systems, cloud computing architecture and the open source environment • Familiarity with logical programming and its implementation • Should have at least 3 years of relevant work experience Key Roles • Provide technical expertise for Homelii’s IoT solution • Act as the link between the hardware vendors and the software development team • Decide the technology stack, database and solution architecture • Understand communication protocols between devices and client-server architecture • Establish protocols for the CMS, CRM and ERP • Prepare and implement processes for internal operations
Skills Required: SQL, Python Data Science Stack (strongly preferred), Machine Learning and Statistics, Data Visualization, A/B Testing, Bandit Problems, Recommendation Systems, Reinforcement Learning. Roles & Responsibilities · Ability to work independently to apply technical skills to solve a business objective · Exploratory data analysis on large data sets involving visualization and basic statistical techniques to develop intuitions and identify key variables in data science projects · Data extraction and joining of data from multiple sources; SQL is a must; Python preferred · Strong familiarity and experience with machine learning algorithms; the Python data science stack is ideal (e.g. sklearn); regression, clustering, SVM, decision trees, DNNs, recommendation algorithms, etc. · Specific experience sought: real-world A/B testing, multi-armed bandit algorithms, causal inference · Ability to read and apply, with guidance, cutting-edge machine learning research papers to solve business problems
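Since the posting above calls out multi-armed bandit experience specifically, here is a minimal, hypothetical epsilon-greedy sketch; the arm rates and parameters are invented for illustration, and a production recommender would sit behind a proper experimentation framework:

```python
import random

def epsilon_greedy(true_rates, epsilon=0.1, steps=5000, seed=42):
    """Simulate an epsilon-greedy bandit over Bernoulli arms.

    true_rates: hypothetical click-through rates, one per arm.
    Returns (pull counts per arm, estimated value per arm).
    """
    rng = random.Random(seed)
    n = len(true_rates)
    counts = [0] * n      # pulls per arm
    values = [0.0] * n    # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore a random arm
        else:
            arm = max(range(n), key=values.__getitem__)  # exploit best estimate
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        # incremental mean update avoids storing the reward history
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

counts, values = epsilon_greedy([0.05, 0.12, 0.30])
```

With enough steps the estimated values converge toward the true rates and pulls concentrate on the best arm; in practice one would also weigh UCB or Thompson sampling.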
DeepIntent (www.deepintent.com) is a next-generation advertising technology company applying state of the art Artificial Intelligence to improve the way ads are bought and sold globally. As the only DSP offering deeply contextual campaign targeting of individual concepts and their related sentiments, DeepIntent offers advertisers a unique way to discover and dynamically message audiences across both the major exchanges and direct sold inventory. DeepIntent is pioneering a new era of understanding ad performance by user interests. In addition to higher yields, our publishers receive rich performance information on a per-concept, per-sentiment level, all in real-time and beautifully visualized on our UI.
We are a FinTech startup looking to build a full suite of products.
Bachelor’s or Master’s degree in computer science or software engineering. Experience with object-oriented design, coding and testing patterns, as well as experience engineering (commercial or open source) software platforms and large-scale data infrastructures. Ability to architect highly scalable distributed systems using different open source tools. Experience building high-performance algorithms. Extensive knowledge of different programming or scripting languages such as Python and Scala, plus Apache Spark. Experience with different databases (NoSQL or RDBMS) such as MongoDB, Google BigQuery, Cassandra, Elasticsearch, HBase and Impala, and with data pipelines. Experience building data processing systems with Hadoop and Hive using Python. Good exposure to AWS Lambda, Kinesis, EMR, Redshift and Kafka.
VIA.com is the biggest B2B travel company in Asia. We are looking for someone with good experience in data analysis in the online retail domain. SQL/Excel/Google Analytics skills are a must.
Responsibilities Developing intelligent and scalable engineering solutions from scratch. Working on high/low-level product designs & roadmaps along with a team of ace developers. Building products using bleeding-edge technologies using Ruby on Rails. Building innovative products for customers in Cloud, DevOps, Analytics, AI/ML and lots more.
We are looking for good Java Developers for our IoT platform. If you are looking for a change, please drop in your profile at www.fernlink.com
www.aaknet.co.in/careers/careers-at-aaknet.html You are extraordinary, a rock star who has hardly found a place to leverage or challenge your potential, and has not yet spotted a sky-rocketing opportunity? Come play with us - face the challenges we can throw at you; chances are you might be humiliated (positively); do not take it that seriously though! Please be informed, we rate CHARACTER and attitude as high as, if not higher than, your great skills, experience and sharpness. :) Best wishes & regards, Team Aak!
Full Stack Developer for Big Data Practice. Will include everything from architecture to ETL to model building to visualization.
Brief About the Company EdGE Networks Pvt. Ltd. is an innovative HR technology solutions provider focused on helping organizations meet their talent-related challenges. With our expertise in Artificial Intelligence, Semantic Analysis, Data Science, Machine Learning and Predictive Modelling, we enable HR organizations to lead with data and intelligence. Our solutions significantly improve workforce availability, billing and allocation, and drive straight bottom-line impacts. For more details, please log on to www.edgenetworks.in and www.hirealchemy.com. Do apply if you meet most of the following requirements: Very strong Python, Java or Scala experience, especially in open source, data-intensive, distributed environments. Work experience with libraries like scikit-learn, NumPy, SciPy, Cython. Expertise in Spark, MapReduce, Pig, Hive, Kafka, Storm, etc., including performance tuning. Implemented complex projects dealing with considerable data size and high complexity. Good understanding of algorithms, data structures, and performance optimization techniques. Excellent problem solver, analytical thinker, and a quick learner. Search capabilities such as ElasticSearch, with experience in MongoDB. Must have excellent written and verbal communication skills. Nice to have: Experience writing Spark and/or MapReduce V2 jobs. Ability to translate requirements and/or specifications into code that is relatively bug-free. Writing unit and integration tests. Knowledge of C++. Knowledge of Theano, TensorFlow, Caffe, Torch etc.
Do apply if any of this sounds familiar! o You have expertise in NLP, Machine Learning, Information Retrieval and Data Mining. o You have experience building systems based on machine learning and/or deep learning methods. o You have expertise in graphical models like HMMs, CRFs etc. o You are familiar with learning to rank, matrix factorization and recommendation systems. o You are familiar with the latest data science trends, tools and packages. o You have strong technical and programming skills and are familiar with relevant technologies and languages (e.g. Python, Java, Scala etc.) o You have knowledge of Lucene-based search engines like ElasticSearch, Solr, etc. and NoSQL DBs like Neo4j and MongoDB. o You are really smart and you have some way of proving it (e.g. you hold an MS/M.Tech or PhD in Computer Science, Machine Learning, Mathematics, Statistics or a related field). o There is at least one project on your resume that you are extremely proud to present. o You have at least 4 years’ experience driving projects, tackling roadblocks and navigating solutions/projects through to completion. o Execution - ability to manage your own time and work effectively with others on projects. o Communication - excellent verbal and written communication skills, and the ability to communicate technical topics to non-technical individuals. Good to have: o Experience in a data-driven environment: leveraging analytics and large amounts of (streaming) data to drive significant business impact. o Knowledge of MapReduce, Hadoop, Spark, etc. o Experience in creating compelling data visualizations
What we do Building India's largest, hyperlocal, mobile-first application and back-end platforms that will serve over 100 million monthly active Indian local-language users, scaling to over 1 billion page views a day. Currently powers 5 billion page views a month, serving a user base of 90 million installs, spread across 800 cities in India, who consume services in 15 Indian local languages. What You'll Do: Work in cohesion with the R&D team towards building new products and enriching existing ones with ML/NLP. Desired Skills: Programming Languages: Java, Python, R. Tools and Frameworks: NLTK, Mahout, GATE, Stanford NLP suite, Weka, scikit-learn. Deep Learning: Understanding of deep learning models applied to NLP - neural networks, word embeddings, sequence learning, RNNs. NLP: Statistical NLP models, POS tagging, parsing, sequence tagging, word sense disambiguation, language models, topic modelling, NER. ML: Linear regression, logistic regression, Naive Bayes, SVMs, decision trees, random forests, boosting, bagging, HMM, CRF, LSI/LDA, clustering, unsupervised/semi-supervised methods. DS/Algo/Prob: Efficient data structures, object-oriented design, algorithms, probability & statistics, optimization methods.
About Social Frontier: Social Frontier is a comprehensive SaaS offering for automation and optimization of social media marketing channels. We help businesses increase reach & engagement on social media to maximize website traffic or application installs. Social Frontier empowers businesses to monitor and manage all their social media platforms effectively and efficiently. This means simplifying the process of running large and complex ad campaigns for the in-house digital marketer. Social Frontier arms marketing teams with intuitive technology to take control of their digital presence through sophisticated workflow automation and predictive optimization. In other words, Social Frontier makes sure that your posts and campaigns get the attention of people. And not just any people, but potential customers of your business. And not just once, but constantly and consistently. Social Frontier is funded by Growth Story, a Bangalore-based incubator which has previously funded companies like Tutorvista, Big Basket, Bluestone, Must See India, Fresh Menu, Portea Medical & Housejoy. About the Role: As a Senior Dev, you would partner closely with product management to influence and prioritize roadmaps, drive engineering excellence within the technology team, come up with architectures and designs, work closely with engineers in the team to review designs, and contribute individually to code when required. You would be expected to contribute in the following ways: Translate complex functional and technical requirements into detailed architecture, design and code. Take ownership of your module; maintain it, fix bugs and improve code performance. Work with team members to manage the day-to-day development activities; participate in designs, design reviews, code reviews and implementation. Maintain current technical knowledge to support rapidly changing technology, always be on the lookout for new technologies, and work with the team in bringing in new technologies.
Skills Required: Should have strong Computer Science fundamentals with a minimum BE/BTech degree in Computer Science from a prestigious institute. Should have worked in a company in the internet domain and should have faced non-trivial scaling challenges. Experience and expertise in building full-stack systems: front-end, web applications, back-end services and data systems. Experience in Ruby on Rails & in Big Data systems such as MongoDB preferred. Should be willing to learn & understand the domain, which is Digital Marketing. Work Experience: 4-8 years Location: Bangalore (Indiranagar) Tech Stack: Ruby on Rails, MongoDB How to Apply: Send your resume, current & expected CTC, and notice period to email@example.com to apply. Founder’s Bio: Sanjay Goel - Sanjay comes with more than 14 years of technical & 8 years of entrepreneurial experience. He handles technology at Social Frontier. Abdulla Basha - Basha is the marketing whizkid of Social Frontier. Currently he is helping generate traffic of more than 1 billion hits per month across multiple clients. Anand Rao - With more than 20 years of enterprise sales experience, Anand handles sales for Social Frontier.
We are an early-stage startup working in the space of analytics, big data, machine learning and data visualization on multiple platforms and SaaS. We have our offices in Palo Alto and WTC, Kharadi, Pune, and have some marquee names as our customers. We are looking for a really good Python programmer who MUST have scientific programming experience (Python, etc.). Hands-on experience with NumPy and the Python scientific stack is a must. Demonstrated ability to track and work with 100s-1000s of files and GB-TB of data. Exposure to ML and data mining algorithms. Need to be comfortable working in a Unix environment and with SQL. You will be required to do the following: Use command-line tools to perform data conversion and analysis. Support other team members in retrieving and archiving experimental results. Quickly write scripts to automate routine analysis tasks. Create insightful, simple graphics to represent complex trends. Explore/design/invent new tools and design patterns to solve complex big data problems. Experience working on a long-term, lab-based project (academic experience acceptable).
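The routine conversion-and-analysis scripting this posting describes can be sketched with only the standard library; the column names and values below are invented for illustration, and a real script would read a file path rather than an inline string:

```python
import csv
import io
import statistics

def summarize_column(csv_text, column):
    """Return count/mean/stdev for one numeric column of CSV text."""
    rows = csv.DictReader(io.StringIO(csv_text))
    vals = [float(r[column]) for r in rows]
    return {
        "count": len(vals),
        "mean": statistics.mean(vals),
        "stdev": statistics.stdev(vals) if len(vals) > 1 else 0.0,
    }

# Hypothetical experiment log with a made-up latency column
sample = "run,latency_ms\n1,120\n2,130\n3,110\n"
stats = summarize_column(sample, "latency_ms")
print(stats)
```

At the scale the posting mentions (GB-TB of data), the same idea would move to streaming reads or a NumPy/pandas pipeline, but the shape of the script stays the same.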
Should be able to create awesome dashboards - Should have hands-on knowledge of all of the following: > Visualizations > Datasets > Reports > Dashboards > Tiles - Excellent querying skills using T-SQL. - Should have prior exposure to SSRS and/or SSAS. - Working knowledge of Microsoft Power Pivot, Power View, and Power BI Desktop.
Who we want • B.E./M.Tech. in Computer Science / Electronics and Communication from top colleges • Minimum 4+ yrs of experience developing server applications • Strong expertise in Core Java and server-side development concepts • Experience with Web Frameworks (JSF, Spring, Wicket etc.) • Experience with REST frameworks (Spring REST, Jersey etc.) • Experience in HTTP, REST and API integrations • Experience with XML and JSON document formats and frameworks • Strong OO programming and concurrent programming skills • Strong database programming and SQL skills (MySQL, Oracle) • Experience in developing SOA-style components • Strong knowledge of Java application servers (Tomcat, JBoss) • Experience with JMS and caching desired • Experience with NoSQL technologies desired • Knowledge of Agile Development Methodology (Scrum) is a plus • Should have good unit testing skills; experience with the TDD (Test-Driven Development) methodology is a plus • Should be able to demonstrate involvement from development to deployment of a product or a big feature Your Role/Responsibilities • Actively collaborate with other team members on design, development, testing and review • Scope and size development tasks/activities • Own the task from design to deployment
SquadRun is a profitable SaaS startup that leverages the best of machines and humans to automate digital operations/business processes for enterprises. We have offices in Noida and San Francisco. This position is primarily based in Noida. We combine customised workflows, workflow automation and a distributed human talent pool of stay-at-home moms and college students, delivering guaranteed SLAs of high-quality output, speed and scale at great cost efficiency. Please see this overview deck for more context. In every business, there are digital operations/business processes that need to be executed. We are disrupting the business process outsourcing industry and in the process solving some of the most exciting data science problems. You would need to apply the best machine learning techniques to various aspects of business process automation and help businesses perform operational work with 10X efficiency (cost, speed, quality) compared to existing alternatives. For example: catalog management for commerce, content moderation for social businesses, training for AI algorithms, customer onboarding for banking and insurance etc. One of the biggest challenges lies in shaping a product with flexible modules that can come together to scale most major workflows. We’re looking for a growth-hungry early member who would like to apply data science to a real product right from the ground up. Roles & Responsibilities You would work closely with the Data Science Lead on building state-of-the-art work automation pipelines for our platform product: Workforce management to acquire, qualify, match, train and verify/quality control. A workflow engine to replicate each business process smartly with different configurations. Work automation (SquadAI) to help contractors by automating part of the workflows, and 'humans in the loop' machine learning to automate cognitive decisions once we have enough quality training data.
Facilitate data-driven experiments to derive product insights. Participate in research in artificial intelligence and machine learning applications from time to time. Background & Key traits 1+ years' experience working with data-intensive problems at scale, from inception to business impact. Experience with modeling techniques such as generalized linear models, cluster analysis, random forests, boosting, decision trees, time series, neural networks, deep learning etc. Experience using programming (Python/R/Scala) and SQL languages in analytical contexts. Experience with distributed machine learning and computing frameworks would be a plus (Spark, Mahout or equivalent). Why should you consider this seriously? We're one of the few applications of AI/data science that actually has a massive market and a business model to create a long-term valuable business rather than a short-term acquisition play! In the last one year, we have built a product, solved problems for some of the largest brands in the world and tested the platform at scale (processed 50+ million units of data). Our customers include Uber, Sephora, Teespring, Snapdeal, and the Tata and Flipkart Groups, amongst others, and we have plans to grow 10x in the next 1.5 years. We are a well-balanced team of experienced entrepreneurs and are backed by top investors across India and Silicon Valley. The platform empowers college students, stay-at-home mothers and grey-collar workers with a stable source of income for working on their smartphone (Android App Link). Our contractors earn 3x of what a typical back-office BPO employee makes. For us, this is truly impactful. Every day, we see success stories such as this one - how a single mother is sustaining herself. Empowering our contractors to be financially independent is a strong part of our vision! Compensation INR 8-12L cash + ESOP up to 4L. To Apply Download our app, go through our website, social media pages, LinkedIn profiles, blogs, etc.
Read about our hiring framework here (Must read!). Mail cover letter and resume to firstname.lastname@example.org with the subject “Jr. Data Scientist @ SquadRun Inc”. In your cover letter, tell us why you are a good fit for this role!
SquadRun is a profitable SaaS startup that leverages the best of machines and humans to automate digital operations/business processes for enterprises. We have offices in Noida and San Francisco. This position is primarily based in Noida, with some travel. We combine customised workflows, workflow automation and a distributed human talent pool of stay-at-home moms and college students, delivering guaranteed SLAs of high-quality output, speed and scale at great cost efficiency. In every business, there are digital operations/business processes that need to be executed. We are disrupting the business process outsourcing industry and in the process solving some of the most exciting data science problems. We're looking for a senior candidate to own and build our data science and platform product roadmap right from the ground up. You would need to apply the best machine learning techniques to various aspects of business process automation and help businesses perform operational work with 10X efficiency (cost, speed, quality) compared to existing alternatives. For example: catalog management for commerce, content moderation for social businesses, training for AI algorithms, customer onboarding for banking and insurance etc. One of the biggest challenges lies in shaping a product with flexible modules that can come together to scale most major workflows. We are looking for an early member who is not only a data science wizard but also has strong product chops. Sample Workflow: Roles & Responsibilities You would work closely on building state-of-the-art work automation pipelines for our platform product: Workforce management to acquire, qualify, match, train and verify/quality control. A workflow engine to replicate each business process smartly with different configurations. Work automation (SquadAI) to help contractors by automating part of the workflows, and 'humans in the loop' machine learning to automate cognitive decisions once we have enough quality training data.
Architect and build the work automation product layer right from scratch by working closely with the platform engineering team. Define the long-term data science platform product roadmap with a focus on a strong platform-layer foundation. Collaborate closely with the engineering, business operations & product teams, and leverage your expertise in devising appropriate measurements and metrics, designing randomized controlled experiments, architecting business intelligence tooling and tackling hard, open-ended problems. Facilitate data-driven experiments to derive product insights. Identify new opportunities to apply data science to different parts of our platform. Build a sharp data science team and culture. Background & Key traits 4+ years' experience working with data-intensive problems at scale, from inception to business impact. Experience with modeling techniques such as generalized linear models, cluster analysis, random forests, boosting, decision trees, time series, neural networks & deep learning. Strong communication and documentation skills. Experience using programming (Python/R/Scala) and SQL languages in analytical contexts. Experience dealing with large datasets. Advanced degree in a relevant field (preferred). Experience with distributed machine learning and computing frameworks would be a plus (Spark, Mahout or equivalent). Why should you consider this seriously? We're one of the few applications of AI/data science that actually has a massive market and a business model to create a long-term valuable business rather than a short-term acquisition play! In the last one year, we have built a product, solved problems for some of the largest brands in the world and tested the platform at scale (processed 50+ million units of data). Our customers include Uber, Sephora, Teespring, Snapdeal, and the Tata and Flipkart Groups, amongst others, and we have plans to grow 10x in the next 1.5 years.
We are a well-balanced team of experienced entrepreneurs and are backed by top investors across India and Silicon Valley. The platform provides college students, stay-at-home mothers and grey-collar workers with a stable source of income for working on their smartphone (Android App Link). Our contractors earn 3x what a typical back-office BPO employee makes. For us, this is truly impactful. Every day, we see success stories such as this one - how a single mother is sustaining herself. Empowering our contractors to be financially independent is a strong part of our vision! Compensation INR 20-28L cash + ESOP up to 30L. To Apply Download our app, and go through our website, social media pages, LinkedIn profiles, blogs, etc. Read about our hiring framework here (must read!). Mail a cover letter and resume to email@example.com with the subject "Data Science & Product Lead @ SquadRun". In your cover letter, tell us why you are a good fit for this role!
Supply Data Analyst at SquadRun, Inc. We're looking for an entrepreneurial candidate with extremely strong analytical skills and high attention to detail to join our team. Your primary efforts will focus on our contractor (supply) base, helping to make it more efficient and productive by analysing and mapping contractors' working trends, platform behaviour, etc. Your insights will be used to develop product strategy on the supply side, as well as to design and deploy productivity tools for our contractor base. You will also be responsible for analysing data related to determining rewards, driving engagement and retention, and leading new growth through insights from internal research and external data. About SquadRun SquadRun helps businesses outsource operational work to a distributed mobile workforce of college students, young professionals, housewives, etc. Tasks include: Data operations: moderating and classifying catalogs, tagging content for a consumer/social app, etc. Outbound calling operations: calling contacts to collect/update information, lead qualification, feedback surveys, etc.
Roles and Responsibilities Delineate, plan and update metrics that indicate holistic platform health and growth. Identify and map key data points that will help increase contractor efficiency and productivity. Identify and map key data points that will help promote ease of adoption and onboarding. Drive increased engagement on the platform through predictive models that forecast user journey and platform behaviour. Analyze data to determine the 'mission rewards' framework. Create data maps/systems that help match the capabilities of a contractor to the nature of work (output) expected. Derive insights based on internal metrics and external research to identify new segments for growth. Share inputs for the design of workflows suited to extracting optimum performance from the supply base. Qualifications At least 1-2 years of experience working in a fast-paced environment. Bachelor's degree in Engineering, Business Administration or Management. Ability to work with large amounts of data: facts, figures and number crunching. You will need to see through the data and analyze it to reach conclusions. Data analysts are often called on to present their findings, or to translate the data into an understandable document; you will need to write and speak clearly, easily communicating complex ideas. You must look at the numbers, trends and data and come to new conclusions based on the findings. Attention to detail: data is precise, and you have to be vigilant in your analysis to come to correct conclusions. Why should you consider this seriously? We're one of the few applications of AI/data science that actually has a massive market and a business model to create a long-term valuable business rather than a short-term acquisition play! In the last year, we have built a product, solved problems for some of the largest brands in the world and tested the platform at scale (processing 50+ million units of data).
Our customers include Uber, Sephora, Teespring, Snapdeal, and the Tata and Flipkart Groups, amongst others, and we plan to grow 10x in the next 1.5 years. We are a well-balanced team of experienced entrepreneurs and are backed by top investors across India and Silicon Valley. The platform provides college students, stay-at-home mothers and grey-collar workers with a stable source of income for working on their smartphone (Android App Link). Our contractors earn 3x what a typical back-office BPO employee makes. For us, this is truly impactful. Every day, we see success stories such as this one - how a single mother is sustaining herself. Empowering our contractors to be financially independent is a strong part of our vision! To Apply: Download our app and play through it to get a feel for what we are all about. Go through our deck, website, social media pages, LinkedIn profiles, player blog, business blog, etc. Read about what we're building and our hiring framework here. Mail a cover letter and resume to firstname.lastname@example.org with the subject "Supply Data Analyst at SquadRun". In your cover letter, tell us why you are perfect for this role. Include links to your past projects. If you write a blog, contribute to an open source project, or have tried solving an interesting problem, we want to hear about it.
The bounty app team is seeking a full-time devops engineer well versed in cloud/Linux-based server administration. Candidates should have a minimum of 2 years of relevant experience. Skills: Linux, Apache, NGINX, AWS and other cloud hosting, MySQL Server, Cassandra, ElasticSearch, RabbitMQ, and some programming/scripting knowledge in Python, Java, PHP, etc. Responsibilities: - Provide administration functions for Linux-based servers hosted on a cloud platform - Monitor systems for performance, health checks, utilization and security - Write scripts to set up routine tasks as cron jobs - Apache and NGINX web server configuration and monitoring - MySQL database server administration - Cassandra and ElasticSearch administration - Perform maintenance tasks such as DB backup and restore - Write process documentation/checklists to follow - Maintain and audit servers/services - Track vulnerabilities and apply appropriate patches and upgrades - Be on call and respond quickly to system maintenance needs Eligibility Criteria: - 2 to 4 years (minimum 2 years) of relevant experience - Fluency in administering web servers and experience in IT infrastructure - Strong knowledge of NGINX and Java application servers - Strong knowledge of Amazon Web Services - Strong knowledge of MySQL database administration - Well versed in scheduling and monitoring cron jobs - Working knowledge of Java programming and Python/shell script writing - Working knowledge of DNS, TCP/IP, DHCP - Familiar with MySQL queries and capable of maintenance tasks such as DB backup and restore - Ability to work in a fast-paced startup environment and perform well under tight schedules Please note this is not a 9-5 job; it may require working off hours - late nights or early mornings during deployments. About the Company: bounty app is the product of Nanolocal Technologies Private Limited. With bounty app, you get rewarded every time you walk into bounty partner places.
All you need to do is open the app and register. You DO NOT even have to remember to check in when you are at a specific partner place. Whenever you are in a partner place, the app automatically recognizes it and pops up to alert you that you are in a bounty rewards zone; all you need to do is tap once. This intelligent assist is based on a multitude of factors and works even if you are NOT connected to the internet - yes, even offline you can earn rewards. It is context-aware, hyper-location-aware and gets personalized over time. There is no posting on FB/Twitter when you check in, and you have absolute privacy. You check in to earn reward points that are redeemable against a host of e-gift cards with no tricky conditions! The successful candidate will be a major contributor to the development of the bounty app platform, a very innovative concept. Find out more at www.bountyapp.in
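The devops responsibilities in the posting above include scripting maintenance tasks such as MySQL backups and scheduling them as cron jobs. A minimal Python sketch of what such a backup helper might look like - the database name, backup directory and retention policy below are illustrative assumptions, not the company's actual setup:

```python
import datetime
import pathlib


def backup_command(db_name, backup_dir, now=None):
    """Build a mysqldump invocation that writes a timestamped dump file."""
    now = now or datetime.datetime.now()
    stamp = now.strftime("%Y%m%d_%H%M%S")
    dump_file = pathlib.Path(backup_dir) / f"{db_name}_{stamp}.sql"
    # Credentials are expected in ~/.my.cnf so none leak into cron logs.
    cmd = ["mysqldump", "--single-transaction", db_name,
           f"--result-file={dump_file}"]
    return cmd, dump_file


def prune_old_backups(backup_dir, keep=7):
    """Return the dump files beyond the newest `keep`.

    Timestamped filenames sort chronologically, so a plain sort suffices.
    """
    dumps = sorted(pathlib.Path(backup_dir).glob("*.sql"))
    return dumps[:-keep] if len(dumps) > keep else []
```

The command list can be handed to `subprocess.run`, and a cron entry such as `0 3 * * * python3 /opt/scripts/backup.py` (path hypothetical) would run the script nightly.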
Scienaptic (www.scienaptic.com) is a new-age technology and analytics company based in NY and Bangalore. Our mission is to infuse robust decision science into organizations. Our mantra for achieving that mission is to reduce friction among technology, processes and humans. We believe that good design thinking needs to permeate all aspects of our activities so that our customers get the best possible aesthetic and least-friction experience of our software and services. As a Principal Software Development Engineer you will be responsible for the development and augmentation of the software components used to solve the analytics problems of large enterprises. These components are highly scalable, connect with multiple data sources and implement some complex algorithms. We are funded by very senior and eminent business leaders in India and the US. Our lead investor is Pramod Bhasin, who is known as a pioneer of the ITES revolution. We have the working environment of a new-age, cool startup. We are firm believers that the best talent grounds are non-hierarchical in structure and spirit, and we expect you to enjoy, thrive in and empower others by advancing that culture. Requirements : - The candidate should have all-round experience in developing and delivering large-scale business applications, in scale-up systems as well as scale-out distributed systems. - Identify the appropriate software technology/tools based on the requirements and design elements contained in a system specification. - Implement complex algorithms in a scalable fashion. - Work closely with product and analytics managers, user interaction designers, and other software engineers to develop new product offerings and improve existing ones.
Qualifications/Experience : - Bachelor's or Master's degree in computer science or a related field - 10 to 12 years of experience in core Java programming (JDK 1.7/JDK 1.8); familiarity with big data systems like Hadoop and Spark is an added bonus - Familiarity with dependency injection, concurrency, Guice/Spring - Familiarity with the JDBC API and databases like MySQL, Oracle, Hadoop - Knowledge of graph databases and traversal - Knowledge of SOLR/ElasticSearch; cloud-based deployment experience would be preferred
123Stores is on the lookout for the kind of Software Engineer who understands the ways in which technology and the emergence of data are transforming commerce, and who's just waiting for an opportunity to build the kind of ERP solution that further consolidates 123Stores' position as a leading global, multi-channel online retailer. Our proprietary web-based ERP solution allows us to overcome complex logistical challenges, delighting shoppers every step of the way in their transactions with us. An ideal applicant for this position is passionate about technology, thinks in scale, is willing to embrace the new, the unknown and the unfamiliar, and is relentlessly committed to building the kind of robust ecosystem required to fulfill millions of orders annually! Our engineers work alongside leaders in the e-commerce industry to optimize facets like inventory management, logistics, order fulfillment, customer service systems, catalog management and finance, helping us deliver on our promise of providing shoppers with a 'WOW' experience.
http://connecttosunil.esy.es - please visit this website; it will tell you everything about me.
Couture.ai provides Artificial Intelligence as a SaaS offering for global online retailers and fashion brands. We use our state-of-the-art deep learning technology to predict the behavior of newly acquired users, which helps retailers tailor experiences for each customer even without any prior interactions with them. After integrating our SDK with initial clients, we have seen product views increase by 25% and sales conversions go up as much as 3x. A credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data and algorithms, loves to play with machine learning libraries, and is hands-on with RDBMS/NoSQL databases, big data analytics, forming insights based on data, and handling Unix and production servers. A Tier-1 college background (BE from IITs, BITS-Pilani, top NITs, IIITs, or MS from Stanford, Berkeley, CMU, UW-Madison) is a must; exceptionally bright engineers are also welcome. Let us know if this interests you and you'd like to explore the profile further.
- Predict upcoming fashion trends from market intelligence, - Implicit identification of personal style from users' images, social media & interactions, - Personalization for user purchase points and size, - Virtual styling, and - Intelligent screenless interfaces. Couture.ai provides Artificial Intelligence as a SaaS offering for global online retailers and fashion brands. We use our state-of-the-art deep learning technology to predict the behavior of newly acquired users, which helps retailers tailor experiences for each customer even without any prior interactions with them. After integrating our SDK with initial clients, we have seen product views increase by 25% and sales conversions go up as much as 3x. We are looking for a research-driven expert for the AI & Data Science team, to take our innovations to the next horizon. A credible display of innovation in past projects (or academia) and expertise with machine learning are a must. We are looking for a candidate who lives and talks data and algorithms, loves to play with machine learning libraries - TensorFlow, MLlib, SciPy, NumPy, etc. - and has experience with key deep learning algorithms. A Tier-1 college background (BE from IITs, BITS-Pilani, top NITs, IIITs, or MS from Stanford, Berkeley, CMU, UW-Madison) is a must. Let us know if this interests you and you'd like to explore the profile further.
To introduce myself, I head Global Faculty Acquisition for Simplilearn. About My Company: SIMPLILEARN is a company that has transformed 500,000+ careers across 150+ countries with 400+ courses, and yes, we are a Registered Professional Education Provider offering PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud. Our official website: www.simplilearn.com If you're interested in teaching, interacting, sharing real-life experiences and have a passion for transforming careers, please join hands with us. Onboarding Process • Send your updated CV to my email ID, with copies of relevant certificates. • Sample e-learning access will be shared, with a 15-day trial after your registration on our website. • Our Subject Matter Expert will evaluate you on your areas of expertise over a telephone conversation - duration 15 to 20 minutes. • Commercial discussion. • We will register you for an ongoing online session to introduce you to our course content and the Simplilearn style of teaching. • A demo will be conducted to check your training style and internet connectivity. • Freelancer Master Service Agreement. Payment Process: • Once a workshop (or the last day of training for a batch) is completed, you share your invoice. • An automated tracking ID will be shared from our automated ticketing system. • Our faculty group will verify the details provided and forward the invoice to our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
• Payment will be processed within 15 working days as per policy, counted from the date the invoice is received. Please share your updated CV to proceed to the next step of the onboarding process.
Job Role Develop and refine algorithms for machine learning from large datasets. Write offline as well as efficient runtime programs for meaning extraction and real-time response systems. Develop and improve ad targeting based on criteria like demographics, location, user interests and many more. Design and develop techniques for handling real-time budget and campaign updates. Be open to learning new technologies. Collaborate with team members in building products. Skills Required MS/PhD in Computer Science or another highly quantitative field Minimum 8-10 years of hands-on experience in different machine learning techniques Strong expertise in big data processing (you should be familiar with a combination of Kafka, Storm, Logstash, ElasticSearch, Hadoop, Spark) Strong coding skills in at least one object-oriented programming language (e.g. Java, Python) Strong problem solving and analytical ability 3+ years of prior experience in advertising technology is preferred
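The ad-targeting responsibility above boils down to matching a campaign's criteria (demographics, location, user interests) against a user profile at serve time. A minimal sketch under assumed field names and campaign structure - not the employer's actual schema:

```python
def matches(campaign, user):
    """Return True when the user profile satisfies every targeting criterion.

    Each criterion maps a profile field to a set of acceptable values;
    'interests' is treated as an overlap test instead of exact membership.
    """
    for field, allowed in campaign["criteria"].items():
        value = user.get(field)
        if field == "interests":
            # Any shared interest is enough to target the user.
            if not set(allowed) & set(value or []):
                return False
        elif value not in allowed:
            return False
    return True


# An illustrative campaign targeting young fitness enthusiasts in two cities.
campaign = {
    "name": "sports_shoes_blr",
    "criteria": {
        "location": {"Bangalore", "Mumbai"},
        "age_band": {"18-24", "25-34"},
        "interests": {"running", "fitness"},
    },
}
```

A real system would evaluate this against a candidate set of campaigns per ad request, with budget checks layered on top.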
Transporter.city is a platform-as-a-service for logistics that uses AI-powered deep learning algorithms to connect shippers and movers and help each manage their business.
Essential Responsibilities In this role, you will: • Be involved in the development of communications material for all Data Lake Platform evangelization initiatives. • Work with senior management, architects, engineers and other internal and external members to align execution of initiatives with the vision and strategy. • Provide leadership to the offshore team and manage the deliverables. • Participate in working groups, strategy sessions, architecture discussions, and interviews with technical and business leaders to plan, coordinate, create and execute original communications products and publications for customers. • Participate in and provide communications support for meet-ups, forums, etc. • Define, manage and report communications metrics and dashboards. • Understand and translate the GE vision, strategy and complex technical concepts and ideas into language and graphics (with real-life examples) easily understood by team members. • Bring strong interpersonal skills to facilitate working with a wide range of individuals and groups from technical, business and other diverse backgrounds. • Understand and keep up to date on emerging big data, cloud and related industry trends in addition to GE's portfolio. Qualifications/Requirements • Bachelor's degree in technology or an equivalent discipline, 10+ years of industry experience. • 5+ years of experience in the Information Technology field Desired Characteristics • Any offer of employment is conditioned upon the successful completion of a background investigation • Must be willing to travel • Must be willing to work out of an office located in Bangalore, India • 2 or more years of communications/marketing/technical sales experience in the Information Technology field, with relevant experience in Big Data and Cloud Computing • Ability to precisely convey key concepts aligned with GE's vision, strategy and roadmap.
Working knowledge, experience or a good understanding of the Big Data and Cloud ecosystem including, but not limited to, the following: o Cloud components (IaaS, PaaS, SaaS) o Application and business process development, management and consumption on the Cloud o Major Cloud service providers (AWS, Azure, Google) and their offerings o Cloud storage models (block, file, object) o Different Cloud models (private, public, hybrid) o Cloud-based solutions including open source (e.g. OpenStack) • Strong management skills • Interest and passion in emerging technologies around Big Data and Cloud. • Exceptional written and oral communication skills, with the ability to independently create content for technical and business audiences. • Strong critical thinking, writing and publishing for a diverse set of global audiences, and proficiency in different methods of communication. Proactive and creative. • Recognizes patterns and complexity in problems, extracts decomposition algorithms, and strategically plans how to execute programs by understanding how best to decompose to expose and protect against risk.
Why - You are interested in creating impact for the over 150 million users registered with us. It's that simple! Where - Kickass studio office, Bangalore, India. What - You will be responsible for creating a world-class recommendation engine for our brand new product! You will partner with various teams to create enormous impact through the use of the latest analytical tools and techniques. At Meaww, a high focus on impact and ownership gives everyone the freedom to experiment and innovate. The ability to see what your contribution does to the business is a rare experience; add to that the fact that its impact is felt by your friends and family.
We are looking for people with experience in machine learning. We work on DNNs, and if you are deeply interested you can spend time on training as well (given you are very well versed in other ML concepts). The work is exciting and we have an awesome team on board already!
Who and what of Razorpay Razorpay is a year-old gathering of folks from various walks of life. We have suave designers, magical engineers, sales hustlers, calm-as-a-rock customer champions and then some. Together we are trying to bring modern technology into the backbone of the internet, and that's online payments. We believe that someone is doing his/her job all wrong if customers have to tear their hair out over accepting payments online. In one year we have gone from nothing to a full-fledged, modern and robust online payments system. Our customers swear by our tech and UX, and we are ecstatic to see people enjoy the results of our fanatic focus on making online payments simple and accessible. We provide the smoothest payment experience that money can buy right now in India. With a marquee list of investors, cash in the bank and a low burn rate, we are here for years to stay, to lead and to demonstrate how online payments should be. And you are? Someone who thinks technology is the answer to life, the universe and everything. You don't want to hide behind the mundanity of a corporate job, a cog in a bigger scheme that's unknown. You want to own what you do and drive it to completion, with a touch of perfection befitting your zeal for your work. You swear by version control and you bow to the church of test-driven development. You love your tech, and automation is your religion. Now that you know us and we know you, let's talk about the culture you can expect at Razorpay. All things culture We treat all our employees as adults, even if they are legally not one. We think that you can do your best with the freedom to experiment, a sense of ownership of your work, licence to fail, and peers who care just as much as you about their art. We default to transparency because everyone can contribute to anything, and that's only possible if you know what's going on. We like to mingle with the community through meetups and conferences on topics as broad as daylight.
And at the end of the day we accept that we are humans, fallible to error, learning through errors together. After all, there's comedy in error too. We believe in having the best tools for the job. We are happy paying customers of GitHub, Clearbit, Zendesk and Slack, some of the most important tools in our workflow. Our people select the laptops on which they think they can do their best work. We also provide free books for constant learning, free VMs for side projects, and an in-office gym to keep you at your best, always. Hey developer, what will you get to work on? As a data scientist, you will get to: Work closely with our business & product teams to identify key questions and come up with data solutions Apply statistical and econometric models on large data sets to identify impacts, predict future performance, measure results, etc. Set up our data science and machine learning team Design, analyse and interpret the results of experiments Work on exciting challenges to improve payment flows What do we expect from you? As a data scientist we expect you to have: 2+ years of experience working with and analysing large data sets to solve problems An understanding of, and the ability to tackle, problems around data quality, clustering, dimensionality, etc. Programming - data crunching languages like R or Python would be a plus Prior experience with distributed data tools like Scalding, Hadoop, etc. A willingness to learn new technology, whatever lets you deliver the best product Apart from these, we also expect the following, but we accept that you can be an absolutely great developer without fulfilling them. So go ahead and apply even if the following aren't applicable: Strong knowledge of statistics & experimental design A few weekend side projects up on GitHub Contributions to an open source project Working knowledge of multiple languages
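The role above asks for designing experiments and applying statistical testing to large data sets. As one small illustration of that kind of analysis, here is a hand-rolled Welch's t statistic for comparing two experiment arms; the scenario and numbers are made up, and in practice a library routine such as `scipy.stats.ttest_ind` would be used instead:

```python
import math


def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    # Sample variances with Bessel's correction (divide by n - 1).
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))
```

For example, `a` and `b` could be checkout completion times under two payment-flow variants; a large absolute t value suggests the observed difference is unlikely to be noise.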
Check our JD: https://www.zeotap.com/job/senior-tech-lead-m-f-for-zeotap/oEQK2fw0
JSM is a data sciences company, founded in 2009 with the purpose of helping clients make data-driven decisions. JSM specifically focuses on unstructured data. It is estimated that 90% of all data generated is unstructured, and it is still mostly under-utilized for actionable insights, largely due to the high costs involved in speedily mining such large volumes of data. JSM is committed to creating cost-effective, innovative solutions in pursuit of highly actionable, easy-to-consume insights with a clearly defined ROI.
- Passion to build an analytics & personalisation platform at scale - 4 to 9 years of software engineering experience with a product-based company in the data analytics/big data domain - Passion for designing and developing from scratch - Expert-level Java programming and experience leading the full lifecycle of application development - Experience in analytics, Hadoop, Pig, Hive, MapReduce, ElasticSearch, MongoDB is an additional advantage - Strong communication skills, verbal and written
We are a team with a mission: a mission to create and deliver great learning experiences to engineering students through various workshops and courses. If you are an industry professional and: - See great scope for improvement in higher technical education across the country and connect with our purpose of impacting it for good - Are keen on sharing your technical expertise to enhance the practical learning of students - Are innovative in your ways of creating content and delivering it - Don't mind earning a few extra bucks while doing this in your free time Buzz us at email@example.com and let us discuss how together we can take technological education in the country to new heights.
Zebi Data India Pvt. Ltd. is a technology company in the area of Big Data & Cloud, located in Hyderabad (Gachibowli) and Visakhapatnam (IT Hub). Zebi provides Data and Analytics as a Service to Indian businesses and governments of all sizes, and is set to leapfrog Indian organizations into next-generation technologies. Zebi's Big Data platform drives up efficiencies for businesses and governments, and enhances quality of life for all Indians. • San Francisco Bay Area angels pre-Series A: US$1 million raised in Dec 2015 • Projected valuation of US$400 million by 2020; will create more than 1000 high-paying jobs • Founded by 7 IIT graduates, each with 2 decades of global experience
FlyNava Technologies is a start-up organization whose vision is to create the finest airline software for distinct competitive advantages in revenue generation and cost management. The software products have been designed and created by veterans of the airline and airline IT industry to meet the needs of this special customer segment. The software will take an innovative approach to age-old practices of pricing, hedging and aircraft induction, and will be path-breaking to use, encouraging users to rely and depend on its capabilities. We will leverage our competitive edge by incorporating new technology, big data models, operations research and predictive analytics into software products, as a means of creating interest and creativity while using the software. This interest and creativity will increase potential revenues or reduce costs considerably, thereby creating a distinct competitive differentiation. FlyNava is convinced that when airline users create that differentiation easily, their alignment to the products will be self-motivated rather than mandated. A high level of competitive advantage will also flow from the following: All the products, solutions and services will be copyrighted; FlyNava will benefit from high IPR value, including its base thesis/research, as the sole owner. Existing product companies are investing in other core areas, while our business areas are still predominantly manual processes. Solutions are based on master's theses, which need 2-3 years to complete and more time to make relevant for software development. Experts in these areas are few and far between. Responsible for collecting, cataloguing and filtering data and benchmarking solutions - Contribute to model-related data analytics and reporting. - Contribute to secured software release activities.
Education & Experience : - B.E/B.Tech or M.Tech/MCA in Computer Science / Information Science / Electronics & Communication - 3-6 years of experience Must Have : - Strong in data analytics via Pyomo (for optimization), scikit-learn (for small-data ML algorithms) and MLlib (Apache Spark big-data ML algorithms) - Strong in representing metrics and reports via JSON - Strong in scripting with Python - Familiar with machine learning and pattern recognition algorithms - Familiar with the Software Development Life Cycle - Effective interpersonal skills Good to have : Social analytics Big data
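The "Must Have" list above asks for metrics and reports represented via JSON using Python scripting. A minimal sketch of such a report; the metric names and report shape are illustrative assumptions:

```python
import json


def metrics_report(y_true, y_pred):
    """Serialize simple classification metrics as a JSON report string."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    report = {
        "samples": len(y_true),
        "accuracy": round(correct / len(y_true), 4),
    }
    # sort_keys keeps the output stable across runs, which makes diffs easy.
    return json.dumps(report, sort_keys=True)
```

A real pipeline would add more metrics (precision, recall, per-class counts) and write the string to a file or an API endpoint.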
Nextalytics is an offshore research, development and consulting company based in India that focuses on high-quality, cost-effective software development and data science solutions. At Nextalytics, we have developed a culture that encourages employees to be creative, innovative and playful. We reward intelligence, dedication and out-of-the-box thinking; if you have these, Nextalytics will be the perfect launch pad for your dreams. Nextalytics is looking for smart, driven and energetic new team members.
Develop analytic tools, working on big data and distributed systems. - Provide technical leadership on developing our core analytics platform - Lead development efforts on product features using Scala/Java - Demonstrable excellence in innovation, problem solving, analytical skills, data structures and design patterns - Expert in building applications using Spark and Spark Streaming - Exposure to NoSQL (HBase/Cassandra), Hive, Pig Latin and Mahout - Extensive experience with Hadoop and machine learning algorithms
Startup environment, Fast Paced, Exposure to Domain Experts, Great Mission.
We're busy solving some of the hardest problems while having great fun doing it. Come join us if you want to be part of a young and dynamic team working on bleeding-edge tech.