"Crediwatch is a leading fintech organization working with major national and international financial institutions, helping them in the areas of analytics, credit appraisal, risk profiling, lead generation, and early warning systems. We are looking to expand our data science team, which will work on cutting-edge platforms and help enhance Crediwatch's proprietary deep learning models across formats including text, video, images, and audio. We are looking for candidates who have a hands-on understanding of the various industry-standard ML platforms and tools needed to build models at scale."
"About Us:\nZeMoSo Technologies provides product & data engineering solutions using open-source Big Data stacks, Machine Learning, and advanced custom visualizations.\nIn the recent past, we were featured as one of Deloitte's 50 fastest-growing tech companies from India.\nOur founders have had past successes: they founded a decision-management company acquired by SAP AG (now part of the HANA Big Data stack & NetWeaver BPM), were part of the early engineering team of Zoho (a leading billion-dollar SaaS player), and bring Private Equity experience.\nMarquee customers, along with some exciting startups, are part of our clientele.\nPosition: Server-Side Engineers\nExperience: 3-5 Yrs.\nLocation: Hyderabad\nQualification: B.Tech/BE in CSE or equivalent\nSkills Required:\n•\tServer-side developers with strong server-side development experience in Java and/or Python\n•\tExposure to data platforms (Cassandra, Spark, Kafka) will be a plus\n•\tExposure to Machine Learning (TensorFlow) will be a plus\n•\tGood-to-great problem-solving and communication skills\n•\tAbility to deliver in an extremely fast-paced development environment\n•\tAbility to handle ambiguity\n•\tShould be a good team player\nJob Responsibilities:\n•\tLearn the technology area where you are going to work\n•\tDevelop bug-free, unit-tested, and well-documented code as per requirements\n•\tStringently adhere to delivery timelines\n•\tProvide mentoring support to Software Engineers and/or Associate Software Engineers\n•\tAny other duties as specified by the reporting authority"
"Job Description:\nIf you have good programming skills and want to solve complex real-world problems using artificial intelligence, machine learning, and computer vision while learning these on the job, READ ON.\nRun by IIT Kanpur alumni, AIMonk is a computer vision startup in stealth mode. We are building a ubiquitous platform for computer vision using Artificial Intelligence.\nWe are looking for an entry-level (0-2 years of professional experience) programmer with a deep interest in software engineering. This is a machine learning engineer position, but no machine learning experience is required. What we are looking for is a sharp and curious mind who gets his/her high from solving problems.\nWillingness to work in an early-stage start-up, along with humility, is another required skill-set.\nCollege pedigree doesn't matter, though it can be a good indicator of your skill level. People who went to NIT or BITS are encouraged to apply. However, there is a programming challenge below. If you have the skills to solve it, it doesn't matter where you went to college or what degree you have.\nBe careful: if you work with us once, ordinary jobs will not interest you any more, as they won't be challenging enough. The good thing is, you will learn more than what you need to land those top 0.01% interesting jobs.\n\n\nJob Perks:\nOpportunity to work with the smartest people in the country on Artificial Intelligence and computer vision.\nLearning, tons of it.\nAutonomy, respect, and the freedom to set your own work-hours, plus the opportunity to fail and learn.\nAnd of course, free beer and pizza once in a while.\n\nProblem statement:\nhttps://s3-ap-southeast-1.amazonaws.com/aimonk/SDE1-problem+statement.pdf"
"AlgonoX Technologies is hiring Data Science Engineers from premier institutes to work on some cutting-edge technologies!\n\nThis is an exciting opportunity with a creative organization in the AI/ML space with a great work culture!\n\nJob role: Engineer\n\nWork Location: Bangalore/Hyderabad/Chennai, depending on requirement.\n\nEducational qualification: Graduates from premier institutes (IITs/NITs/IIITs/BITS)\n\nExperience: 1+ years; 2017 graduates are also eligible.\n\nWe are keen on hiring someone whose skill set matches the following:\n\n1) Proven experience in Python programming, with the ability to write clean, well-documented, and tested code.\n2) Exposure to ML libraries.\n3) Knowledge of deep learning frameworks (Keras, PyTorch, Theano, etc.).\n4) Prior experience building chatbots would be an added advantage.\n5) Experience with manipulating large data sets (ETL).\n6) Willingness to research, ideate, and implement practical solutions.\n7) Passion coupled with knowledge of existing state-of-the-art ML models (nice to have).\n8) Effective communication and interpersonal skills.\n\nProficiency/Experience/Exposure to the following:\nPython, Java, Node.js, TensorFlow, scikit-learn, Keras, Apache Solr, Selenium WebDriver\n\nInterested candidates who meet the above criteria may share their resume at email@example.com"
"About ThreatLandscape\n\nThreatLandscape is building the next-gen threat intelligence, attack-surface minimization, and remediation platform for cyber security teams at enterprises and governments. With a mission to create a global brand, ThreatLandscape is currently looking for passionate, like-minded folks in the areas of Data Engineering, Scientific Computing, and Machine Learning to join us as we move to change our industry's landscape and become the go-to name for all intelligent cyber security solutions.\n\n\nWhat you will do\n\n• Work with our Data Science team in evolving our threat intelligence platform\n\n• Build scalable data computing workflows\n\n• Exploit the latest in Deep Learning and Natural Language Processing technologies\n\n\nWho we need\n\n• Bachelor's degree in Computer Science or a related discipline\n\n• Over three years' experience programming in Python, with a solid understanding of Python internals\n\n• Two years' experience with Python's scientific computing and data analysis packages such as NumPy, SciPy, and Pandas\n\n• Over a year's experience building analytics platforms using Python\n\n• Experience modeling real-world problems with machine learning for time series data\n\n• Experience with relational databases\n\n• Experience with Big Data architecture and frameworks like Hadoop, Spark, or equivalent\n\n• Familiarity with Deep Learning frameworks like TensorFlow and Keras is a plus\n\n• Exposure to open-source and cloud-specific data pipeline tools such as Airflow, Glue, and Apache Beam"
"Looking for senior data science researchers.\n\nBasic Qualifications:\n∙Bachelor's in Computer Science/Mathematics + research experience (Machine Learning, Deep Learning, Statistics, Data Mining, Game Theory, or core mathematical areas) from Tier-1 tech institutes.\n∙3+ years of relevant experience building large-scale machine learning or deep learning models and/or systems.\n∙1 year or more of experience specifically with deep learning (CNN, RNN, LSTM, RBM, etc.).\n∙Strong working knowledge of deep learning, machine learning, and statistics.\n∙Deep domain understanding of Personalization, Search, and Visual.\n∙Strong math skills with statistical modeling / machine learning.\n∙Hands-on experience building models with deep learning frameworks like MXNet or TensorFlow.\n∙Experience using Python and statistical/machine learning libraries.\n∙Ability to think creatively and solve problems.\n∙Data presentation skills.\n\nPreferred:\n∙MS/Ph.D. (Machine Learning, Deep Learning, Statistics, Data Mining, Game Theory, or core mathematical areas) from IISc or other top global universities.\n∙Or, publications in highly regarded journals (if available, please share links to your published work).\n∙Or, a history of scaling ML/deep learning algorithms at massively large scale."
"Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined >200 million end users.\n\nFor this role, a credible display of innovation in past projects is a must.\nWe are looking for hands-on leaders in data engineering with 5-11 years of research/large-scale production implementation experience, including:\n- Proven expertise in Spark, Kafka, and the Hadoop ecosystem.\n- Rock-solid algorithmic capabilities.\n- Production deployments for massively large-scale systems, real-time personalization, big data analytics, and semantic search.\n- Expertise in containerization (Docker, Kubernetes) and cloud infrastructure, preferably OpenStack.\n- Experience with Spark ML, TensorFlow (& TF Serving), MXNet, Scala, Python, NoSQL DBs, Kubernetes, and ElasticSearch/Solr in production.\n\nA Tier-1 college background (BE from IITs, BITS Pilani, IIITs, top NITs, DTU, or NSIT, or an MS from Stanford, UC, MIT, CMU, UW–Madison, ETH, or other top global schools) or an exceptionally bright work history is a must.\n\nLet us know if you are interested in exploring this profile further."
"Precily AI: Automatic summarization, i.e., shortening a business document or book with our AI. We create a summary of the major points of the original document. Our AI can produce a coherent summary, taking into account variables such as length, writing style, and syntax. We are also working in the legal domain to help reduce the high number of pending cases in India.\n\nWe use Artificial Intelligence and Machine Learning capabilities such as NLP and Neural Networks to process data and provide solutions for various industries, such as Enterprise, Healthcare, and Legal."
"We are building the AI core for a Legal Workflow solution.\n\nYou will be expected to build and train models to extract relevant information from contracts and other legal documents.\n\nRequired Skills/Experience:\n- Python\n- Basics of Deep Learning\n- Experience with at least one ML framework (such as TensorFlow, Keras, or Caffe)\n\nPreferred Skills/Experience:\n- Exposure to ML concepts like LSTM, RNN, and ConvNets\n- Experience with NLP and the Stanford POS tagger"