Good point, and you're right. Each word in the queue (all of which must come from the dictionary, except for beginWord) goes through two nested for loops that try all 26*l possible single-character transformations; the work inside the inner loop is O(1), and there are at most (n + 1) words in total. So the total time complexity should be O(n*26*l) = O(n*l). The same holds for bidirectional BFS.
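To make the counting concrete, here is a minimal C++ sketch of the two loops being counted; the function name neighbors and the variable names are my own illustration, not code from the video:

#include <string>
#include <unordered_set>
#include <vector>

// Collect every word in `dict` reachable from `curr` by changing one
// character: l positions * 26 letters = 26*l candidates per word.
std::vector<std::string> neighbors(std::string curr,
                                   const std::unordered_set<std::string>& dict) {
    std::vector<std::string> result;
    for (size_t i = 0; i < curr.size(); ++i) {      // l positions
        const char original = curr[i];
        for (char c = 'a'; c <= 'z'; ++c) {         // 26 candidate letters
            curr[i] = c;
            if (c != original && dict.count(curr))  // expected O(1) lookup
                result.push_back(curr);
        }
        curr[i] = original;                         // restore before next position
    }
    return result;
}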
@henryhardcore8584 6 years ago
It should indeed be 26*l, since each transformation changes only one character.
@dongfangshuo1521 6 years ago
b is the width (branching factor). In this example the width is 26, meaning each position can take any of the 26 letters; d is the depth. Although every word that actually gets expanded must come from the wordlist, checking whether a word belongs to the wordlist also takes time. I think this is where the bidirectional algorithm saves work: on average the depth is half of the original. The most important line in the bidirectional version is if (q1.size() > q2.size()) std::swap(q1, q2);, which guarantees that the smaller frontier is expanded first, saving a lot of overhead.
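For reference, here is a minimal C++ sketch of that bidirectional search built around the quoted swap line. It is a paraphrase of the idea under the usual Word Ladder setup, not the video's exact code:

#include <string>
#include <unordered_set>
#include <utility>
#include <vector>

// Bidirectional BFS: grow frontiers from both beginWord and endWord,
// always expanding the smaller one, so roughly 2*b^(d/2) nodes are
// visited instead of b^d.
int ladderLength(const std::string& beginWord, const std::string& endWord,
                 const std::vector<std::string>& wordList) {
    std::unordered_set<std::string> dict(wordList.begin(), wordList.end());
    if (!dict.count(endWord)) return 0;
    dict.erase(endWord);  // endWord starts in a frontier; avoid revisiting it

    std::unordered_set<std::string> q1{beginWord}, q2{endWord};
    int step = 1;
    while (!q1.empty() && !q2.empty()) {
        ++step;
        if (q1.size() > q2.size()) std::swap(q1, q2);  // expand the smaller frontier

        std::unordered_set<std::string> next;
        for (std::string w : q1) {                     // copy, so we can mutate w
            for (size_t i = 0; i < w.size(); ++i) {
                const char original = w[i];
                for (char c = 'a'; c <= 'z'; ++c) {
                    w[i] = c;
                    if (q2.count(w)) return step;      // the two frontiers met
                    if (dict.count(w)) {
                        next.insert(w);
                        dict.erase(w);                 // mark as visited
                    }
                }
                w[i] = original;                       // restore before next position
            }
        }
        q1 = std::move(next);
    }
    return 0;
}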
from collections import deque
import string
from typing import List

class Solution:
    def ladderLength(self, beginWord: str, endWord: str, wordList: List[str]) -> int:
        # Breadth-first search, level by level.
        n = len(beginWord)
        word_set = set(wordList)
        if endWord not in word_set:
            return 0
        # Initializing a queue
        q = deque([beginWord])
        step = 1
        while q:
            # Expand the whole current level before incrementing step.
            for _ in range(len(q)):
                curr = q.popleft()
                word_set.discard(curr)  # mark as visited
                for i in range(n):
                    for ch in string.ascii_lowercase:
                        temp = curr[:i] + ch + curr[i+1:]
                        if temp not in word_set:
                            continue
                        if temp == endWord:
                            return step + 1
                        q.append(temp)
            step += 1
        return 0
I changed the unordered_set to set, and it takes more time to execute the code. Why?
@HuaHuaLeetCode 4 years ago
set is O(log n), unordered_set is O(1) ... when you have fewer keys (
@jmgrn61 4 years ago
@@HuaHuaLeetCode I remember the problem includes the condition "You may assume no duplicates in the word list.", so no deduplication should be needed. Then what is the purpose of converting wordList into an unordered_set at the beginning? Is it mainly because count() and erase() are awkward to use with a vector, whereas the corresponding unordered_set operations are very convenient?
When we do dict.count(w) [line 32], it takes O(1) for an unordered_set, because unordered_set is based on a hash table. But set is based on a red-black tree, so it takes O(log N) to find an element.
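In container terms, here is a tiny self-contained illustration (my own example, not the video's code):

#include <set>
#include <string>
#include <unordered_set>

int main() {
    std::set<std::string> tree_dict = {"hot", "dot", "dog"};      // red-black tree
    std::unordered_set<std::string> hash_dict(tree_dict.begin(),
                                              tree_dict.end());   // hash table

    // Tree search: O(log N) node visits, each doing a string comparison.
    bool in_tree = tree_dict.count("dog") > 0;
    // Hash lookup: hash the key once, then an expected O(1) bucket probe.
    bool in_hash = hash_dict.count("dog") > 0;
    return (in_tree && in_hash) ? 0 : 1;
}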